Running preamp tube filament on 5v? Is it safe and not harmful to run preamp tube filaments at 5v? The tube is an ECC85. I built a tube preamp in a DIY portable guitar amp. It's very similar to Matsumin's Valvecaster with a few slight changes, so the anode voltage is only 12-13v depending on the battery charge level. It's a starved-cathode design, but I really dig the warm, slight distortion it gives to a guitar. My schematic is very similar to Matsumin's; the capacitors I believe are a little different, the gain pot is replaced with a 50k resistor, and R3 is 270k (works a lot better for some reason). I am considering this because right now I'm using an LM317, but it wastes a lot of power from the battery, and I also want to add a USB charger (yes, that's quite silly), so 5v 2A would be really useful. And if it's safe, would the output level be somewhat lower, or should I worry about more distortion from the tube, as it is a starved-cathode design? What's your advice, and perhaps you have some recommendations on chips that provide 5v with good efficiency. Again, the main part of the question is still whether this would harm tubes and how to get better efficiency for heating them. <Q> and R3 is 270k (works a lot better for some reason). <S> This speaks volumes. <S> Filaments: Using 5V on 6.3Vrms heaters isn't impossible. <S> However, the effect shouldn't be noticeable given the low anode voltage. <S> Thus your tube will likely live as long as a normal tube would. <S> You may find that it sounds more distorted than if you heated it properly. <S> Power: <S> A USB 5V charger for the filaments sounds like a bad idea, mostly because you're much more likely to get ground loops from using two discrete chargers. <S> So using a linear regulator would be the easiest solution. <S> The only other solution being to use a buck circuit to take 9V down to 6.3Vdc <S> -> 5Vdc (whichever you happen to want to use). 
<S> This is about the time where someone points out that they're using a 9V battery and that means that you won't get ground loops. <S> Except that every foot pedal design I've seen (and I've seen many) has a 9V wall jack and a switch so that it doesn't need a battery. <S> So when you have the wall wart plugged in, you'll likely hear ground loop hum. <S> When you're on battery, you wouldn't hear it (assuming that you were in fact using the 5V USB charger). <S> If you want my honest opinion, which you're getting anyways, don't limit this design to a battery. <S> There's a reason we don't run these tubes from batteries (battery tubes do exist; these are not them): they take so much power to heat properly. <S> Take the time to find a good high-power 9V wall wart (think 1.5A-2A). <S> Create a buck circuit to drive the heaters at 6.3Vdc <S> (heck, go nuts and drive them at 6.3Vrms; <S> some people swear that AC sine-wave heating sounds best). <S> And while you're at it, go read up on tube amp designs, like those on this UK website. <A> Although I haven't tried it, I don't think a 12 volt heater would get hot enough on 5 volts to allow the tube to work. <S> However, the 12AU7 has a center-tapped heater, allowing it to officially be used either on 12.6 volts, with the two sections in series, or on 6.3 volts with the sections in parallel. <S> The 6.3 volt connection would probably work as well on 5 volts as the 12.6 volt connection does on 9 volts. <A> Thank you all for your help! <S> I've decided to stop playing and wasting energy with 6.3v heaters. <S> Going to go for a 12AT7, which is a very, very similar tube to the current ECC85 inside the amplifier. <S> Found a very good deal on a NOS Telefunken ECC81. <S> The ECC81/12AT7 can be heated directly from a 12v power supply, <S> so no more hassle with the energy-wasting LM317. <S> I believe this is a far better solution than going through an LM317. <S> Especially in efficiency! 
<S> If I use an LM317 to heat the ECC85, <S> I'm using more than 300mA to heat the tube, <S> but if I use the ECC81 <S> it drops in half: only 150mA. <S> I don't think that using an LM317 is a bad idea on the whole, <S> but when it comes to portable designs and efficiency, <S> well, it really is quite terrible at that
You'll produce less heat, which means your cathode is more likely to pit and degrade.
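The LM317 complaint above can be quantified with a quick back-of-the-envelope sketch in Python. The 12 V supply, the ECC85's ~6.3 V / 300 mA heater, and the ECC81's 12.6 V / 150 mA figures come from the thread; everything else here is simple arithmetic, not a measurement:

```python
# A linear regulator passes the full load current, so its efficiency is
# just Vout/Vin; everything above that is dissipated as heat in the chip.

def linear_efficiency(v_in, v_out):
    return v_out / v_in

# LM317 dropping a 12 V battery to 6.3 V for the ECC85 heater (~300 mA):
eta_linear = linear_efficiency(12.0, 6.3)
p_heater = 6.3 * 0.3            # ~1.89 W actually delivered to the heater
p_wasted = (12.0 - 6.3) * 0.3   # ~1.71 W burned in the regulator

print(f"LM317 efficiency: {eta_linear:.0%}")      # ~53%
print(f"wasted in regulator: {p_wasted:.2f} W")

# Heating a 12.6 V / 150 mA ECC81 straight from the 12 V battery needs no
# regulator at all, which is why the asker's switch makes sense.
```

A buck converter in place of the LM317 would typically land somewhere around 85-95% efficiency, but that range is a general rule of thumb, not a figure stated in the thread.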
How does an inductor block frequency? I know the basics of self-inductance, and how an inductor creates a voltage/current that opposes an increase or decrease in current. I can't find anything on how it reacts to/blocks certain frequencies. Can anyone tell me how inductors select or block certain frequencies? <Q> An ideal inductor's impedance is given by: $$Z_L = j\omega L = j2\pi f L$$ <S> This means that its impedance (which can be interpreted as "current opposition") is proportional to signal frequency. <S> Therefore it acts as a short circuit to DC current (\$f = 0\$, so \$Z = 0\$), and its impedance increases linearly with frequency. <S> The larger the inductance \$L\$, the "faster" this increase goes. <S> If you're interested in the physical background of this, such a behavioral model can be derived from Maxwell's equations, particularly from the third equation (Faraday's law of induction). <S> An example application of using inductors in filters is given in the following circuit: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> This is a low-pass filter, with cut-off frequency given by $$f_c = \frac{1}{2\pi\sqrt{LC}}$$ <S> Other filter types can be obtained by exploiting the same properties (high-pass from swapping L and C, bandpass with other topologies, etcetera). <A> If you put a voltage across an inductor, it is going to take that energy and start to 'wind up' the magnetic field around it. <S> The magnetic field can't wind up forever. <S> It 'winds up' with the voltage being proportional to the derivative of the current: $$V_L = L\frac{dI}{dt}$$ <S> This also means that higher frequencies will not pass through the inductor, because a sudden change will be impeded by the magnetic field. <A> An inductor does not "select or block certain frequencies". <S> The voltage across it is the time-derivative of the current through it, multiplied by a constant (the inductance). 
<S> The time-derivative of sin(2 pi f t) is 2 pi f cos(2 pi f t), so <S> you get a phase shift of 90 degrees and a frequency-dependent factor of 2 pi f. <S> Taken together, one calls this a complex impedance of 2 pi j f L (j being the engineers' name for sqrt(-1), as i is already taken for currents). <S> This just favors admitting higher frequencies but does not single out any frequency. <S> Resistors are always involved in real circuits (any non-superconducting elements have finite resistance). <S> But to get actual selectivity with passive components, you need to work with capacitors as well. <S> If you put an inductor and a capacitor in series, there will be one frequency where the voltages across the capacitor and the inductor, given the same current, just cancel out. <S> This effectively forms a short circuit at this frequency, admitting arbitrary currents for a small applied voltage, basically limited only by parasitic resistance. <S> The individual voltages across the inductor and capacitor will still be large, requiring them to be rated accordingly. <S> Similarly, if you put an inductor and a capacitor in parallel, there will be one frequency where the currents through the capacitor and the inductor, given the same voltage, just cancel out. <S> This will effectively block currents at this frequency. <S> Again, the inductor and capacitor will still have to withstand the current through the individual components even though the net current is almost zero. <S> Filters specialized for a single frequency (either admitting or blocking it) can be made very selective with few components, as long as those components are close to ideal capacitors/inductors without large parasitic resistances. <S> However, crossover networks (like those used in loudspeakers) have a whole passband and stopband rather than single frequencies to be let through or blocked. 
<S> In this situation you'll need more components for sharper transitions, regardless of the components' quality. <S> For HF filtering purposes, the inductors are the most conspicuous components and the ones used for tuning the filters (by screwing the ferrite core further in or out). <S> But they won't do the trick without accompanying capacitors either.
To actually single out frequencies, you need to combine the inductor with other components.
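The two formulas above can be made concrete with a small numerical sketch; the values for L and C below are arbitrary illustrations, not taken from any of the answers:

```python
import math

def inductor_impedance(f, L):
    """Magnitude of an ideal inductor's impedance, |Z| = 2*pi*f*L."""
    return 2 * math.pi * f * L

def lc_resonance(L, C):
    """The single frequency where L and C cancel: f = 1/(2*pi*sqrt(L*C))."""
    return 1 / (2 * math.pi * math.sqrt(L * C))

L = 10e-3   # 10 mH
C = 100e-9  # 100 nF

print(inductor_impedance(0, L))    # 0 ohms: a short circuit at DC
print(inductor_impedance(1e3, L))  # ~62.8 ohms at 1 kHz
print(inductor_impedance(1e6, L))  # ~62.8 kohms at 1 MHz: more opposition, but no selectivity

print(f"{lc_resonance(L, C):.0f} Hz")  # ~5033 Hz: the one frequency the LC pair singles out
```

This illustrates the answers' point: the inductor alone just scales linearly with frequency; only the LC combination singles one frequency out.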
What are the valid capacity ranges of lithium-ion 18650 batteries? I'm seeing wildly varying capacities that are seemingly fake on 18650 cells, such as these "9,800 mAh" ones on eBay, at ~$1 / cell. However, 'genuine' cells from Panasonic and similar name brands seem to top out around 3,500 mAh or so, but cost more like $8 / cell. How can I tell true capacity apart from the lies? <Q> However, let's for a moment think about where these fake cells might come from: <S> cheap companies that have little expertise in bleeding-edge battery technology, or contracted factories that just leave the machines running for a while after they produce the parts for high-end battery companies. <S> In the first case you can be sure that the real capacity is on a much lower level. <S> In the second case the capacity is at most at the level of the best competitors on the market. <S> In conclusion: anything above the well-established brand names is a blatant lie. <S> Anything at their level might be a lie or could be true; there's no way to figure it out other than to measure it. <A> If the cell is actually genuine (no guarantee about that), then it should follow the datasheet. <S> Note that genuine and unmodified 18650 cells are not generally sold through legitimate distributors such as Digikey or Avnet, probably for liability reasons. <S> Most of the ones you see with unknown brand names (anything with 'fire' in it, IME) will have a fraction to a small fraction of the claimed capacity. <S> Maybe 1000mAh rather than the 4000 or 6000 or whatever they are claiming this week. <S> The crap ones will be significantly lighter than genuine cells, and probably made with inferior materials internally. <S> If you're lucky (and they claim protection), <S> they'll have short-circuit protection polyswitches to reduce the chances of drama. <S> Undervoltage protection is also possible, but less common. <S> Many of the folks selling these online on eBay, Aliexpress etc. 
are shady criminals to begin with, in that they're lying about the contents of the package and dropping it in airmail. <S> An extremely dangerous practice. <S> There are websites (probably flashlight-related) that do some testing, and you will have a better chance if you follow their recommendations. <S> You can also find some cells with protective circuits added in distribution. <A> At least these are fireproof: <S> These... maybe not. <S> Source: <S> fake 18650 teardowns. <A> I bought some 9800 mAh Li-ion 18650 batteries on eBay and tested them with my genuine SkyRC charger/tester. <S> Maximum capacities were between 990 and 1080 mAh each, with a sample set of four cells. <S> Yes, I was hoping for 2500 mAh, but for two dollars you get what you pay for. <S> Where weight or size is not an issue, you are correct; the liars offer cheap energy capacity, even at 10% of the advertised capacity. <S> All the usual caveats apply when purchasing substandard products, so good luck. :)
Obviously when you already have one in your hands, you can measure its true capacity.
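Measuring true capacity is just coulomb counting: integrate the discharge current over time until the cell reaches its cutoff voltage, which is what a charger/tester like the SkyRC mentioned above does internally. A minimal sketch, with made-up sample data for illustration:

```python
def capacity_mah(current_samples_a, dt_s):
    """current_samples_a: discharge current readings in amps, one every
    dt_s seconds. Returns accumulated capacity in mAh (1 mAh = 3.6 A*s)."""
    amp_seconds = sum(current_samples_a) * dt_s
    return amp_seconds / 3.6

# A constant 0.5 A discharge lasting 2 hours (7200 one-second samples):
samples = [0.5] * 7200
print(capacity_mah(samples, 1.0))  # ~1000 mAh -- "9800 mAh" indeed
```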
Zynq - Configuring SPI clock to idle high I am trying to use the SPI0 component of a Zynq XC7Z010 to read data from a 12-bit rotary encoder which uses an SSI protocol. I have a small example project set up in Vivado which enables the SPI0 to use EMIO ports and sets the pins I want to use. I also have the xspips driver working and am able to receive data from the encoder. The problem I am facing is that I could only set up a clock signal which is idle low, while the encoder expects an idle-high clock. Due to this, the first bits received are unusable, and I had to do some bit-manipulation to recover them (the encoder starts repeating the same data again after the 12-bit transmission). With an oscilloscope I verified that the clock signal sent out is always idle low, whether or not the CLK_ACTIVE_LOW option is set for the SPI, so I concluded that that option only affects how the Zynq interprets the clock signal. How can I have a clock signal that is high by default? Do I have to manually invert it in the Vivado-generated VHDL wrapper, or is there a simpler solution? Measured clock signal for a 2-byte transfer when ACTIVE_LOW is enabled: and when it is disabled: <Q> Your first oscilloscope trace (ACTIVE_LOW enabled) seems to show the clock going high-Z when idle. <S> Note the slow decay. <S> Try adding a fairly weak pull-up resistor to the clock line, maybe 10K, and I think you'll get what you want. <A> Try using something like this. <S> It should work as a drop-in fix ;) <A> I also had some trouble with the Zynq SPI peripheral, and as DoxyLover pointed out, it has to do with the clock line being set to high-Z in Linux when data isn't being transferred. <S> I solved this by bringing out the tristate pins for the clock line and manually feeding them into an IOBUF in my HDL, where instead of using the Zynq SPI tristate line I tied it low so the clock is always an output (actively driven). <A> I believe the tri-state answer is key here. 
<S> Routing the SPI to EMIO exposes a lot of signals, most of which aren't needed in a given application. <S> This includes tristate signals for just about everything. <S> This is for two reasons. <S> First, the SPI can be configured as a master or a slave. <S> Second, it can be configured to work with multiple masters. <S> Both of these require the IP to tri-state some signals when not in use. <S> To avoid tri-stating the clock, simply expand the signals and don't use the tri-state control lines.
If you're losing bits because your clock source is active-high when you need an active-low clock, the logical answer would seem to be to add an inverter inline between your clock source and the clocked device that needs the inverted clock waveform.
Digitally Pressing a keyfob button with a microcontroller For a project I'm working on, I need to press a car keyfob button with a microcontroller. So I soldered two wires to the circuit thing on the keyfob so I can send signals to it. This is what it looks like. This is what I've tried so far: 1) I've tried writing 3.3V to one of the wires I soldered with a netduino 3.3V digital I/O pin. I tried it by directly connecting it to the digital output port, and also through 100 Ohm, 2.35 kOhm, 4.7 kOhm, and 9.4 kOhm resistors. 2) I've tried shorting the cables to each other, and that didn't do anything either. I have also made sure the keyfob and the netduino had a common ground. I have no idea why it doesn't work, especially when I short it. When I take the rubber part of the keyfob and press it against the circuit, it works every time. EDIT: I also know the keyfob isn't fried, because when I power it with the netduino (3.3V) or the lithium battery (3V) and press the circuits with the rubber, it works just fine. So the keyfob is still good. Does anyone have any ideas? Thank you so much! -Phil <Q> A very simple switch hack that works with mp3 players uses a simple NPN transistor switch. <S> I don't see why it shouldn't work with a keyfob. <A> You can either: 1) <S> Reverse-engineer how the button is connected. <S> For example, if one side is connected to the ground of the lithium battery, then you could simply place an NPN or NMOS across the button contacts. <S> Connect the grounds of the key and the netduino together. <S> Maybe also replace the key's battery by connecting the +3 V side to the 3.3 V from the netduino. <S> Switch the NPN or NMOS on/off from the netduino via a 10 kohm resistor (not needed when using an NMOS). <S> 1a) <S> When one side of the button is connected to +3V, use a PNP or a PMOS instead. <S> 2) If it's not so clear, or not connected like in 1) or 1a), <S> you could use an optocoupler to do the switching. 
<S> The input of the optocoupler is simply an LED which you can switch on/off from the Netduino. <S> The output of the optocoupler has a transistor that starts to conduct when the LED is on. <S> So you can simply connect the optocoupler's output to the key's button contacts. <A> Use a solid-state relay like this (just one of many types you could use): it's the G3VM-41AY1 from OMRON, and quite readily available for less than £2. <S> Internally they convert light to a voltage to drive two back-to-back FETs, like this: <S> Make sure you pick one with low capacitance across the output MOSFETs. <S> Omron make them with very low capacitance (less than 10 pF).
Optionally you can replace the 3 V lithium battery supply with the 3.3 V supply from the Netduino (9 out of 10 times that simply works).
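For the optocoupler route described above, the only sizing needed on the Netduino side is the LED resistor. A sketch with assumed values: a 3.3 V GPIO, roughly 1.2 V LED forward drop, and ~10 mA LED current are typical for a 4N35-class part, but check the datasheet of the actual optocoupler:

```python
v_gpio = 3.3   # Netduino output high level
v_led = 1.2    # assumed optocoupler LED forward voltage
i_led = 10e-3  # assumed LED current for reliable switching

r_led = (v_gpio - v_led) / i_led
print(f"R_led ~ {r_led:.0f} ohms")  # ~210 -> a standard 220 ohm part is fine
```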
Reducing give in a bi-polar stepper motor axle I have the NEMA-11 bi-polar stepper motor ( http://www.active-robots.com/3322-0-28sth32-nema-11-bipolar-stepper-with-100-1 ). The issue is that I am able to turn the axle about 5-8 degrees without applying much force. This is a problem because what I am connecting it to can sometimes pull lightly, causing the position to be inaccurate. I am new to stepper motors. What can I do to reduce this? <Q> What you describe is called 'backlash'. <S> The data sheet for the motor shows that it has a gearbox output, with a fractional-degree step angle. <S> As such, it is unlikely you are moving the motor itself; all of the movement will be due to the tolerances of the gear teeth. <S> Preloading the output shaft with a spring (for non-continuous rotation) or a torque motor should remove most of the effect of backlash. <S> An alternative is to move away from stepper motors and use a servo with an absolute encoder on the output shaft, which will drive the motor such that the output shaft is always maintained in the wanted position. <A> I'm going to guess that, in the picture in the link, you are talking about the right-hand shaft. <S> This is the motor shaft, and you should not connect to that. <S> I believe you'll find that the left-hand shaft is much "stiffer" and you should connect to that one instead. <S> When not energized, stepper motors display what is called "detent torque", which can be quite low, and I suspect that is what you are dealing with. <S> The motor (on the left) drives a 100:1 gearbox on the right, and this produces an output with 1/100 the motion but 100 times the force (torque). <S> Backdriving the gearbox output should be pretty near impossible with finger pressure. <S> If I am wrong, and you are turning the right-hand shaft easily, you have a badly defective gearbox. <S> As for the motor shaft, try driving it with an appropriate driver at zero step rate, and you should get much more resistance to motion. 
<A> Backlash is a common problem in motion systems using gears or belts or pulleys. <S> A typical case might be a CNC milling machine where the user notices that when the direction is reversed, the cutter doesn't move until all the backlash has been taken up. <S> This causes cutting errors. <S> Many systems have built-in backlash compensation, and this approach may be good enough for your application. <S> To use backlash compensation, your program needs to detect changes in the direction of the motor and make a step move of the correct amount to take up all the slack. <S> The exact number of steps would be determined by trial and error or measurement.
If you set the motor oscillating back and forth by the backlash amount, the gearbox shaft should be just on the edge of moving.
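The compensation described above can be sketched in a few lines: on every direction reversal, command extra steps to take up the slack before the real move. The backlash figure below is hypothetical; you would find the real one by trial and error or measurement on the actual gearbox:

```python
BACKLASH_STEPS = 12  # hypothetical slack, in motor steps; measure yours

class BacklashCompensator:
    def __init__(self, backlash_steps):
        self.backlash = backlash_steps
        self.last_dir = 0  # +1, -1, or 0 before the first move

    def steps_to_command(self, requested):
        """Steps to actually send to the driver: add the slack on reversal."""
        if requested == 0:
            return 0
        direction = 1 if requested > 0 else -1
        extra = self.backlash if self.last_dir and direction != self.last_dir else 0
        self.last_dir = direction
        return requested + direction * extra

comp = BacklashCompensator(BACKLASH_STEPS)
print(comp.steps_to_command(100))  # 100: first move, no compensation
print(comp.steps_to_command(-50))  # -62: reversal, 50 + 12 slack steps
print(comp.steps_to_command(-10))  # -10: same direction, no compensation
```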
RC filter on ADC voltage reference pins: overkill or necessary? I've been scouring the web for circuits suitable for reading RTDs using ADCs. In one document from TI it is suggested that the following filter be placed on the inputs of the voltage reference pins (REFP0 and REFN0), matching the filter on the analogue inputs, to reduce noise: I haven't seen this recommendation in any other design and was wondering if it is really necessary? My second question: suppose an RC filter would in fact help on the voltage reference pins. I don't understand why the REFN0 pin also needs a filter? Wouldn't it be better if I connected it directly to GND and removed the R and C circled in red? <Q> That RC is a good idea. <S> It's worth having, especially if you will be operating in an environment with EMI. <S> I suspect (although the datasheet doesn't say so explicitly) that the differential REF pins are connected (through a mux) to an InAmp. <S> One problem with InAmps is that their common-mode rejection at higher frequencies is poor. <S> An InAmp can also rectify high-frequency noise at its inputs, and that manifests itself as a DC offset. <S> To deal with that, low-pass filters are added in front of the InAmp inputs to cut the high-frequency components. <S> This is similar to what you see in fig. 5-24 of A Designer's Guide to Instrumentation Amplifiers. <A> A filter is certainly a good idea. <S> The converter is a delta-sigma ADC, most likely one based on a switched-capacitor topology. <S> In this case the reference is sampled onto a capacitor every clock cycle and should therefore have low impedance, so it is good to have a capacitor. <S> Due to the sampling process, noise folding occurs, and the filter reduces the noise bandwidth, which is another reason to add the capacitor. <A> The reference filter is not strictly necessary, but leaving it out may introduce more noise into your measurement. 
<S> If you take it out and the ADC's measurement becomes too noisy for the requirements of your design, then put it back in. <S> I've used this part without the ratiometric current sensing, but that design used regular voltage references with no filtering. <S> The reference input current is 30nA, so it is high impedance (this assumes you also keep it within the spec limits, low end: AVSS − 0.1 V, high end: REFP − 0.5 V, or you'll hit the protection diodes). <S> And if you have your RTD on a cable, then I would definitely include the filter.
Personally I would put the filter in, lower measurement noise is always better.
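The filter's effect is easy to estimate: the RC sets the -3 dB corner that limits the noise bandwidth seen by the sampled reference input. The component values below are illustrative, not taken from the TI document:

```python
import math

def rc_cutoff_hz(r_ohms, c_farads):
    """-3 dB corner of a first-order RC low-pass: f = 1/(2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohms * c_farads)

# e.g. 1 kOhm with 1 uF passes only sub-audio noise into the REF pin:
print(f"{rc_cutoff_hz(1e3, 1e-6):.1f} Hz")  # ~159.2 Hz
```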
Good approaches to implement more than one time-critical function using a microcontroller? What is the philosophy or approach taken to implementing highly time-critical functions in microcontrollers, if there are any? I am working on a project involving outputting a precise square-wave waveform of varying frequency. I have done this using a timer and an interrupt function. However, even to implement this correctly, I had to calibrate an offset for the number of clock cycles taken during the interrupt service routine. I'd imagine this precision would be disturbed by having another such waveform running alongside (say the frequency needed to be changed at the exact same time). Having a microcontroller for every such time-critical function seems wasteful. Take another example: implementing a clock (as in hh:mm:ss) function. I can't imagine that every high-level microcontroller/computer has a dedicated real-time clock chip solely to keep track of the time. However, I find it hard to imagine it being accurately measured using the core processor, which is busy servicing a plethora of functions that come at asynchronous intervals in the meantime. I'd imagine the time counting would have offset errors that change depending on the functions that are running. Is there a design process or approach to containing or giving a tolerance to the precision achievable? Or does anyone have any pointers or suggestions on where I could find more information regarding this? <Q> To output precise square waves, use the hardware. <S> Most microcontrollers have PWM generators built in that can do this. <S> You set the period and on-time in clock cycles, and the hardware does the rest. <S> To change to a new frequency, write the new period into the period register and half the period into the duty-cycle register. <S> As for a real-time clock losing time due to other load on the processor, it doesn't work that way unless it is very poorly written. 
<S> Generally the hardware would be used to create a periodic interrupt that is some multiple of seconds, and the firmware divides further from there. <S> This works regardless of how busy the processor is, since the interrupt runs whenever it needs to. <S> As long as the interrupt routine takes a small fraction of the overall cycles, most of the processor is still applied to the foreground task. <S> There are also ways to keep time by polling at somewhat unknown intervals. <S> As long as the polling routine runs often enough that whatever counter is used doesn't wrap between runs, no time is lost. <A> The keyword here is "hardware support". <S> For anything serious you'll need supporting hardware in the µC. <S> The most common integrated peripheral is timer circuitry, which runs relatively precisely and without interference from other CPU operations. <S> Building on that, you can have many functions executing with medium-term timing as precise as your controller's clock source. <S> But: as you may have already experienced, besides medium- or long-term accuracy, there is also always timing jitter involved in software handling of hardware events (including things like timer overflow). <S> This is caused by the different possible states of execution at the time an event occurs, which result in varying delays until the actual response to the event can happen. <S> Hence, the bottom line is: for anything with high-speed or near-zero-jitter requirements, hardware support is essential. <S> Many hardware peripherals are included in most µCs, like UARTs etc., and <S> the more powerful and costly the µC is, the more supporting hardware is usually built in. <S> If your µC does not provide the hardware you need, you will indeed have to consider external, dedicated hardware for the task. <A> All microcontrollers have timers/counters specifically created to count and time events. <S> Aside from that, this really is a very broad question. <S> So there is no good answer. 
<S> The only true answer is experience. <S> Try it, profile it, stress it, fix it. <S> You have to identify areas of code with high usage. <S> If 20% of the software runs 90% of the time, every instruction removed improves performance. <S> Good design has always balanced hardware, software and memory. <S> This applies to all microprocessors, but especially microcontrollers. <S> Max out or inefficiently use any one of them and you will have a poor product. <S> As silicon densities have increased, more and more features are included in the hardware of microcontrollers. <S> But more features means more expectations. <S> Double the onboard memory and <S> you will add some feature which uses it. <S> All ISRs have overhead, which depends on the registers used by the ISR. <S> If the latency to save the machine state is significant compared to servicing the ISR, your design may not be scalable for highly time-critical functions. <S> Hence the general consensus of the answers: use hardware. <S> The use of software interrupts can decrease ISR machine-state bloat:

// Timer0 ISR
Temp = Temp + 1
if (Temp == 150) call Inc_Seconds()

<S> All registers for Inc_Seconds() must be pushed, even though they are only used once every 150 cycles.

// Timer0 ISR
Temp = Temp + 1
if (Temp == 150) _Software_Interrupt
...
// Software_Interrupt ISR
call Inc_Seconds()

<S> Now the latency hit only occurs once every 150 cycles. <S> If you implement a real-time clock in hh:mm:ss, does it matter if it is 50ms off? <S> No person would detect the error. <S> This certainly is not a hard real-time concern. <S> As for events which must occur at the same time: must they? <S> If they must, then the hardware design must take care of it. <S> Otherwise, some software compromise must take place. <S> If you can't set two bits at one time, then set one bit, and set the other on the next instruction. <S> Accuracy: one clock cycle on RISC processors. 
<S> I'd argue that was good enough. <A> For a square waveform you should use a PWM peripheral that is PLLed to your XTAL, using some kind of counter to know when to cycle (for setting the frequency). <S> Every datasheet will tell you how to do this :) <S> For keeping time, yes, you will need an RTC to do it close to accurately, unless you go to assembler and author the opcodes <S> so you know by hand the exact execution timing of each instruction in any execution path. <S> It will probably also shed new light on the tried and true 'goto is considered harmful' statement.
You have the hardware keep a count, and whenever you get around to updating the clock, you update it based on the total number of elapsed ticks. Do as much as you can with hardware, especially for highly time critical functions.
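That tick-counting idea can be sketched in a few lines. This is Python standing in for firmware logic; the hardware counter and the 1 kHz tick rate are assumptions for illustration. The point is that the clock is derived from the total elapsed ticks, so a late update loses nothing:

```python
TICKS_PER_SECOND = 1000  # assumed hardware timer rate

class SoftClock:
    """Derive hh:mm:ss from a free-running tick count, not from when the
    update code happens to run."""
    def __init__(self):
        self.total_ticks = 0  # in firmware: read from a hardware counter

    def hw_tick(self, n=1):
        self.total_ticks += n  # the hardware increments this regardless of CPU load

    def hhmmss(self):
        s = self.total_ticks // TICKS_PER_SECOND
        return s // 3600, (s // 60) % 60, s % 60

clock = SoftClock()
clock.hw_tick(3_723_000)  # 1h 2m 3s worth of ticks, processed in one late lump
print(clock.hhmmss())     # (1, 2, 3) -- no time lost to the delayed update
```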
Safely pull 5V to GND on 3.3V GPIO I want to control a 5V relay board with an ESP8266 (which operates on 3.3V). Schematic of the board: IN2 is normally at 5V and must be pulled to 0V to enable the relay. I saw people using an NPN transistor like this (second image; the relay board has the same pins but is not the Keyes_SRly) and also people connecting IN2 directly to a GPIO pin. Which one is safer/the right way? <Q> The ESP does not have 5V-tolerant GPIO. <S> If yours does, the transistor method doesn't hurt either. <S> The transistor will work the same way on both boards, with a slight difference in current draw if you don't resize the resistor. <S> So in both cases, the use of a transistor to control the optocoupler is preferred. <S> At a few cents for a small-signal transistor like the 2N3904 or 2N2222 and a resistor, you ensure your five-dollar ESP doesn't fry itself. <S> The trade-off is space, but a TO-92 and a 1/4-Watt resistor are tiny. <S> There is really no downside to a transistor. <S> Update: There is some discrepancy between the schematic shown and the module shown. <S> They are not the same, maybe. <S> The first schematic has an active-low optocoupler setup. <S> Boards like that can sometimes be powered by a different voltage than the signal voltage. <S> The Keyes_SRly pictured is simpler: no optocoupler (i.e. no isolation). <S> Its schematic is supposedly: <S> In which case the transistor base is directly broken out. <S> This can be directly connected to a simple GPIO. <S> You need to figure out which one you have. <S> The transistor setup shown won't work on the simpler Keyes_SRly relay module. <A> Your schematic is wrong: the Keyes SRly relay module does not have an opto-isolator (U3). <S> It has the transistor (Q3) and the resistor (R6), and the relay etc. <S> Try googling "keyes SRly relay" and you can easily find the schematic. <S> You can connect this directly to an output pin of the ESP8266. 
<S> There is no need for opto-isolators or extra transistors. <A> This is probably going to work for you. <S> It is similar to your second circuit. <S> When the GPIO is high, it turns on the transistor, which activates the relay. <S> When the GPIO is low, the base of the transistor will be around 0V, so you don't have to worry about the 5V getting back into the wifi module. <S> The diode helps dissipate the magnetic field in the relay when it is turned off (and prevents damage to the transistor). <S> simulate this circuit – <S> Schematic created using CircuitLab
You may have a breakout board that includes 5V-tolerant GPIO; in that case you can connect it directly, but if you don't, the transistor method is better.
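Sizing the base resistor for the NPN suggested above is one line of arithmetic. Assumed values: a 3.3 V GPIO high level, ~0.7 V base-emitter drop, and ~1 mA of base current, which comfortably saturates a 2N3904-class transistor for the few mA an optocoupler input needs:

```python
v_gpio = 3.3   # ESP8266 output high
v_be = 0.7     # typical NPN base-emitter drop
i_base = 1e-3  # assumed target base current

r_base = (v_gpio - v_be) / i_base
print(f"R_base ~ {r_base:.0f} ohms")  # ~2600 -> a standard 2.2k or 2.7k works
```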
What does "Piggy" mean in this Schematic? What does the word "Piggy" mean in this schematic diagram? I found some sections of a schematic where a part is denoted as "Piggy". What does it mean? <Q> The block might be "piggybacked". <S> A piggyback board is an extension or daughter board. <S> Thus in this case it means that the V35/X21 module is located on a daughter board. <S> Here is an example: <A> For example, Bluetooth uses 2.4GHz, which is really sophisticated stuff and could easily need more design work than the entire rest of the board. <S> So there are small Bluetooth modules the size of a postage stamp available, which can be soldered onto your board. <S> They need some power and offer a serial interface, which is quite handy. <S> I have no idea what exactly this device on your schematic is, but the word piggy, the red box (and maybe the word serial) <S> make me guess that it's something like this. <A> I believe "Piggy" is referring to a "piggyback board", sometimes called a daughter board. <S> What I think the schematic means is that the serial interface outlined in the red dashes is part of a piggyback board connected to the main circuit. <A> Most likely this refers to and is derived from the expression "piggy back". <S> Meaning, of course, to ride on the pig's back. <S> In more modern terms you may give a child a "piggy back" ride, whereby they climb on your back and get a ride. <S> In the context of a circuit board environment, a piggy-back component or module is one that mounts onto the main board and provides some function when installed. <S> You could even think of that component as "taking a ride there". <S> There are even some instances in electronics where some components get installed by reworking the board after it has been built. <S> Sometimes the reworked components need to be mounted on top of other components in order to gain access to solder connection points. 
<S> Such components are commonly referred to as being mounted "piggy back" on the other component. <S> A simple example would be a case where it was found that some IC was missing its decoupling capacitor and the fix is to piggyback the capacitor on top of the IC chip and solder the leads to the IC chip leads. <A> Not all designers are deadly serious; some are quite whimsical when they name signals. <S> In a big commercial project, it's one of the jobs of the project leader, once he's finished smiling at the joke, to make sure names like these don't get out in the public literature. <S> No one wants to keep explaining, or apologising for, the joke. <S> As the block is labelled 'Serial Int', which is probably Serial Interface, I suspect the allusion is either to 'Squeak Piggy, Squeak', or to the general observation that pigs squeal when poked. <S> I think it's a signal that makes the downstream controller do something.
Piggy or piggy board usually refers to devices (or small boards) which are plugged / soldered onto larger boards.
How to drop 170V DC to 50V? I have an old flash that generates about 170V on its trigger when it's charged and ready to fire. When the flash fires, the voltage drops to zero, then rises back to 170V as the flash charges... I need that flash connected to a 4N35 optocoupler. As far as I can tell from the datasheet, it can withstand only 70V, so I need a way to drop the voltage. So, what's the best way to do it? Use resistors? A power adapter? Or simply use another optocoupler? The schematic should look similar to this one: I am interested in what's going on with the 4N35, which has an F (flash) marking on pin 5. Except, this one is obviously made for newer flashes, and they do not generate high voltages on their terminals. <Q> Use the 70 volt optocoupler to control a high voltage transistor which in turn will switch the 170V. <S> In this way your optocoupler will see only about 0.8V. <S> You may need to modify the code so that the microcontroller will provide an active-low output. <A> I see 4 options for accomplishing what (I think) you're going for here: <S> Replace one of your optoisolators with a 5V drive, 170V+ load relay, such as this one . <S> The major disadvantages here being the need for a freewheel diode to protect your 'duino from inductive voltage spikes from the coil, and contact wear/welding on the relay itself (and relay noise, if that's a significant concern). <S> The down-side to this option is that you lose your isolation (arduino gnd must be tied to signal gnd & failure of the MOSFET has a small chance of exposing your arduino/circuit to 170V). <S> Add a 2nd power supply, such as an A23 battery, to drive a "standard" 170V+ power MOSFET (like FDD7N20TM ), using the optoisolator to drive this "intermediate voltage." <S> This method maintains your full isolation for the arduino & its power supply without the inductive problems & contact wear of an electromechanical relay, at the cost of needing to add a small battery. <S> Just replace the 4N35 with a higher voltage optoisolator & call it a day. 
<S> (All digi-key links are provided for convenience only and are for example parts only. <S> Feel free to source any similar part anywhere you like; my only affiliation with digikey is that I tend to buy more of my own components there out of habit.) <A> Background Most of the older film SLR (single-lens reflex) cameras had a mechanical contact for the flash trigger. <S> These generally gave a brief contact closure to trigger the flash (and the more advanced cameras had the option of triggering the flash when the shutter first curtain opened or just prior to closing the second curtain). <S> This contact was suitable for any kind of flash including units with a high-voltage trigger such as yours. <S> Modern DSLRs (digital SLRs) tend to have low-voltage switches suitable for modern electronic flash units only and users need to be careful not to connect HV flash units despite the standard flash shoe mount. <S> Opto-triac simulate this circuit – Schematic created using CircuitLab <S> One potential solution to your problem is to use a triac opto-isolator. <S> Triacs have the characteristic that, once triggered, they remain on until the current through them falls below the holding threshold. <S> They seem a good fit for your application as they have a high-voltage rating. <S> Everything else in your schematic can remain the same. <S> A brief web search turned up an Instructables article with the same idea.
Replace one of your optoisolators directly with a 170V+ logic-level drive power MOSFET, like IRL640STRLPBF . A possible solution for the problem is to use a cascade transistor connection.
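For comparison, here is a quick calculation (with illustrative resistor values, not from the original schematic) of a plain resistor divider versus the transistor approach from the answers:

```python
# A plain divider could bring 170 V below the 4N35's 70 V rating, but the
# transistor approach keeps the optocoupler at well under a volt.
def divider_out(v_in, r_top, r_bottom):
    """Unloaded resistor-divider output voltage."""
    return v_in * r_bottom / (r_top + r_bottom)

v_trigger = 170.0                                  # flash trigger voltage
v_div = divider_out(v_trigger, r_top=270e3, r_bottom=100e3)
print(round(v_div, 1))     # ~45.9 V -- under the 70 V limit, but marginal

# With a high-voltage transistor switching the 170 V rail, the optocoupler
# only sees roughly the transistor's base-emitter drop:
v_opto = 0.8               # figure from the accepted answer
print(v_opto < 70.0)       # True, with enormous margin
```

Note the divider values are assumptions chosen only to land below 70 V; the divider also loads the trigger circuit, which is why the transistor route is preferable.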
Why does such circuit not work? Assume I have a lamp and I connect one side to the + pole of one battery, and then the other side to the - pole of another battery (say two batteries of 1.5V). The remaining - and + poles are not connected. So there is a potential difference across the lamp, but it does not turn on. Why? It seems to contradict the basic laws of electricity. Note: I am a newbie in electronics/electricity. <Q> Why does such a circuit not work? <S> It's not a circuit to begin with, since there is no closed path around which charge can flow through the lamp. <S> Let's try a different route to see this result. <S> To make your 'circuit' an actual circuit, place a resistor between the "remaining - and + poles" as so simulate this circuit – Schematic created using CircuitLab <S> Using elementary circuit laws, we can find the potential difference across the lamp to be $$V_{\mathrm{lamp1}} = (6\:\mathrm{V} + 6\:\mathrm{V})\cdot \frac{100}{R_1 + 100}$$ So, for example, if \$R_1 = 0\:\Omega\$ then $$V_{\mathrm{lamp1}} = 12\:\mathrm{V} \cdot \frac{100}{0 + 100} = 12\:\mathrm{V}$$ <S> But, as the resistance of \$R_1\$ is increased, the potential difference across the lamp must decrease. <S> For example, if \$R_1 = 1\:\mathrm{M\Omega}\$, then $$V_{\mathrm{lamp1}} = 12\:\mathrm{V} \cdot \frac{100}{1{,}000{,}000 + 100} \approx 0.0012\:\mathrm{V}$$ <S> Setting \$R_1 = \infty\$ is equivalent to specifying that the "- and + poles are not connected" . <S> In that case the potential difference across the lamp is $$V_{\mathrm{lamp1}} = 12\:\mathrm{V} \cdot \frac{100}{\infty + 100} = 0\:\mathrm{V}$$ in contradiction to the claim "So there is a potential difference across the lamp" . <A> The potential difference is only between the terminals of each battery. <S> There is no "global" potential difference. <S> This is why a circuit must be complete in some manner for current to flow. 
<A> Since the resistance of the lamp is vanishingly small compared to the resistance between the unconnected battery terminals, the potential difference across the lamp will be vanishingly small as well, with largely all of the drop occurring across the huge resistance of the medium separating the "unconnected" terminals. <S> That resistance will be so large that it'll limit the flow of charge through the circuit to a value so low that it couldn't possibly light the lamp.
In fact, there is no potential difference across the lamp and, as stated earlier, it is for the simple reason that there is no closed path around which charge can flow through the lamp.
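The divider arithmetic above can be sanity-checked in a few lines (the lamp is modelled as the 100 Ω resistance from the answer's schematic):

```python
# V_lamp = 12 V * 100 / (R1 + 100), with the lamp modelled as 100 ohms,
# exactly as in the answer's formula.
def v_lamp(r1, v_total=12.0, r_lamp=100.0):
    return v_total * r_lamp / (r1 + r_lamp)

print(v_lamp(0))                  # 12.0   -- poles shorted together
print(round(v_lamp(1e6), 4))      # 0.0012 -- 1 Mohm between the poles
print(v_lamp(float("inf")))       # 0.0    -- poles not connected at all
```

As R1 grows toward the open-circuit case, the reading collapses to zero, which is exactly why the disconnected lamp shows no potential difference.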
PIC16F1503 resets during relay switching of 1HP motor I am totally new to this stuff. I have connected a 1 HP induction motor through a relay and it is switched by a PIC16F1503. I have connected a 10 uF 25V electrolytic capacitor between the VCC and GND pins; is it OK to not connect any ceramic capacitor across because it's already connected to the 10uF capacitor? <Q> Relays sometimes cannot be driven directly by the micro, as the power they require is too big compared to what the micro can output. <S> If this is the case, you will need a dedicated circuit or IC to drive it. <S> It would be helpful if you could give more information about the relay you're using and the way you're driving it. <A> ...is it OK to not connect any ceramic capacitor across <S> because it's already connected to the 10uF capacitor? <S> No, you need some smaller ceramic capacitors. <S> A couple of 100nF ceramic caps would be a good place to start. <S> Electrolytic capacitors have a high ESR and are near-useless for decoupling high frequency transients (which the motor generates a lot of). <A> To prevent the voltage from dropping during the short period when the starting current is high, I think it needs more capacitance or a more powerful PSU. <S> Otherwise the MCU may end up resetting itself.
Sprinkle your circuit with low-ESR ceramic decoupling caps; place them physically close to active components, your MCU especially.
How quickly would a pocket blowtorch ruin PCB traces? I need to remove an obtrusive Mini USB port from a small PCB, and I initially wanted to use my Soto Pocket Torch to melt all of the leads at once and remove it. I've tried desoldering braids and solder suckers, but the component is positioned in such a way that neither works. I don't care how hot the port gets, but I know that torching the component off might ruin the traces on the board. Does anyone know if a torch like this would desolder the part before the board suffers any damage, or does it not take long to cook the PCB? The torch I am using produces 1300 ˚C of heat. EDIT: I just want to clarify the part I'm trying to remove. Here are some images of what the component looks like: <Q> Most likely, the heat of the torch flame will cause carbonization of the plastics in the motherboard & nearby components before the needed solder pools are liquefied. <S> Once you carbonize (or carburize, if you prefer) an insulating layer, it becomes a lossy conducting layer (i.e. it's ruined), so I would highly recommend against ever intentionally exposing a PCB to direct flame. <S> An alternative that will likely work for you: <S> Most of those are meant for soldering large chunks of NOT thermally sensitive stuff, so they produce enough heat fast enough to melt all the solder pools attached to your USB port's housing...and do it without exposing your PCB & other components to fire. <A> I've tried it and it doesn't work too well. <S> Solder melts at roughly 183–230 ˚C (dependent on the alloy). <S> A butane flame is +2000 ˚C. <S> You can back off the flame and produce a large spot that is in the range you want, but unless you have a thermal imager, this would prove difficult. <S> Get a cheap hot air gun on ebay for $40 <A> If you heat the metal frame of the connector with the flame in such a way that the hot gases blow away from the PCB, the temperature of the connector should eventually be sufficient for desoldering. 
<S> A better alternative would be to use a hot air gun for desoldering. <S> You can cover the area surrounding the connector with polyimide (kapton) tape in order to minimize the heat affected area and to prevent everything from flying off in the airflow. <S> This task would be best suited for a hot air rework station.
If the flame touches the PCB or components, they will be rapidly damaged due to the much lower thermal conductivity of plastics, so avoid letting that happen at all costs. Get one of the "pistol grip" type "soldering guns" (like this one or this one ) that have the >- shaped wire tip/heating element. In a pinch I have also tried other flame sources like matches, with not much luck (this was a long time ago, quit judging me).
Why can't we read the voltage between just one pin of a charged capacitor and any ground? I know that we can only read the voltage across a capacitor if its two pins are connected to the voltmeter, and that we can't talk about any potential difference between totally different systems if they do not have a common ground. But as I think about it more, I started getting confused and cannot understand the physical reason why this is the case. Let me introduce a case; simulate this circuit – Schematic created using CircuitLab First I charge my capacitor via battery. Numbers are trivial. Then; simulate this circuit Then I measured the voltage to be sure that it's charged. After that; simulate this circuit Then I connected the negative probe to a universal ground (earth, a big copper plate, etc.) and from common knowledge I know that I could not measure any voltage, but I could not understand the physical reason why. If the voltmeter is measuring the current across its probes, then why don't the charge carriers flow from the capacitor plates to ground? If the voltmeter measures the electric field like an electroscope, then why don't a plate with charge carriers and a ground create an electric field? Thanks for reading, and sorry for the trivial question. <Q> Then I connected the negative probe to a universal ground (earth, a big copper plate, etc.) <S> and from common knowledge I know that I could not measure any voltage <S> but I could not understand the physical reason why. <S> The voltmeter can only measure the voltage (potential difference) between its two terminals. <S> It doesn't know anything about any other nodes in the circuit. <S> The capacitor only controls the voltage between its two terminals. <S> It doesn't influence anything about any other nodes in the circuit. <S> Say you charge the capacitor to 9 V. <S> Then you disconnect the capacitor from ground. <S> The two terminals of the capacitor are still 9 V apart, but there's nothing keeping either one at the ground potential. 
<S> Due to a few stray electrons blowing on or off the isolated capacitor due to wind, etc., the potential relative to ground could drift by 10's or 100's of volts. <S> Then, when you connect the voltmeter like in your 3rd diagram, you provide a path for a small leakage from the top terminal of the capacitor to ground. <S> Now that terminal will end up very close to the ground potential, and the other terminal will end up at -9 V, because the capacitor is still (assuming no leakage through the dielectric) maintaining a 9 V difference between its two terminals. <A> Actually, I think the worst part of your question is your apology for asking a "trivial question. <S> " Your question is very valid, and quite non-trivial. <S> While, according to its naming convention, a "voltmeter" should show a static charge of carriers when connected to either terminal of a charged capacitor (and the charge between them, by subtracting one from the other), your multimeter is actually not a voltmeter at all. <S> When production multimeters are set to measure "voltage," they are internally configured to present a high (but nowhere near infinite) impedance to the circuit under test. <S> Then, the meter measures the current across its internal load in order to estimate the open-circuit voltage of the circuit (the argument can be made that it measures voltage across the load, but the result is the same, since current/voltage across any real, non-infinite impedance are inherently linked). <S> Because there's no complete path for current in your circuit, the meter only "sees" a transient passage of electrons across the load, then the static potentials are balanced across the meter & no more current flows, thus no voltage is registered. 
<S> If a "voltmeter" were to actually measure "true" open-circuit voltage (like an electroscope), then your circuit would work just fine & the grounded electrode of your meter would be functioning as "reference ground," rather than its current function as "broken circuit." <A> Voltage is the potential difference between two points in circuit. <S> If you only connect it to one side of the capacitor and the other side is disconnected then there is no circuit. <S> It doesn't matter what kind of 'ground' the negative side of voltmeter is connected to (could be the Earth, a copper plate, a short wire, nothing). <S> This can be proved by looking at a real world setup in which there is a path to the voltmeter's negative terminal. <S> Between any adjacent components there will always be some capacitance (unless they are separated by an infinite distance), so a practical circuit actually looks like this:- simulate this circuit – <S> Schematic created using CircuitLab <S> What happens in this circuit? <S> Since we now have a complete circuit, current can flow from C1 down through the voltmeter across the ground and back up into C2, charging it up. <S> C2 is a million times smaller than C1 so only a tiny amount of charge needs to move for the voltages on C1 and C2 to equalize. <S> Therefore C1 will lose a tiny bit of voltage, while C2 will charge up to the same (slightly reduced) voltage. <S> Current will flow through the voltmeter as C2 charges up, so it will show a momentary deflection. <S> However once charged the voltage on C2 is positive at the ground end, so from the voltmeter's point of view the voltages on C1 and C2 cancel out and it reads zero volts. <S> Now imagine that C1 is moved further from the ground plane so that C2 is smaller and takes less charge. <S> The initial voltmeter deflection also reduces. <S> In a theoretical circuit with no capacitance to ground, the voltmeter will always read zero.
The charge on the capacitor cannot deflect the voltmeter because there is no way for it to reach the voltmeter's negative terminal.
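The stray-capacitance explanation can be made concrete with a charge-sharing calculation (capacitor values are illustrative, keeping the answer's million-to-one ratio):

```python
# C1 holds 9 V; C2 is the stray capacitance to ground, a million times
# smaller. Charge is conserved when the voltmeter completes the path.
c1, c2 = 100e-6, 100e-12       # 100 uF vs 100 pF stray
v1 = 9.0
q = c1 * v1                    # total charge before the meter is connected
v_final = q / (c1 + c2)        # both caps settle at this voltage
print(v1 - v_final)            # voltage lost by C1: ~9 microvolts
```

Only a vanishing fraction of C1's charge has to move to equalize the voltages, which is why the meter sees just a momentary blip and then reads zero.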
How to protect the Reset pin of an MCU from ESD strikes? We have been designing ESD protection circuits for the MCU's exposed pins. Here comes the reset pin protection: The pin is connected to an ESD suppressor at the connector. The pin is in series with a 600 Ω-impedance ferrite bead and in parallel with a 0.1uF capacitor. The pin is pulled up with a 100kOhm resistor. Here is what happened when the reset pin was struck by ESD: A. The system resets at 3.5kV contact. B. The MCU is killed at 7.5kV contact. <Q> What I think you need to do is make sure that the charge from the ESD pulse does not reach the MCU. <S> You have done all the right things to try and absorb this charge. <S> However, the charge comes from a capacitor with a very low series resistance, so you would need devices with a very low series resistance as well to be able to absorb the transient of the ESD charge. <S> Also these devices need to be fast. <S> What I would try to do is to prevent the charge from flowing into the MCU. <S> So I would add a resistor in series with the MCU's reset pin . <S> I would start with a 1 kohm resistor but increase its value if needed. <S> As the reset input of your MCU will be high-input impedance, the extra resistor will not influence normal performance. <S> But it will increase its ESD handling capability! <S> simulate this circuit – <S> Schematic created using CircuitLab <A> You need some series impedance to limit the current that gets into the chip. <S> ESD immunity is a matter of shunting away energy and limiting the amount that gets into the sensitive bits. <S> For example, a TVS to ground followed by a series few-K resistor to limit the current. <S> The TVS might be able to clamp 100-150A to a voltage of around 10V, so your current might be limited to <10mA. <S> The uC would reset but would not come close to being damaged. <S> For higher potential currents (induced lightning or whatever) you can consider opto-isolation and spark gaps, followed by a TVS or similar. 
<S> Of course ESD-related resets are not necessarily related to the reset pin itself, which is rarely brought more than a few mm from the uC in good designs. <A> Why would you connect a capacitor on parallel with the ferrite bead? <S> You are inviting the ESD jolt to come right into the MCU. <S> Put the capacitor from the inside end of the ferrite bead to GND.
The MCU will also have ESD protection inside, this combined with this external series resistor and the components you already have should prevent the ESD pulse from resetting the chip.
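The current-limiting arithmetic behind the series-resistor advice is simple Ohm's law; the clamp voltage comes from the answer's TVS example and the resistor value is an illustrative "few-k" choice:

```python
# After the TVS clamps the strike to ~10 V, a series resistor limits
# what reaches the reset pin.
def limited_current(v_clamp, r_series):
    return v_clamp / r_series

i = limited_current(v_clamp=10.0, r_series=2.2e3)   # a "few-k" resistor
print(round(i * 1000, 2))   # ~4.55 mA -- consistent with "<10 mA"
```

At single-digit milliamps the internal clamp diodes of most MCUs survive easily, which is why the chip resets but is not damaged.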
Where can I find out what language an MCU would use? I have looked through the datasheet for the ATSAM4LC2BA-AU but found nothing regarding what language it uses. I was hoping to code C onto it; also, anything on the necessity of a boot loader? <Q> Microcontrollers don't "use" a language. <S> They execute machine instructions that are stored in binary in their program memory. <S> The tool that translates a high-level language such as C into those instructions is called a compiler . <S> You probably also need a linker to go with that, although compilers and linkers usually come bundled together. <S> However, you need to learn how computing machines really work first. <S> On a big machine with an operating system, it's useful to understand what a compiler, librarian, and linker do, and what the machine itself ultimately does. <S> On a small machine like a microcontroller, it's essential. <S> Without this understanding you're not going to accomplish anything with a micro. <A> Instead, it will have a particular instruction set which is most simply modelled in assembly language. <S> Higher level languages, such as C, require a compiler which is tailored to the specific target chip. <S> And generally MCU programs are stored in ROM rather than externally on tape or something else, so no boot loader is required. <A> The chip you mention uses an ARM Cortex M4 processor, so any compiler that targets that core can be used. <S> For example, GCC has front ends for C and many other languages, and a back end that supports the M4. <S> The compiler 'front end' eats high level language code, and the 'back end' squirts out object code which is linked to create a program in binary that can be programmed into the microcontroller's flash memory. <S> That particular one has up to 512k bytes of flash. <S> There are other compilers such as Keil that come with lots of support (and a commensurate price tag if you need more than some limited amount of capability). <S> A bootloader is a program used to load other programs. 
<S> MCUs typically have at least one way of loading object code that does not use a bootloader, since the bootloader itself has to be loaded somehow to begin with. <S> A few may have a bootloader in ROM. <S> This particular series, like many higher end micros, has a JTAG port . <S> You would typically use a JTAG interface adapter connected between your computer and the system for both writing to the flash memory and for debugging. <S> It's usually a good idea to dedicate those particular pins to the JTAG port and not try to share them with any other functions. <S> Evaluation boards may have another MCU on the board that performs a similar function. <A> To add to Olin's answer. <S> You don't normally use, or at least need, a bootloader for an MCU like this. <S> The MCU logic usually provides ways to reprogram the flash, so you don't need a bootloader to allow the developer to interrupt the normal boot and reprogram the flash. <S> Some MCUs, perhaps this one, in addition to logic-based solutions may also include a factory-installed bootloader that allows other interfaces (UART, USB, SPI, etc.) to be used to reprogram the flash. <S> If you have not mastered the toolchain (compiler, assembler, linker), or at least found a sandbox that has done this for you, then you are not ready to load binaries onto the flash anyway, whether via bootloader, JTAG, or anything else.
If you want to code in C, then you have to use a program that converts the C you write into the machine instructions the micro will execute. In general, a given processor does not "use" any particular high-level language.
Why do we use two parallel capacitors in a voltage regulator circuit? C1 and C2 are parallel capacitors and their total capacitance is 1000.1 uF. I think C1 is large enough and I can remove C2 from the circuit. The result will be an open circuit. Let's assume that I can buy one capacitor that has the value of 1000.1 uF. I know I can't do it in the real world because a 1000.1 uF capacitor does not exist in the shops, but the question will help me understand the circuit well. Can I replace C1 and C2 with the 1000.1 uF cap? Does this 0.1 uF really matter? Is it important? <Q> Take a look at this graph: - http://electronicdesign.com/files/29/1478/figure_01.gif <S> Capacitors have a resonant frequency due to the inherent small series inductance they have. <S> The "generalized" capacitor showing the various parasitic components is shown below: - It's the L\$_{ESL}\$ that causes this series resonance. <S> For a typical 10 uF tantalum capacitor this might occur at about 1 MHz. <S> A 1000 uF electrolytic will basically look like an inductor above several tens of kHz: - Notice that the 100 nF (MLCC) ceramic is good as a capacitor all the way to over 10 MHz; therefore, putting the two caps together gives you the best of both worlds. <S> For a 7805 this might not make much difference, but on different types of regulators not having the 100 nF could turn a power supply into a power oscillator. <A> If all capacitors were ideal, your 1000.1 uF capacitor idea would work. <S> They're not ideal though, and real capacitors have non-ideal behaviour. <S> C1 is there to hold the voltage up between pulses from the rectifier. <S> It needs to be large in value and electrolytic capacitors are the most practical solution to this. <S> Unfortunately they can have some internal resistance and, worse, some inductance which reduces their ability to react at high frequencies. <S> C2 is typically 0.1 uF and will usually be mylar type or similar. 
<S> These have very little inductance and work really well at high frequencies, shunting the noise to ground. <S> C3 and C4 have a similar relationship. <S> Many novices try to skimp on the capacitors only to find that the voltage regulators go unstable and the output voltage starts to oscillate. <S> Obey the datasheet recommendations! <A> So the two capacitors have two different "jobs" to do and cannot be replaced by one with the same capacitance. <A> The 0.1uF is low ESR, for stability, to ensure non-oscillation under certain 'bad' load conditions. <S> The higher-value cap is electrolytic, with an internal coiled structure and higher ESR.
In short: "high" capacitors (like the 1000 µF) are used to smooth the voltage into a steady DC level, while "low" capacitors (like the 0.1 µF) are used to suppress high-frequency interference.
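The "two jobs" follow directly from the self-resonance the first answer describes; plugging in assumed, typical-order ESL values (not datasheet figures) shows why the two parts cannot swap roles:

```python
import math

# Self-resonant frequency: f = 1 / (2*pi*sqrt(L_ESL * C)).
# ESL values below are assumed order-of-magnitude figures.
def srf(c, esl):
    return 1.0 / (2.0 * math.pi * math.sqrt(esl * c))

f_bulk = srf(1000e-6, 20e-9)    # 1000 uF electrolytic, ~20 nH ESL
f_mlcc = srf(100e-9, 1e-9)      # 100 nF ceramic, ~1 nH ESL
print(round(f_bulk / 1e3, 1))   # ~35.6 kHz -- inductive well below 1 MHz
print(round(f_mlcc / 1e6, 1))   # ~15.9 MHz -- still a capacitor at 10 MHz
```

Above its self-resonant frequency a capacitor's impedance rises like an inductor's, so the big electrolytic simply cannot shunt the MHz-range noise that the small ceramic handles.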
Understanding optocoupler performance with collector or emitter resistors When using an optocoupler, is there a difference performance-wise whether using a collector or an emitter resistor? When using a normal transistor the base to emitter voltage gain is approximately 1, whereas base to collector gain is much larger. With an optocoupler however, the base is not (necessarily) at a defined absolute voltage. For this question I am disregarding optocouplers that have the base available on one of the pins, or at least having an external bias. So apart from inverting properties, are these identical (R1, Q1 vs. R2, Q2) or is there a difference (e.g. in frequency response [parasitic capacitance], different rise/fall times, gain, ...). And when using both collector and emitter resistors (R3C, R3E, Q3), will the signal be symmetric or should I account for base current like in a regular BJT circuit (as in \$\frac{\beta}{\beta+1} = \frac{I_C}{I_E}\$). In other words, where does the base current go in this closed circuit? simulate this circuit – Schematic created using CircuitLab <Q> Excellent question - I remember struggling with this a while ago. <S> For a phototransistor, the key parameter is the current transfer ratio, which is analogous to hfe (and shares many key parameters, such as variation vs. If). <S> The CTR is, after all, I(out)/I(in). <S> In general, a phototransistor can be viewed as a photodiode where the output current is fed to the base of an ordinary transistor . <S> I most definitely agree that there is no current into or out of the base in the normal sense for the device as drawn. <S> In the above cases, add a current source on the bases of the transistors where the current is proportional to incident energy and you will have quite an accurate representation of what is going on. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Here is the representation I normally use. 
<S> The base-emitter biasing is generated by incident light generating electron-hole pairs which diffuse into the silicon; the light needs to be of sufficient energy, of course. <S> Where does that current go? <S> When the minority carriers reach the junction, they are swept across by the electric field set up from the collector-emitter bias. <S> The device acts as a (poor grade) current source, just as an ordinary bipolar device does, and there is an excellent description of just what is going on available. <S> For the above reasons, you can analyse a phototransistor just the same as a bipolar device provided you take into account the different parameters; a limitation is that you need to use currents (as you note, there is no specific voltage associated with the base). <S> Note: When looking at this sort of thing, I view it as an energy transfer function as that makes things a bit clearer - i.e. some energy went in and therefore that energy must appear somewhere else. <A> They are two-terminal devices, so the performance is mirrored for common emitter vs. common collector configuration. <S> Common emitter has slow rise time and fast fall time. <S> Common collector has fast rise time and slow fall time. <A> where does the base current go in this closed circuit? <S> Photoelectric current flows from the Collector to the Base. <S> The equivalent circuit looks like this:- simulate this circuit – <S> Schematic created using CircuitLab
If the Base is left open then it also flows into the Emitter, so Collector and Emitter currents are identical.
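As a rough sketch of treating the phototransistor like a BJT with CTR in place of hFE, as the first answer suggests (the CTR and resistor values below are assumed, illustrative figures, not from a datasheet):

```python
# Output current follows the LED current scaled by the current transfer ratio.
def output_current(i_led, ctr=1.0):
    """Collector current from LED forward current via CTR (I_out = CTR * I_in)."""
    return i_led * ctr

i_led = 10e-3                            # 10 mA through the LED side
i_c = output_current(i_led, ctr=0.5)     # assume CTR = 50%
v_r = i_c * 4.7e3                        # drop across a 4.7 k load resistor
# The load sees the same current whether it sits at the collector or the
# emitter, which is the "mirrored performance" point from the second answer.
print(round(i_c * 1000, 1), round(v_r, 1))
```

(In a real circuit the resistor drop is of course bounded by the supply rail; the point is only that the current, not a base voltage, sets the output.)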
Programming a JN5168-001-Mxx with a USBtinyISP? I'm researching different ZigBee implementations for a mesh network application. I like NXP's JN5168-001-Mxx because it is cheaper than XBee (pricing for 200 units) and has a powerful uC. Mesh Bee seems to be an open source alternative to the JN5168-001-Mxx, but isn't competitive on price. The biggest downside to the JN5168-001-Mxx is a lack of community support. I can't find a specific SPI programmer on NXP's website to use with the JN5168-001-Mxx. My USBtinyISP is (supposedly) a general-purpose SPI programmer. Would a USBtinyISP work with the Jennic JN51xx flash programmer ( JN-UG-3007 )? If not, what would? <Q> Maybe I am late to answer this, but thought to share the ideas for others who may come across this question. <S> You can use an FTDI FT232RL or any such USB to TTL converter: connect Rx/Tx of one with Tx/Rx of the other, pull down pin 3 (SPIMISO), then pull down Reset (Pin 22), then release reset and then release SPIMISO. <S> This sequence would put the JN516x in programming mode. <S> Starting the flash process then should transfer the code to your device. <S> You can write a program to control the serial port of your computer, using which you can control various additional pins of the FTDI to create the above mentioned sequence to put the JN5168 in programming mode. <S> For example, you can make use of CTS, RTS, DTR or other pins. <S> Hope it helps <A> If SPIMISO is low on device reset, the JN5168 enters programming mode. <S> [1] http://www.seeedstudio.com/wiki/Mesh_Bee <A> Here is JennicModuleProgrammer , source code that will assist in uploading firmware to the NXP JN516X using an FTDI USB-serial cable. <S> Also there is Contiki-OS source code and toolchain assistance for the NXP JN516X chipsets (JN5168/JN5169) <A> I am currently using an FTDI USB to TTL converter to program the Mesh Bee, works great. <S> Simply connect it to the UART0 port and pull down SPIMISO like Simon said. 
<S> Be careful with the voltage, the Mesh Bee only accepts 3.3v.
As I understood the JN5168 is programmable with any SPI flash programmer (I plan to use the UartsBee v5, as described for the MeshBee [1]).
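The entry sequence from the answers can be written down as data, e.g. for replaying over a USB-serial adapter's handshake lines (which physical pin maps to which handshake line is an assumption you'd make per board; only the ordering below comes from the answer):

```python
# Ordered (signal, level) steps to put a JN516x into programming mode,
# as described in the answer: SPIMISO low first, then reset pulsed,
# then SPIMISO released.
def programming_mode_sequence():
    return [
        ("SPIMISO", 0),   # pull SPIMISO (pin 3) low first
        ("RESET",   0),   # then pull RESET (pin 22) low
        ("RESET",   1),   # release reset while SPIMISO is still low
        ("SPIMISO", 1),   # finally release SPIMISO
    ]

for signal, level in programming_mode_sequence():
    print(signal, "low" if level == 0 else "high")
```

A small host-side script could walk this list and toggle, say, pyserial's `rts`/`dtr` attributes accordingly; that mapping is hypothetical and depends on how the adapter is wired to the module.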
Possible to drive a ULN2003 driver with an LM3914 bar graph display? I would like to drive up to 10 LEDs off each pin of an LM3914 bargraph IC. In my toolbox I have a ULN2003 Darlington driver which should fit the bill. The Darlington driver has seven open-collector Darlington pairs with common emitters. The LM3914 does this also. My premise is to connect the output pins of the LM3914 directly to the ULN2003 (see crude illustration). Is this going to be possible? Or am I going to have to invert the output of the LM3914? <Q> The LM3914 has open collector outputs on its pins... <S> When a particular output is active, the internal chip transistors will pull the output pin towards ground. <S> But when the output is not active, it remains open and floating. <S> Thus, when connecting the LM3914's outputs to the inputs of other ICs which expect logical high or low signals, pull-up resistors might be a good idea to avoid random states. <S> If active-high outputs are required from the ULN2003, inverters are necessary between the LM3914 and ULN2003. <S> Use either hex CMOS inverters or build a simple PNP inverter for every channel. <S> If PNPs are used, connect the output of the LM3914 to the transistor base, and link the emitter to Vcc through a 10K resistor. <S> Connect the collector of the PNP to the ULN2003. <S> In this case, separate pull-up resistors will not be required. <A> To do this without lots of inverters, use one opamp to invert your input voltage and add the full-scale voltage of the LM3914, and start your outputs from the other end. <S> Then "0V" input corresponds to "full scale, all on" on the LM3914, which holds all the ULN2003 outputs off. <S> As the input voltage increases, the 3914 outputs start turning off and the pull-up resistors (you DO need those) turn your ULN2003 outputs on... <S> There may be tricks you can play with the 3914 reference inputs to simplify the analog input "logic", I haven't followed your links and studied it. <A> An inverting NPN collector-follower. 
<S> Resistor values may vary. <S> When the LM3914 is on, Q1 is pulled low, which pulls Q2 low through R2. <S> ULN input is pulled high through R3, as Q2 is off. <S> When the LM3914 is off, Q1 is off, and Q2 is pulled on through R1, turning it on. <S> ULN is pulled low through Q2.
Use a cheap 10x resistor network of 10 kΩ to pull up all of the LM3914's outputs when connecting it to logic gates. The 3914 doesn't care if you play such tricks on it.
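The open-collector plus pull-up behaviour described above can be sketched as plain logic. This is only an illustrative model (the function names are mine, not from any library): an active LM3914 output sinks the pin LOW, the pull-up takes an inactive pin HIGH, and the ULN2003 wants a HIGH input to turn its channel on, hence the inverter.

```python
# Behavioural sketch of an LM3914 open-collector output with pull-up,
# and the extra inversion needed before an active-high ULN2003 input.
def lm3914_pin(active: bool) -> int:
    # an active output sinks current, so the pin reads LOW; otherwise
    # the pull-up resistor takes the floating pin HIGH
    return 0 if active else 1

def pnp_inverter(level: int) -> int:
    # one inverter stage per channel, as suggested in the answer
    return 1 - level

def uln2003_led_on(input_level: int) -> bool:
    # ULN2003 is active-high: a HIGH input turns the LED channel on
    return input_level == 1
```

Without the inverter the LED would light when the LM3914 channel is off; with it, the LED follows the channel as intended.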
In a brushed DC motor, why do the brushes have springs? In trying to understand brushed DC motors, this post has been very helpful but I still have some fundamental questions about the brush mechanism. For instance, what is the purpose of the spring? ( source ) And since a brush looks mostly like a spring, how did the name "brush" even come about? <Q> The purpose of the brushes is to make electrical contact with a rotating conductor (the commutator). <S> Originally, these were bundles of wire that would be dragged across the commutator. <S> At any time, at least a few strands of the wire would be making contact. <S> These bundles, of course, look like "brushes". <S> Things have improved though, and now we use solid, low-friction, conductive materials for the brushes. <S> It is common to use assorted types of graphite. <S> These brushes must be held against the rotating commutator, and the material eventually wears away and must be replaced. <S> The spring pushes the brush against the commutator, providing good electrical contact as the material slowly wears away. <S> Here's a representative picture dredged from Google. <S> The dark material is the actual "brush", made of conductive graphite. <S> Please note that, unlike the brushes in your link, these don't have a wire connecting to the graphite. <S> This is because the springs themselves are conductive! <S> An additional wire can be used in higher-current applications. <A> The brushes have springs to keep a consistent force between the brushes and the commutator conductors as the brushes wear down. <S> There has to be enough force to make good electrical contact, but not enough to cause excessive wear. <A> What is the purpose of the spring? <S> Brushed-DC motors & dynamos generate outward force against the brushes from the motion of the commutator segments. <S> Additionally, the carbon brush you linked is designed as a "sacrificial element" that wears down with use. <S> How did the name "brush" even come about? 
<S> As with many other seeming misnomers, it's a retained reference from a now-obsolete implementation. <S> The first sentence in this paragraph gives the historical context.
The spring functions to keep the brush in contact with the commutator segments in opposition to the outward forces exerted by the segments (both due to surface irregularities [gaps], and imperfect concentricity of the bearings/rings/segments), while also adjusting for the wearing away of the carbon.
Why is there a resistor and a capacitor in this AUX cable's diagram? I am looking at a BMW 3.5mm AUX cable diagram; the left and right wires are connected together through a resistor, there is a capacitor on each of the left and right audio lines, and the third wire is ground. So I have two questions. Why is there a capacitor on each of the left and right lines? Why do we need to store energy? Is it going to delay the audio? Why are the right and left audio lines connected together with a resistor? Wouldn't that make my audio mono as opposed to stereo? P.S.: more detail: this is used to add an AUX female socket to a BMW Model E39/53's audio system. The X13598 will be connected to the back of the audio system, and B1 (the 3.5mm female AUX socket) will take input from a phone (or iPod) through a male AUX cable. <Q> On the other hand, the amplifier in your car might expect a DC-free signal at its input. <S> The capacitors serve as protection against bad devices outputting DC. <S> Typically, the output impedance of an AUX connector is around 5 to 10 kilohms, and the output impedance of headphone jacks is between 20 and 200 ohms. <S> Both of these are way below the 300 kilohm resistor, so it appears electrically as if the resistor was not there if both the left and right channel are properly driven. <S> You likely never notice the small amount of stereo mixing that is caused by this resistor except with special test signals. <S> On the other hand, if a mono source is connected to only one of the input channels, and the other channel is left open, the resistor transfers the audio signal to the unconnected channel. <S> If the input impedance of the amplifier is high (several megohms are likely if they use a FET buffer amplifier in the input stage), the 300 kilohm resistor will not reduce the volume a lot. <S> The R-C combination will cause a slight frequency-dependent phase shift, which is not a completely bad idea, because it makes the sound less mono-like and more appealing to the casual listener. 
<S> So the capacitors are a fix for bad devices that output DC on their jack, and the resistor is a fix for (bad) devices that do not output a stereo signal. <S> I can't explain why they put these fixes into the cable and not the input circuit of the car amplifier. <A> The capacitors AC-couple B1 to X13598, effectively removing any DC bias on those audio lines. <S> The resistor is probably in place for impedance matching the source. <S> To further elaborate, capacitors have a lower impedance (or reactance) at higher frequencies. <S> To AC signals, they look almost like a short circuit. <S> To DC they look almost like an open circuit. <S> It is possible that the source is not using a bipolar output, and thus the capacitors could remove the DC offset, effectively making the signal bipolar. <A> The BMW radio uses the resistance between Left and Right to determine whether to make the aux input available. <S> To ensure all devices work correctly, you should have it. <S> The effect of no resistor might be desirable to you, or not: if you remove the cable from your device (e.g. iPhone) then the radio jumps out of Aux input back to Radio (e.g. FM). <S> Drives me nuts; am going to add the resistor.
Audio signals do not have a DC component, but some devices add DC to the output, either as an artifact, or because the circuit is simpler if you allow some residual DC.
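Since the coupling capacitor and the amplifier's input resistance form a first-order RC high-pass, the corner frequency follows from f = 1/(2πRC). A quick sketch; the part values are hypothetical, since the BMW diagram in the question doesn't label them:

```python
import math

def highpass_cutoff_hz(r_ohm: float, c_farad: float) -> float:
    # corner frequency of the RC high-pass formed by a series coupling
    # capacitor driving the amplifier's input resistance
    return 1.0 / (2.0 * math.pi * r_ohm * c_farad)

# hypothetical parts: a 10 uF coupling cap into a 10 kOhm input
fc = highpass_cutoff_hz(10e3, 10e-6)  # roughly 1.6 Hz, well below the audio band
```

With plausible values the cutoff lands far below 20 Hz, which is why the caps don't audibly affect the music while still blocking DC.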
Designing a factory reset switch I want to add a factory reset onto a PCB design I am working on. Basically, I want it to be a switch (or combination of switches) that will never be "accidentally" closed and therefore accidentally factory-reset the uC. My initial thought was to add a switch you can only press with a needle, as you often see in consumer products, except this PCB will not be inside an enclosure, and I believe the "needle" requirement comes from designing the enclosure to have a small opening that only a needle can penetrate. My next idea is to use an 8PDT switch (or similar) where only 1 combination of the 8 switches (1/2^8 = 1/256 ≈ 0.4%) will output a "1" to the factory reset. Anyone have any experience with this, or suggestions for alternate methods? Thanks. <Q> For example: 1 long press, followed by a long release, followed by a short press. <S> The long press and release could be 2 seconds each, the short press could be 0.5 s. <S> Obviously you have to allow for a tolerance on these timings. <S> Or, press the button 3 times within a certain time interval, etc. <A> Ideas: Use a standard PCB button and check for status immediately after power-up reset. <S> If the button is pressed then perform the factory reset. <S> Note that if the button is jammed for any reason a factory reset will occur. <S> Use a 20 s hold-in. <S> Netgear use this approach on many of their SOHO routers. <S> With this approach you can check that the button is open on power-up and subsequently closed for 20 s. <S> The long time delay is unlikely to be reached by someone just probing about. <A> I've seen people use a relatively inaccessible push-button (paper clip or must open case to access) that is held down during power-up. <S> That ensures enough intention that it won't get accidentally triggered, IMO.
Use a standard PCB button, but require the user to press it in a sequence that would not happen accidentally.
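The timing-pattern idea from the first answer can be sketched as a small, platform-independent check; the thresholds and the event representation here are illustrative choices, not from the original post:

```python
# Illustrative sketch of the "long press, long release, short press"
# unlock pattern. Threshold values are examples only.
LONG_S = 2.0       # minimum duration of the long press and the long release
SHORT_MAX_S = 0.5  # maximum duration of the final short press

def is_factory_reset(events) -> bool:
    """events: ordered list of (pressed, duration_s) tuples from a debouncer."""
    if len(events) != 3:
        return False
    (p1, t1), (p2, t2), (p3, t3) = events
    return (p1 and t1 >= LONG_S            # long press
            and not p2 and t2 >= LONG_S    # long release
            and p3 and t3 <= SHORT_MAX_S)  # short press
```

On a real microcontroller the same decision would be driven by a debounced button-state interrupt or polling loop feeding durations into this kind of check.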
Control a water pump with a relay board I have a water pump rated 650 W and the attached relay board. Is it OK to modify a power socket to use one relay to control the phase and one to control the neutral wire? Thank you! <Q> The relay appears to be unsuitable for controlling a 220 V motor. <S> The manufacturer's specifications list inductive load ratings only for 120 volts. <A> [Edit] <S> Another user points out this relay is unsuitable for switching 240VAC motors. <S> You only need one relay to cut the power to the pump. <S> In continental Europe, for instance, it's impossible to know which wire is neutral and which wire is phase. <S> In any case, be sure to connect the earth, always. <A> You just need to open the circuit in order for the motor to turn off. <S> It doesn't matter if you accidentally swap the phase and neutral as long as you open the circuit. <S> The relay rating seems fine, so just go ahead with it. <S> If you like, you can use two relays. <S> It will work just fine, both ways. <S> Note: motors will usually have a peak current during startup that can reach 10 times the running rating. <S> Make sure the peak lies within the specs of the relay. <S> Look up the specs of the motor.
But you can use two if you want to be sure there's no live voltage at the other side of the relay when you switch it off.
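The startup-surge note above can be turned into a quick sanity check. A sketch assuming 230 V mains and the rule-of-thumb 10x surge factor quoted in the answer (the real inrush depends on the actual motor):

```python
def nominal_current_a(power_w: float, voltage_v: float) -> float:
    # steady-state current of a simple P = V * I approximation
    return power_w / voltage_v

def inrush_current_a(power_w: float, voltage_v: float,
                     surge_factor: float = 10.0) -> float:
    # motors can draw up to ~10x nominal at startup (rule of thumb
    # from the answer, not a measurement of this particular pump)
    return surge_factor * nominal_current_a(power_w, voltage_v)

i_nom = nominal_current_a(650.0, 230.0)   # roughly 2.8 A
i_peak = inrush_current_a(650.0, 230.0)   # roughly 28 A
```

Comparing `i_peak` against the relay's rated inductive switching current is exactly the check the answer recommends.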
Is there an equation for the cutoff frequency of a Chebyshev filter? I have this circuit here which is a low-pass Chebyshev filter, and I have been asked to find its cutoff frequency. I have used Matlab to plot an output response against angular frequency graph here. But I'm unsure of how to find the cutoff frequency from this; is there an equation to do this? <Q> It doesn't matter how the filter was designed. <S> Matlab is very good at finding the roots of polynomial equations. <S> If you don't need a lot of precision, just read the value off the graph. <S> Since the peak voltage response is 0.5, look for the point at which the curve crosses the 0.35 level (0.5/√2), which is just shy of 5000 rad/s. <A> The usual definition of the cut-off frequency of a (type I) Chebyshev filter is shown in the figure below: <S> The common practice of defining the cutoff frequency at −3 dB is usually not applied to Chebyshev filters; instead the cutoff is taken as the point at which the gain falls to the value of the ripple for the final time. <S> Knowing the characteristics of a Chebyshev filter helps in computing the cut-off frequency (as defined above) without explicitly solving the equation \$|H(j\omega_0)|=c\$, where the constant \$c\$ is chosen according to the definition of the cut-off frequency. <S> The squared magnitude of the frequency response of an \$n^{th}\$ order type I Chebyshev filter is given by $$|H(j\omega)|^2=\frac{1}{1+\epsilon^2T^2_n(\frac{\omega}{\omega_0})}\tag{1}$$ where \$T_n(\omega)\$ is the \$n^{th}\$-order Chebyshev polynomial of the first kind, \$\omega_0\$ is the cut-off frequency as defined above, and the constant \$\epsilon\$ specifies the pass-band ripple, as shown in the above figure. 
<S> You should know the exact value of \$\epsilon\$ from your design specifications, but I can estimate it from your figure: \$\epsilon\approx 0.23403\$ (note that you need to take into account that the maximum of your transfer function is \$\frac12\$ instead of \$1\$, so the smallest (linear) passband value is given by \$0.5/\sqrt{1+\epsilon^2}\$). <S> In order to find \$\omega_0\$ we need to compare the expression for the actual transfer function to the one given by (1). <S> It's a basic exercise to show that the transfer function of your filter is $$H(s)=\frac{\frac12}{\frac{L^2C}{2R}s^3+LCs^2+(\frac{L}{R}+\frac{RC}{2})s+1}\tag{2}$$ <S> Knowing that \$T_3(x)\$ is given by $$T_3(x)=4x^3-3x\tag{3}$$ we can compare the factors of the highest power of \$\omega\$ (which is \$\omega^6\$) of the denominators of (1) and of the squared magnitude of (2) for \$s=j\omega\$: $$\frac{16\epsilon^2}{\omega_0^6}=\left(\frac{L^2C}{2R}\right)^2\tag{4}$$ <S> From (4) \$\omega_0\$ can be expressed as $$\omega_0=\sqrt[3]{\frac{8R\epsilon}{L^2C}}\approx 3829.7\text{ rad/s}\tag{5}$$ where I've used the approximate value of \$\epsilon\$ given above. <A> The cut-off frequency is slightly below 5 kHz: that's the point where the output signal falls to the half-power point, i.e. 3 dB below the input signal. <S> If you need a more accurate answer then do the math on the filter and find the half-power point that way.
Once you have a specific design, you write down the equation for its frequency response, and solve for the frequency at which that response drops to half power (-3 dB), which is the definition of cutoff frequency.
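The defining property used in the derivation above (at ω = ω0 we have T3(1) = 1, so the gain falls to 1/√(1+ε²) of its DC value) can be checked numerically. A sketch using the ε estimated in the answer:

```python
import math

def cheb_T3(x: float) -> float:
    # 3rd-order Chebyshev polynomial of the first kind, eq. (3)
    return 4.0 * x**3 - 3.0 * x

def cheb_mag(w_ratio: float, eps: float) -> float:
    # |H(jw)| of a 3rd-order type-I Chebyshev low-pass, eq. (1),
    # normalized here to unity DC gain
    return 1.0 / math.sqrt(1.0 + (eps * cheb_T3(w_ratio)) ** 2)

eps = 0.23403            # pass-band ripple estimated in the answer
g0 = cheb_mag(1.0, eps)  # gain at the cutoff w = w0
```

Inside the passband |T3| never exceeds 1, so the gain never drops below `g0`; that is exactly why the cutoff is defined as the final crossing of the ripple floor rather than the −3 dB point.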
Why is noise input referred? Many of the textbooks I have read analyze noise signals. We have a circuit and find the output noise RMS voltage. But there is always the "extra" step of finding the input-referred noise. Why do we need to find the input-referred noise? How does it help us any more than the output noise RMS voltage? <Q> It gives a useful frame of reference. <S> A circuit with a lower input referred noise will contribute less noise to overall system than one with a higher input referred noise. <A> Input-referred noise is the noise voltage or current that, when applied tothe input of the noiseless circuit, generates the same output noise as theactual circuit does. <S> http://www.seas.ucla.edu/brweb/teaching/215C_W2013/Noise.pdf <A> Input-referred noise is generally more useful because it relates directly to the unaltered signal. <S> You don't have to know anything about the gain and other characteristics of the amplifier <S> this signal is connected to in order to understand the noise level. <S> Put another way <S> input-referred noise is the same as having a perfect amplifier, with all the noise being originally on the signal.
Input referred noise is used to determine the noise contribution of the circuit when it is used in a system.
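Numerically, referring noise to the input is just a division by the gain. A minimal sketch with made-up example numbers:

```python
def input_referred_noise(output_noise_vrms: float, gain: float) -> float:
    # divide the measured output noise by the (noiseless) gain to get
    # the equivalent noise source placed at the input
    return output_noise_vrms / gain

# hypothetical numbers: 100 uV RMS at the output of a gain-of-100 stage
en_in = input_referred_noise(100e-6, 100.0)  # 1 uV RMS referred to input
```

That 1 µV figure can be compared directly against the expected signal amplitude at the input, without knowing anything else about the amplifier, which is the point made above.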
Meaning of zeros in transfer function Can someone please explain, provide a link, or cite a book where the properties of the zeros for continuous- and discrete-time systems are explained? I know that the zeros are the frequencies where the numerator of a transfer function becomes zero. $$H(s) = \frac{A(s)}{B(s)}$$ But I would like to know what role the location plays in the pole-zero plot. All I can find are pole-zero plots and that basically the poles define the system stability and time response. However, what are the zeros "doing"? What happens if the zeros are in the right or left half plane? Are the zeros describing the damping or also stability? Here is a link to a pdf of MIT explaining the pole zeros. However, I am missing details about zeros. <Q> 1) Zeros with positive real part give a negative phase contribution, reducing the phase margin (which is bad), and thus limit the performance of the system. <S> 2) Time delay in the system can also be approximated as a zero with positive real part (see first order Pade approximation 1 ), similar effect as the previous point. <S> 3) Blocking property of zeros: if you have a transfer function with a zero in the right-hand plane, and an input tuned to that zero, then the output is at 0 for any time t. Example: <S> Proof for blocking property of zeros: 3 <A> There are zeros that can be located in the same region as unstable poles (that is, in the right-half \$s\$ -plane or outside the unit circle in the \$z\$ -plane). <S> But when zeros are out there, it doesn't cause the system to be unstable. <S> It does cause it to be non-minimum-phase, though. <S> So both zeros and poles have to be in the left half \$s\$ -plane or inside the unit circle in the \$z\$ -plane for the system to be both stable and minimum phase. <S> And a minimum-phase system can be inverted (which causes swapping of poles and zeros) and will continue to be stable. <S> That is not the case with a non-minimum-phase system. 
<S> If one inverts a non-minimum-phase system, the result will have poles in the unstable region and will be unstable. <A> All of the answers are correct, but one subject is missing: <S> a zero in the right-hand side of the s-plane can cause undershoot in the time response of the system, and this can be very dangerous in some cases. <A> Zeros are very important for the system behavior. <S> They influence the stability and the transient behavior of the system. <S> The referenced document is a good start. <S> When dealing with transfer functions it is important to understand that we are usually interested in the stability of a closed-loop feedback system. <S> In order for the closed-loop system to be stable, the poles have to be located in the left half plane. <S> The zeros have no importance, since the stability of a linear system is solely determined by the position of the poles. <S> When designing a closed-loop system (i.e. a circuit), this is usually done by analyzing the open-loop system. <S> Because for the open-loop system it is easier to understand how the circuit parameters are going to influence the system behavior. <S> When closing the loop slowly by increasing the feedback while monitoring the poles, it can be seen that the poles are attracted by the zeros. <S> The poles move towards the zeros, and if there are zeros in the right half plane, the tendency for the system to become unstable is higher because finally the pole will assume the position of the zero. <S> Such a system would be called a non-minimum-phase system, and they are quite common.
It can be shown that the position of zeros of the open loop system are important for the stability of the closed loop system.
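The blocking property mentioned above is easy to see numerically: evaluating a transfer function at one of its zeros gives exactly zero gain. A sketch with an arbitrary stable system that has a right-half-plane zero at s = +1 (my own example, not from the thread):

```python
def H(s: complex, zeros, poles, k: float = 1.0) -> complex:
    # generic rational transfer function built from its zeros and poles
    num = complex(k)
    for z in zeros:
        num *= (s - z)
    den = complex(1.0)
    for p in poles:
        den *= (s - p)
    return num / den

# stable poles in the left half plane, one RHP zero at s = +1
zeros, poles = [1.0], [-2.0, -3.0]
g_at_zero = H(1.0, zeros, poles)  # gain for an input "tuned" to the zero
```

The system is perfectly stable, yet any input component at the zero location is completely blocked, while other frequencies (e.g. DC, s = 0) pass through with nonzero gain.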
Op-amp inverter followed by buffer. Why? In a schematic I've been trying to understand I came across this sub-circuit: It's an op-amp inverter directly followed by a buffer. VIN comes from a DAC in a microcontroller and this circuit produces a VOUT which is negative VIN. The op-amp is supplied by positive and negative rails (not shown here). So far so good. But I don't fully see the rationale of using OA2 in this circuit. The only reason I can see is this: Without the buffer (OA2) a sudden load at VOUT would draw a current from VIN until the op-amp OA1 feedback adjusts (about 1µs). With the buffer (OA2) this is not the case anymore. Am I getting this right? Or am I missing something? <Q> You are right. <S> In most cases this is silly, adds offset voltage, and uses another part. <S> Most likely this is just someone's knee jerk reaction, or blindly following a rule of "always buffer the signal" without thinking about it too hard. <S> Not all schematics out there are the result of good design. <S> There are some subtle advantages to the second buffer-only opamp: <S> The feedback current thru R2 eats into the total output current capability of OA1. <S> In this case with R2 being 10 kΩ, this is a weak argument since the feedback current is so small relative to the capability of most opamps. <S> Sometimes a circuit like this happens because R2 was much lower before, and the second opamp wasn't removed after a design change that raised R2. <S> OA2 protects the input signal from abuse of the output signal. <S> Vin sees the fixed impedance of R1 only as long as OA1 is acting in closed loop operation. <S> If something loads Vout so that OA1 can't drive it to the desired voltage, then the negative input of OA1 is no longer at 0 V, and the Thevenin equivalent that Vin is driving changes. <S> In this circuit, the output of OA2 can be abused without affecting the output of OA1, which in turn won't affect Vin, maybe . 
<S> The reason I say "maybe" is that some opamps have back-to-back diodes between their inputs. <S> I didn't look up your opamp, so I don't know whether that is the case here. <S> If so, then abuse of Vout will get back to the positive input of OA2, which will get back to Vin. <S> This is again a weak argument, since loading an opamp output to the point where it can't drive to the desired voltage is generally running the opamp out of spec. <A> It doesn't have much effect on the performance, except to make it somewhat slower because there are two poles in the transfer function. <S> Chances are the designer only needed the one op-amp in the dual and chose to do something benign with the remaining amplifier to keep it out of trouble. <S> This is a common situation with LM324 quad and LM358 dual amplifiers. <S> There is no common inexpensive equivalent of the LM358 that has a single amplifier; any other parts tend to be more expensive and/or may be limited in some way (such as having lower maximum supply voltage), so if an LM358 is good enough then you may as well use it and waste the 2nd amplifier. <A> Since OA1 is part of a feedback network, some of its output is used already (lost through R2 and R1). <S> Which means OA1 has less drive capability. <S> So if you were to connect OA1 to some other part of a circuit, unintended things could happen. <S> OA2 simply "follows" or "buffers" the output of OA1, and it has zero output loading, so has full drive capability. <S> Also, buffers matter in terms of delay. <S> In both digital and analog circuit design, high-speed signals can be significantly delayed by circuit elements. <S> Sometimes, multiple buffers are used - seemingly with no purpose - except to introduce a delay. <S> This is usually done so that two signals "meet up again" in the time domain. <A> When the power is on, there is supposed to be little difference as the other posters remarked. 
<S> When the power is switched off however, the second variant is less likely to have the output bleed back into the input and will probably make the input load independent from the output connections. <S> For some applications (audio?), that can be a desirable property. <S> Whether this is indeed the case here depends on the internal circuitry of the opamp in question. <S> Since a specific type is given, this may indeed have been part of the design. <A> In the schematic you have drawn, as others have answered, there isn't so much benefit from this layout. <S> If however there are two different model op-amps and the resistor values are different, then there can be good reasons for using such a layout. <S> I created a similar circuit, which needed to amplify a relatively high frequency signal, and then drive the output in to a 50 ohm load. <S> These two functions require op-amps with different characteristics. <S> For the first op-amp, it needs to have a higher bandwidth to allow it to amplify a high frequency without any loss of gain at high frequencies. <S> For the second op-amp, it had to have a higher rated output current to be able to drive a 50 ohm load at the maximum output voltage, but didn't need such a high bandwidth as it only had a gain of 1.
The "buffer" is just there to, as the name implies, "buffer" the output. OA2 has all of its current capability available to drive the output. This "buffering" is commonly seen and used, and makes the operation of the circuit more robust and reliable.
Resistor values to produce 8 voltage windows into analog pin I have 8 Arduino Nanos (at 5V) plugged into a PCB and I would like each to have an ID so that each Nano can choose an I2C address without any clashes, without having to program each differently first, and using only one analog pin. Then, if I swap any of the Nanos around, they will pick up an address based on the resistor value of their location. For this I think I only need each Nano to see the output of a voltage divider, but the threads I've read don't recommend resistor values to use, so I'm not sure what will give accurate readings while limiting current leakage. Can anyone recommend some values to use? The readings don't need to happen quickly if that allows higher resistor values to be used. Thanks, Danny <Q> You can use 5 volts as one voltage, then use GND (0 volts) for another (assuming the 5 volt power supply is regulated). <S> Then you only need 6 more voltages, equally spaced. <S> The output voltage is calculated by \$Vi*R2/(R2+R1)\$, where Vi is the regulated 5 V. <S> You can even use the circuit simulator to measure the output voltage. <S> Change R2 to obtain other values. <S> I selected R1 as 100K. <S> You could use an R1 of even 1 megohm to reduce current consumed. <S> I added capacitor C1 so that your analog-to-digital converter won't drag down the output voltage. <S> The capacitor holds the voltage constant while the ADC does its sample and hold. <S> (EDIT 2: C1 changed from 10 nF to 100 nF) <S> EDIT 1: added note on sample and hold, per comment by RobbhercKV5ROB. Taken from the data sheet for the processor: <S> The ADC contains a Sample and Hold circuit which ensures that the input voltage to the ADC is held at a constant level during conversion. 
<S> EDIT 2: <S> I have changed the capacitor C1 from 10 nF to 100 nF to account for the following from the ADC specifications: <S> The ADC is optimized for analog signals with an output impedance of approximately 10 kΩ or less. <S> If such a source is used, the sampling time will be negligible. <S> If a source with higher impedance is used, the sampling time will depend on how long the source needs to charge the S/H capacitor, which can vary widely. <S> The user is recommended to only use low-impedance sources with slowly varying signals, since this minimizes the required charge transfer to the S/H capacitor. <A> Here is an example of Marla's solution with a simple spreadsheet to show the input source impedance Rsrc as seen by the ADC using 4.7K series resistors. <S> I would suggest not doing that, though: using 1% resistors, the worst-case error at the input is only about +/-5 counts, so there is lots of margin. <S> But if each supply voltage (and therefore ADC reference) varies by a few percent, the margin will be reduced significantly. <S> It would be better to use a divider for each Arduino from each individual supply so that the ratiometric measurement would cancel out differences in the regulated supply voltage. <S> Suitable values might be as follows: <S> The resistors are standard E96 values except for 66.03, which is made from 95.3K || 215K, so a total of 7 different resistor values would be required (including 10.0K), and a total of 14 resistors. <S> If you can use the same reference on each Arduino and feed the divider from the reference, then a simple divider can be used without fear. <S> I'll leave that to those more familiar with the particular Arduino, but I think the underlying AVR chip supports using an external reference. <A> An example is: if you chose to use 1% resistors, then the worst-case low resistance would be 99% of the sum of the resistances, and the worst-case high would be 101% of the sum of the resistances. 
<S> As far as loading goes, I think the worst case is with the ADC (or its S&H) on tap V4 or V5, but don't take my word on that; work it out for yourself. <S> :) <S> Once you've got all that done, all you'll need to do to implement your addressing scheme is to determine the width of the detection window (centered on the tap voltages) you want, and have your software determine where the Nano is, based on its reading falling between those limits. <A> I envision a voltage-divider bus, something like this: If you set all of your Nanos' analog pins (the ones attached to this bus; the Nanos are shown highly simplified, for schematic clarity) to sample as infrequently as reasonable for your application (I'm guessing 10 Hz, if supported, should be adequate), then the capacitors in this bus should be able to sufficiently buffer your inputs' sample-and-hold demand spikes. <S> The lower the tolerance (+/-1% should work well) on your resistors, the more accurately your ADCs can define their bus position ('ideal' values in this circuit would be +5V, +4.375V, +3.75V, +3.125V, +2.5V, +1.875V, +1.25V & +0.625V at each of the 8 outputs). <S> Experimentation can tell you the highest resistance value you can use for your resistors, while allowing your ADCs to still accurately resolve their in-bus position with your specific components & software configuration(s).
If you use a multi-tapped voltage divider, then by choosing the tolerances of the supply and the resistors tightly enough, and making the resistances low enough that the ADC load becomes negligible, you can generate output voltages of arbitrary precision.
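The ideal tap voltages quoted in the answers (5 V, 4.375 V, ... 0.625 V) and the window decision each Nano would make can be sketched as follows; the nearest-tap classifier is my illustration, not code from the thread:

```python
# 8 equally spaced taps from a divider across a regulated 5 V supply,
# plus a nearest-tap classifier a Nano could run on its ADC reading.
VCC = 5.0
TAPS = [VCC * (8 - i) / 8.0 for i in range(8)]  # 5.0, 4.375, ..., 0.625

def node_id(v_measured: float) -> int:
    # each Nano takes the ID of the tap closest to its ADC reading,
    # which tolerates resistor and reference errors up to half a step
    return min(range(len(TAPS)), key=lambda i: abs(TAPS[i] - v_measured))
```

With 0.625 V between taps, a reading can be off by ±0.3 V before an ID is misassigned, which is why 1% resistors leave plenty of margin.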
Does Q-factor matter for low-pass and high-pass filters? For band-pass and band-stop filters, Q tells how sharp the curve is at the centre frequency. I guess in this way it relates to the roll-off. However, low-pass and high-pass filters do not have a centre frequency. So, what meaning does the Q factor have for them? Does it matter if it is less than 0.5 or more? Looking at a picture of the frequency response, it seems that the high-Q filter has a type of hump as it approaches the cutoff frequency. Isn't this a bad thing, since ripple in the passband is not desired? <Q> Here's a picture (I drag out now and then) that explains the effect of Q on a 2nd-order low-pass filter: <S> The top three pictures show you the effect of varying the Q-factor. <S> Q-factor can also be reduced to make a maximally flat pass-band (aka a Butterworth filter). <S> The picture goes on to explain where the pole-zero diagram comes from and how you can relate natural resonant frequency (\$\omega_n\$) with zeta (\$\zeta\$). <S> For your reference, zeta = 1/(2Q). <S> You will also find that the shape of the curve reverses (with a hump) for 2nd-order high-pass filters: <S> The high-pass filter picture came from here. <S> However, low-pass and high-pass filters do not have a centre frequency. <S> They have the equivalent of a centre frequency known as the natural resonant frequency, and if you think about a series L and C making a notch filter: <S> this becomes a 2nd-order high-pass filter if the output is taken from the junction of the capacitor and inductor. <S> Also, if L and C swap places, it's still a notch filter, but now if you take the output from across C it becomes a 2nd-order low-pass filter. <S> The same resonant frequency and Q formulas all apply. <A> Even with theoretically perfect components, so infinite Q, you can design a lowpass filter that has a flat passband, or a bumpy passband, or a round-shouldered passband, so high Q doesn't equate to ripples. 
<S> Once you have designed the filter shape, it can acquire or lose humps if the components you build it with don't have exactly the design values, or if the terminations it's working between don't have the design values. <S> Q matters. <S> The steeper the transition band, the higher the Q your components must have. <S> A common filter design technique is to ignore the fact that all the design tables and simple design programs assume perfect components, and then build it with components with a finite Q. <S> The result will be a filter that is more round-shouldered at the edge of the passband than you expected. <S> With a high enough Q, the effect will be small enough to be ignored. <S> If a filter has to work with such a low Q that the simple approach doesn't work, then there are tables and programs that take account of the finite Q, but this restricts the steepness of the filter response that can be designed. <S> Ripple in the passband isn't necessarily the worst problem that a filter can have. <S> There is a tradeoff between the number of components, the passband flatness, and the transition-band steepness. <S> By accepting a little passband ripple, one can get a lot more steepness, a trade that's usually (but not always, it depends on the application) worth making. <A> For second-order lowpass and highpass filters it is the Q-factor that determines the filter approximation (Butterworth, Chebyshev, Cauer, Bessel, ...). <S> Hence, it is a very important parameter (form of the transfer function in the region between passband and stopband). <S> For higher-order filters (series of second-order sections) it is very important to use the correct Q-factors, which are available as tabulated values. <S> Definition: <S> Q-factors are defined using the pole location in the complex s-plane; therefore, they are also called Qp ("pole Q"): <S> Qp = wp/(2|sigma|), with sigma = real part of the pole and wp = magnitude of the pointer from the origin to the pole. 
<S> The same definition applies to a second-order bandpass. <S> However, in this case we have the equality Qp = Q (center frequency/bandwidth). <S> Examples: 2nd-order Butterworth: Qp=0.7071; 2nd-order Chebyshev (ripple 1 dB): Qp=0.9565; 2nd-order Thomson-Bessel: Qp=0.5773; 4th-order Butterworth: stage 1: Qp=0.5412, stage 2: Qp=1.3065.
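The Qp values above are easy to sanity-check numerically. A minimal Python sketch (function names are my own) evaluating the standard 2nd-order low-pass magnitude \$|H(j\omega)| = \omega_n^2 / \sqrt{(\omega_n^2-\omega^2)^2 + (\omega_n\omega/Q)^2}\$ shows that the gain at \$\omega = \omega_n\$ equals Q, so Q above 0.7071 produces the peaking (hump) asked about in the question:

```python
import math

def lp2_mag(w, wn=1.0, q=0.7071):
    """|H(jw)| for H(s) = wn^2 / (s^2 + (wn/Q)s + wn^2)."""
    return wn**2 / math.sqrt((wn**2 - w**2)**2 + (wn * w / q)**2)

for q in (0.5, 0.7071, 2.0):
    gain_db = 20 * math.log10(lp2_mag(1.0, q=q))
    print(f"Q = {q}: gain at wn = {gain_db:+.2f} dB")
# Q = 0.5 gives -6.02 dB, Q = 0.7071 (Butterworth) gives -3.01 dB,
# and Q = 2.0 gives +6.02 dB -- the hump.
```

The same denominator with an \$s^2\$ numerator gives the mirrored behaviour for the 2nd-order high-pass case.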
If you want to design a filter with a steep transition band, there will be a minimum Q that you need to use.
Minimum energy required for LED blink to be visible I have a 5V schematic with a 15n capacitor and I am wondering if I will be able to blink an LED powered from this capacitor. My calculation is that the capacitor will store about 0.2 uJ of energy. This I could convert to approximately 100 us of 300 uA current. So the questions are: What is the minimum energy (energy, not power, as I need to have a visible blink, not constant light) needed to make an LED blink visible in ordinary office circumstances (OK, I will cover the LED with my palm to avoid direct light exposure :-D)? Is it important how I will distribute this energy in time? Is there any color difference (maybe there is a difference in the human eye's response)? <Q> So, friends, I did the experiment. <S> The setup was two 5mm LEDs (I'm not sure what type exactly, but most probably they have 60 degs of light distribution and 40 mcd of maximum light - still didn't get the way they measure this intensity): red with a 330 Ohm series resistor and green with 160. <S> Both with a 5V supply and an AVR microcontroller. <S> With this setup I was able to see a blink as short as 1 us for the green and 2 us for the red LED. <S> I should point out that I was in a well lit room <S> but I put my palms around the LEDs to make a 3 inches deep well around them. <S> I looked directly at the LEDs and I was expecting the blink. <S> So this light is definitely not enough to notice the blink if you are not expecting one. <S> The current can be estimated as 3.8 Volts / 330 Ohms = 11.5 mA for red and 23 mA for green. <S> So the electrical power is 11.5 mA * 1.2 Volts = 14 mW for red and 28 mW for green. <S> Consequently the blink electrical energy was as low as 28 nJ (nano Joules !!!) in both cases. <S> Which is about ten times more than I expect to spend on a blink! <S> I tested this on my wife and my 7-yo daughter. <S> Same thing.
<S> Regarding the energy distribution versus time: <S> Unfortunately I wasn't able to change resistors, <S> so I did just one thing: I put the LED into a constant light mode with 1% PWM. <S> And I did not notice any difference when I changed the frequency (a 1 us blink each 100 us is equally lit as a 100 us blink each 10 ms). <S> This is not exactly what I need, <S> but it looks like it's not a big deal how I distribute the power in time. <S> Regarding the sensitivity of the different areas of the eye: I was able to see the blink only if I looked exactly at the LEDs. <S> If I shifted the eye sight axis a little bit, I wasn't able to see anything. <S> I noticed the same thing with constant lighting. <A> The energy in a photon can be calculated by: $$ E = {hc \over \lambda }$$ <S> Where: <S> h is the Planck constant, approximately 6.6×10^−34 J⋅s, <S> c is the speed of light, about 3×10^8 m/s, and λ is the wavelength of the photon. <S> The human eye's rod cells are most sensitive at a wavelength of 510 nm . <S> So the energy of those photons is about 3.9×10^−19 joules per photon. <S> Multiply that by the roughly 100 photons required for detection by a human eye, and you get 3.9×10^−17 joules. <S> By the law of conservation of energy, you will need at a minimum this much electrical energy to make anything visible. <S> Of course LEDs aren't 100% efficient. <S> Not all colors have the same luminous efficacy , so it may be that the most efficient LED for making human-visible light isn't necessarily at the wavelength where the eye is most sensitive. <S> I'll leave that research as an exercise, and let's just say an LED has a luminous efficacy of 25%. <S> That increases the energy required by a factor of four, to: 1.6×10^−16 joules. <S> That is, by my rough estimation, the absolute minimum energy required to register a visual response with an LED in a human.
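The arithmetic in that answer can be reproduced in a few lines of Python (the 100-photon threshold and the 25% efficacy are the answer's own assumptions, not measured values):

```python
h = 6.626e-34    # Planck constant, J*s
c = 2.998e8      # speed of light, m/s
lam = 510e-9     # peak rod-cell sensitivity, m

e_photon = h * c / lam      # ~3.9e-19 J per photon
e_detect = 100 * e_photon   # ~100 photons for a detectable flash
e_led = e_detect / 0.25     # LED assumed 25% efficient -> ~1.6e-16 J

print(f"{e_photon:.2e} J/photon, {e_led:.2e} J of electrical energy")
```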
<S> You have orders of magnitude more energy stored in your capacitor, so under ideal conditions, it's likely you could register a visual response even after accounting for inefficiencies in getting the power from the capacitor into the LED. <S> Of course in practice the room won't be perfectly dark, the viewer won't be ideally acclimated, and the LED's light won't be focused to a tiny spot. <S> So you may require more energy. <S> Perhaps much more. <A> For some good hints about how to flash an LED on low power, look up the now-obsolete LM3909 LED flasher chip . <S> Note how it "stacks" the voltage from the cap with the voltage from a 1.5V cell to get enough forward voltage for the LED. <S> One normally used a capacitor in the range of tens of µF (not tens of nF) with this chip to produce a very visible flash on an LED of only moderate efficiency. <S> I would estimate that this supplied about 50 µJ per flash, so you're probably an order of magnitude or two short of where you need to be. <A> You might be able to get a visible (in office lighting, with some attention paid to optics to increase contrast) blink with 100ms of 30uA current. <S> There is no particular point in reducing the blink duration below about 100ms - usually LED efficiency won't be better at higher current and the eye will see the same energy as about the same brightness. <S> A 100us pulse will appear about as bright as a 100ms pulse with 1/1000 the current, so more like 300nA-equivalent. <S> That might be visible with a good LED, dark-adapted eyes and in a dark room.
Wikipedia suggests something on the order of 100 photons to achieve a visible response in the most ideal conditions.
STM32F407G-DISC1 not working if not connected to PC I have a Discovery board DISC1 version (February 2016 revision). I had programmed a Discovery board before with no issues. The ones that I more recently bought, the DISC1 version, have a strange issue: after I flash the program, if I connect them to the PC via USB (CN1), everything works fine, but if I power it up with external +5V or with the CN1 (but not from a USB port), the LD1 LED starts blinking, then the LD2 red LED turns on, but nothing else happens. The rest of the board is off. I stress that this never happened with the previous revision of the same board; I always powered it up via external +5V and it always worked correctly. Does anyone have a clue why this is happening? <Q> I experienced the same problem and there are two solutions to get the behavior of the old board. <S> Use the ST-Link Upgrade Utility to flash to an older version of the ST-Link v2/1. <S> With version V2.J23.M9 it's OK. <S> But you lose the compatibility to mbed. <S> Open the solder bridge SB19. <S> The new board has this bridge closed, resulting in a low BOOT1 pin. <S> When the bridge is opened, BOOT1 is pulled to Vdd. <S> If you do not want to open the solder bridge, you can use an external jumper wire from Vdd to PB2. <A> I had the same question with the V2J25 firmware. <S> And I found that if I upgrade to V2J27M15 (newest version 2016/09/16), it is solved. <A> I experienced the same problem. <S> Here is my solution, although not 100% satisfying: Set the jumper SB10 on the bottom of the board. <S> A solder blob will do it. <S> SB10 forces the ST-Link MCU into the reset state. <S> Warning: Since I only program the STM32F4 using the USB DFU programmer, I don't need the ST-Link/V2 part for programming. <S> Don't set the jumper if you need the ST-Link! <S> It will not work afterwards! <A> Just upgrade the firmware and it works. <S> V2-J28-M18 (February 2017) Jan F.'s Boot1-to-PB2 proposal does not work. <S> Neither does desoldering SB19.
<S> How to upgrade: If you use the ST-Link Utility, make sure you have the latest version; in the menu: <S> ST-LINK -> Firmware update -> connect. <S> YOU CANNOT CHOOSE the firmware version! <S> So, if the version is different, download the latest version (it comes as a little program): <S> http://www.st.com/en/embedded-software/stsw-link007.html I was going to comment on Jan F.'s answer <S> but I lack one reputation point to be allowed to do that!
For some reason the ST-Link/V2 part of the new STM32F407G-Disk1 board behaves differently than the old version STM32F4 Discovery and prevents the STM32F4 MCU from starting properly.
How does a magnetic ballast (large inductor) stabilize a negative resistive circuit (such as a fluorescent light)? When reading about fluorescent lights, I noted many sources stating that a large inductor (the magnetic ballast) in series with a negative resistor (the light, in this case) will stabilize it by blocking large AC currents. (Sources that say this include: [ Reactive_ballasts ] or [ Fluorescent lights ]) However, the admittance (current response) of a negative resistor in series with an inductor is $$\frac{1}{sL+R},$$ where R is negative, leading to an unstable pole at \$s=-R/L\$. What am I missing? Are the sources wrong, or am I misreading them? If so, how does a magnetic ballast actually stabilize the light? <Q> I don't think that the fluorescent light is actually a negative resistor. <S> A fluorescent light can't just do that by itself. <S> Negative resistors are powered circuits that, when taken as a whole, appear to have negative resistance. <S> A fluorescent light just has a resistance that drops as the amount of current through it increases. <S> Its resistance is always positive. <S> The purpose of the ballast is to resist the current briefly--until the next AC phase change--in order to limit the current flowing through the light. <S> This page has a schematic for a negative resistor . <S> The more voltage you apply to the input, the more current flows from the negative resistor to the input. <S> What's actually happening is that the op amp is producing a voltage at node B that exceeds the input voltage, causing current to flow to the input. <A> Maybe the sources didn't exactly mean "negative resistance" per se. <S> It is a fact that, in certain fluorescent lamps, lamp current decreases with an increase in applied voltage. <S> CFLs for example. <S> This way you may say dV/dI is negative. <S> Also, a lamp does not offer constant resistance. <S> Resistance is a function of the voltage applied. <A> It isn't really stabilizing in the sense of a linear feedback network.
<S> Rather, the inductor is simply limiting the current to some acceptable value. <S> The lamp is a non-linear hysteretic load. <S> When there is no arc, the impedance is very high. <S> Once the arc forms, the impedance drops to a very low value. <S> So yes, over a certain range of operation increasing the current decreases the voltage, but I don't think calling it a 'negative resistance' load is helpful here <S> -- that is linear response / small-signal analysis terminology, but we actually care about large-signal behavior where a linear model is not appropriate. <S> The ballast is simply a current limiter. <S> It limits the average current to approximately Vac/(ωL) when the arc is established. <S> Without it, the lamp would draw a destructively high current. <S> In a traditional fluorescent bulb it is also part of the starter circuit, generating a high voltage pulse to ignite the arc. <A> Actually the resistance of the tube drops as it warms up. <S> Kind of like a semiconductor in thermal runaway. <S> As the tube warms up, mercury vaporizes and lowers the resistance, and with any ionized plasma conductivity goes up with temperature. <S> In an AC circuit an inductor stores energy during a portion of the cycle and releases it as the input voltage drops and changes polarity. <S> This serves to limit current through the load, allowing temperatures to stabilize and preventing thermal runaway.
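The Vac/(ωL) current limit mentioned above is easy to put numbers on. A rough sketch, assuming 230 V 50 Hz mains and a 0.8 H ballast inductance (both illustrative values, not from the thread), with the arc voltage neglected:

```python
import math

v_mains = 230.0   # Vrms, assumed European mains
freq = 50.0       # Hz
L = 0.8           # H, an illustrative ballast inductance

# Arc voltage neglected: the inductor's reactance sets the current ceiling.
x_l = 2 * math.pi * freq * L
i_limit = v_mains / x_l
print(f"reactance {x_l:.0f} ohm -> current limited to ~{i_limit:.2f} A rms")
```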
If something actually has negative resistance, putting a positive voltage across it causes negative current flow, in other words, (conventional) current flows from the lower voltage to the higher voltage.
Is there a risk for the HW to debug an embedded software if the JTAG integrity is bad? I had to debug C embedded software in a noisy environment. As a result the integrity test of the JTAG connection had a failing rate between 30% and 60%. What are the risks of making JTAG accesses in such conditions? I mean: Could I burn the microcontroller? Is it possible to corrupt the non-volatile memory forever? Do HW protection mechanisms that protect the chip exist? (thus preventing any JTAG accesses [program update, debug session, etc.]) Is it possible that the data displayed by the debugger is wrong? The target is a TI C2000. But I would prefer a general answer if possible. <Q> It is unlikely that you will damage the part, but you may not be able to successfully or usefully use the connection to debug the device. <S> You will never be certain whether you are seeing a software error or a failure of your debugger, since some failure modes can result in failure of the debug interface in any case. <S> Signal integrity issues can often be resolved by reducing the JTAG clock rate. <S> This will affect the performance of software loading and large memory dumps or data watches, but simple breakpoint and stepping debug will remain usable even at relatively low frequencies. <S> If you are already seeing 60% error, you will be no worse off reducing from 10MHz to 4MHz for example, and it may drop to zero. <S> I suggest trying 1MHz as a starting point. <S> If the performance is unsatisfactory you could try increasing it incrementally to determine the maximum error-free rate; similarly if you still get errors reduce it further. <S> In general the JTAG cable should be as short as possible (<20cm as a guide) - preferably the probe manufacturer's original cable without adaptors or extensions. <S> When this question was posted on SO, you added a comment that the probe was a Blackhawk USB200 - the product has an optional isolation adapter for harsh environments to prevent ground-loop issues.
<S> That may solve your problem entirely. <S> Finally, are you certain that the error rate is due to noise? <S> A common enough mistake is to access the pins used for JTAG as GPIO in the software, for example. <A> Adding my 2 cents. <S> Unable to comment! <S> We faced a similar issue with the same target (C2000). <S> Very often the device would get disconnected and we had to restart the IDE and flash the code again to debug. <S> We contacted TI and we got the following suggestions. <S> 1) Use a digital isolator such as this between the debugger and the target. <S> 2) Reduce the JTAG cable length. <S> Typically ours is less than 5cm. <S> It worked just fine. <S> Edit: Clifford pointed out these things already. <S> Just adding my experience. <A> It can absolutely damage hardware. <S> Say your IR shift gets corrupted to an MCU or FPGA and you accidentally load EXTEST or enter an IEEE 1532 ISC instruction, and you have not set safe values in the BSR cells. <S> Every BSR pin on that device will immediately assert whatever state happens to be in those cells. <S> If you have some devices running power electronics, for example, and there's no external protection, you could fire both MOSFETs in a switching controller and short voltage right to GND. <S> I've seen this happen multiple times. <S> EXTEST is probably the riskiest instruction that is defined in the specification. <S> If you can't trust your JTAG setup, I'd stop and fix the problem before proceeding. <S> Even outside of hardware damage, think about all the engineering time you'll waste chasing Heisenbugs that turned out to be a bit flipping every now and then in a DR shift. <S> Basically, look at your design and consider what happens if every pin on a JTAG device jumps to an unknown state (or all 0s or all 1s) -- chances are you will not be happy with the result. <S> Other things I've done in the past due to bad JTAG integrity include accidental triggering of security fuses and similar.
<S> This would brick the part and require replacing it. <S> By design, JTAG has no integrity checking or error correction -- it's one of the simplest possible buses by design. <A> Further, many processors are used in ways which could cause circuit damage to the CPU or other hardware if the I/O pins were set to erroneous conditions. <S> Basically I would figure that a noisy JTAG connection could cause the system to erroneously believe that you are giving it whatever combination of commands and data would be the most dangerous. <S> If there's nothing you feed the device that would be particularly damaging, then nothing will be damaged. <S> But if it would be possible to deliberately damage the device using the JTAG, one should assume such damage could also occur accidentally unless one has taken systematic efforts to prevent it.
While direct physical damage to the hardware device would seem unlikely, if a device includes any sort of write-once configuration fuses there is a very real likelihood that a noisy JTAG connection might cause such a fuse to be erroneously set in a way that would render the chip permanently useless.
Are traditional vacuum tubes still used anywhere? Apart from very specialist audio amplifiers? <Q> Maybe still some EMP-resistant radio front ends for military purposes. <S> Magnetrons, TWTs and Klystrons for RF, including microwave ovens and industrial microwave sources. <S> Also ignitrons and hydrogen thyratrons, and, of course, photomultipliers are widely used. <S> As Dave Tweed says below, solar-blind flame detectors (such as UVtron) are a current application. <A> As well as, of course, vintage ham equipment, radios, and TVs. <A> They are also used in guitar amplifiers. <S> Most audio amplifiers work under the assumption that they do not color the original source audio in any way. <S> The amplification should be transparent. <S> Guitar amplifiers, however, are built specifically to color the sound and tone in their own unique ways. <S> Different types of tubes can achieve different sounds.
X-ray equipment and radar sites, as tubes can handle the high power demands.
How can I tune an antenna for receiving (VHF)? I have a number of books on antennas and without exception they all describe balancing antennas only for transmission. Since I do not even own any transmitters these guides are useless. I have only receivers. How do I tune an antenna for receiving? Mostly I am interested in the VHF frequencies, but also UHF sometimes. <Q> An antenna can be as simple as a piece of wire (a quarter wave monopole for instance, or a half wave dipole for another example): <S> - The monopole (for instance) will receive any frequency but is inherently better at producing a stronger signal at one particular frequency and, as the name suggests, this frequency is at a quarter wavelength of the RF wave to be received. <S> So if trying to receive 300 MHz (wavelength 1m) you need a quarter wave monopole that is approximately 25 cm long. <S> It will have a usable bandwidth (i.e. not requiring a retune) of probably 50 MHz (poor groundplane) to over 100 MHz (good groundplane). <S> At the extremes the received signal will be noticeably weaker but not necessarily unusable. <S> Think about commercial FM <S> - it uses a monopole and tunes easily from 88 to 108 MHz (20 MHz) with a centre frequency of 98 MHz. <S> This isn't true of a transmit antenna because the VSWR will be significantly worsened at extreme frequencies and the power delivered to the antenna <S> might be very small and may even damage the electronics driving the antenna. <A> Not sure if you are seeking information on balancing or tuning of an antenna. <S> Balancing an antenna (or rather the transmission line) <S> (taken from this post): Unbalanced Lines. <S> What happens when the currents on a transmission line are not equal? <S> In the case of a parallel transmission line, the electromagnetic fields around the conductors will not be the same and will not cancel, so radiation from the transmission line occurs. <S> Thus, balancing an antenna system is not as important on a receiver as on a transmitter.
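The quarter-wave arithmetic in that first answer is a one-liner; a small sketch (free-space wavelength, ignoring the few-percent velocity-factor shortening of a real element):

```python
C = 3e8  # speed of light, m/s

def quarter_wave_m(freq_hz):
    """Free-space quarter wavelength in metres."""
    return C / freq_hz / 4

print(quarter_wave_m(300e6))  # 0.25 m, the 25 cm example above
print(quarter_wave_m(98e6))   # ~0.77 m for the centre of the FM band
```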
<S> Tuning (impedance matching) of antenna to transmission line, and then to the receiver impedance (which I suspect you already know) is different from balancing. <S> I simply adjust the antenna tuner to obtain maximum signal strength. <S> Which of course doesn't guarantee an impedance match, but does the best I can with the limited resources. <S> Simple Tuner <S> : <S> In your EDIT, you have focused in on "tuning". <S> Given that you might know your antenna impedance and receiver input impedance, a Smith Chart is very helpful and avoids mathematical calculations. <S> Plot the antenna impedance and receiver impedance on the chart. <S> Then there are many many ways to use reactive components to transform impedance of one to the other. <S> Smith Chart Information describes Smith Chart better than I can. <A> Sounds like you might need an absorption wavemeter (or 'dip meter'). <S> This generates a low power RF signal that is loosely coupled to the antenna (or any other tuned circuit); you then tune the wavemeter's frequency and look for a dip on the meter which indicates resonance. <A> Generally, transmitters only transmit at one frequency (or a limited bandwidth in the case of FM) and in this case it makes sense to tune the antenna so as to maximize power output. <S> Receivers, on the other hand, usually operate over a fairly wide band, and their antenna cannot be tuned as precisely as transmitters. <S> In this case, the antenna is designed for good coupling in the receiver's band, but no more effort makes sense. <S> If a receiver is to be used at only one frequency, then by all means tune the antenna. <S> The procedure is exactly the same as for a transmitter.
I have an antenna tuner that I use.
Is it OK to PWM a current source? I want to control power RGB LEDs; for efficiency and safety they are better driven by a current source. I can use either linear or buck converters. I made a linear current source using an LM317. I PWMed the input (Vin) and it worked fine. Is it OK if I PWM the output (using a MOSFET right before the load)? As I learned in theory, if the output is left open the output voltage reaches infinity (here, at most the input voltage), so as I turn the MOSFET on a high voltage goes to the LED momentarily and might damage it after a while. I think a decent step-down LED driver is the PT4115E ( Datasheet ); it has a PWM pin. Are chips like this OK for driving LEDs whose duty cycle may change several times per second? <Q> Assuming you do not want to modify your current source, and only want to modify the load: Instead of inserting the switch in series, it is more important to provide an alternate path for the current to take. <S> If I understand you correctly, you are considering the schematic on the left. <S> Replacing it with the one on the right should work better. <S> Your control signal will be inverted; when the signal is high, the load will be shorted so not drawing current. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> With the output side series switch the OP may be justifiably nervous about the instantaneous higher-than-rated voltage at the time of switch-on. <S> These can cause a current spike outside the safe operating range every time the load is switched on; though not usually a problem for a filament bulb, it can be a problem for an LED and certainly for a laser diode. <S> If there is stray or unintended capacitance before the switch there may be more energy waiting to discharge repeatedly through the LED than is good for it.
<S> The shunt switch to turn off the laser diode is used in sensitive applications and can be designed to drop the voltage to just below the operating voltage of the load for fast response (though even a low voltage will cause a little conduction and light with an LED, almost no laser action is achieved before the threshold current is reached). <S> The shunt method obviously does consume the full load current at all times, but in some situations this can be a good thing <S> if you want your power supply to see a constant load; in a simple illumination scenario it is just wasted energy. <S> I would suggest that the sense pin of the current regulator be controlled. <S> A resistor (say 1 kOhm) to replace the sense terminal link and an open collector/drain output to pull it to ground. <S> EDIT: Here is a thread that has gone through all the same iterations as the answers to this SE question. <S> I just found it using Google image search with the following search phrase: lm317 current source switched . <S> They link to an image of a circuit that looks just like what I was suggesting. <S> It was about the third image in the thread if the direct link does not work. <S> Further EDIT: <S> There are a number of regulators with a shutdown pin that may be even better suited to the task at hand. <S> Parts such as the LM2941, LT3022 and UCC281 may be worth checking out. <A> Actual current sources are not even close to ideal. <S> In real life, current sources are usually current-limited voltage sources. <S> The only time you get a really high voltage from trying to cut off a current is when an inductor is involved. <S> Your circuit's output current will be constant unless the required voltage to maintain the current is higher than \$V_{IN}\$ minus a couple volts. <S> If the load resistance is too high, the LM317 output will saturate near \$V_{IN}\$. <S> So as long as your transistor switch can handle the full \$V_{IN}\$ voltage, you should be fine.
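For reference, the LM317 constant-current arithmetic these answers rely on is just I = Vref/Rs, with Vref the typical 1.25 V OUT-to-ADJ reference (the 62-ohm value below is my own example, not from the thread):

```python
V_REF = 1.25  # V, typical LM317 reference between OUT and ADJ

def lm317_current(r_sense_ohm):
    """Constant-current configuration: I = Vref / Rs."""
    return V_REF / r_sense_ohm

print(f"{lm317_current(62) * 1000:.1f} mA")  # ~20 mA, a common LED current
```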
If this pin is driven to ground the regulator output voltage will be limited to 1.2V and through the shunt resistor it is unlikely to be able to pass much current through the LED that is held way below the rated forward voltage.
Why was a capacitor called a condensor (condenser?) in the early days of electronics? I refurbish old tube type radios. I know when I was a child, my father referred to capacitors as condensors (condenser?). I see references to condenser in old manuals and parts lists. I know terminology does change, such as using Hertz rather than "cycles per second" (cps) as reference to frequency. Does the word condenser have a basis in understanding capacitance? What was condensed? There must have been reason to use the terminology. <Q> As the term has been traced (thanks to @helloworld922) back to 1782, it's worth noting <S> this is the year James Watt patented the compound steam engine, having conceived the separate condenser in 1765, and patented it and produced efficient condensing steam engines in the 1770s. <S> So the term was very much cutting edge at the time, and scientists tended to read much more widely across disciplines than we can possibly do today, so certainly Volta would have been aware of it. <S> In those days, electrical concepts were explained by analogy with fluid flow concepts, with pressure corresponding to voltage and current corresponding to ... current. <S> So, because a condenser absorbs large volumes of steam at very low pressure, it offers a good analogy for a device which can absorb a lot of charge at relatively low electrical pressure. <S> (However the analogy breaks down when you try to recover the steam : the condenser can only deliver water!) <S> Interesting, while books of a hundred years ago talk of electrical pressure (measured in volts) and electrical current (measured in amps) we have dropped the former term in favour of "voltage", it still looks odd to see "amperage" instead of the word "current", and I can't recall seeing "ohmage" in place of "resistance". <S> The "Admiralty Handbook of Wireless Telegraphy" (1925 edition) consistently uses the term "condenser" while calling its storage capacity "capacity". 
<S> The book introduces both the "practical unit" of the Farad (millifarad, microfarad, and micromicrofarad, so apparently "pico" wasn't in use yet) and the "service unit" of the Jar. <S> (by 1925, "electrical pressure" has given way to "Electro-Motive Force" or EMF, which is still occasionally seen in the wild today) <S> The original condensers were actually glass jars (Leyden jars), presumably of a standard size, because the book introduces the "service unit" which is the Jar, where 1 Jar = 1/900 uF. <S> (It then goes on to inconsistently use jars and farads throughout the remainder of the book!) <S> So we have consistently dropped some of the contemporary terms, kept some others, and inconsistently dropped others - "condenser" is still the term in the spare parts catalog for my outboard motor while "capacitor" is seen elsewhere. <A> It seems the word comes from the Latin condenseo , which means to condense or to compress . <S> This does make sense because, in contrast to a piece of wire, you can push charge into the cap without too much pressure (voltage). <S> It seems the charge condenses inside like propane gas does when it is pressed into a gas bottle. <S> By the way, the German word is Kondensator , and it has a Kapazität . <A> These used the condenser along with a 'coil' (as a step-up transformer) and points (mechanical switch) to generate the spark from the 6 or 12 VDC available in engine charging systems. <S> In modern cars the spark is generated electronically. <S> AFAIK the mechanical systems were on their way out in the 80's, completely gone by the 90s as computer electronics took over most engine functions. <S> But you can still buy condensers+points for old cars. <A> Not that anyone cares, but "condenser" seems to have faded from use from the mid 1930s through about 1950. <S> Dubilier was using capacitor by 1940 but <S> Allied catalogs didn't switch to capacitor until around 1950.
The term condenser is still used for the capacitor in older automotive ignition systems.
How to implement a Muller C-element in a LUT4 of an FPGA? I am practicing asynchronous circuit design, and I would like to run some simple experiments by building simple circuits using a Spartan-3 FPGA. I am wondering how one can implement a 2-input and 4-input Muller C-element in an FPGA using LUT4 primitives? Will it be hazard free? <Q> Looks like it should be possible to implement up to a 3-input C-element on a Spartan 3. <S> On a Spartan 6 with 6-input LUTs, you should be able to implement 5-input C-elements. <S> Now, I'm not entirely sure if the synthesizer will handle the feedback path correctly or not. <S> I would recommend synthesizing a single gate and then checking the mapped schematic to ensure that it is implemented correctly. <S> If that doesn't work, then you may have to directly instantiate LUT primitives for your C-elements. <S> This probably won't be so bad if you just do it once for a C-element module that you can use many times. <S> The other problem could be the routing of the feedback path. <S> Not sure what could be done about that though. <S> Maybe setting maxdelay would help. <S> Maybe not. <S> You'll just have to do some experimenting. <S> Now, timing-driven place and route is an entirely different animal. <S> I have no idea how that will work with an asynchronous design. <S> The result could be very sub-optimal. <A> The logical functionality is the easy part. <S> For example, to create a 2-input C-element with inputs A and B and output C, it suffices to implement an OR-AND network (you can do this with an FPGA LUT4) where C is the output of a 3-input AND and each of the inputs of this AND connects to the output of a 2-input OR. <S> In the first OR, the inputs are A and B, in the second OR the inputs are A and C, and the last OR has as inputs B and C.
<S> The tricky part is to implement the feedback wire (C to both the 2nd and 3rd ORs) in a way that the forks of the feedback wire (wire branches to multiple places) are "isochronic", i.e. in a way that the delays in all branches of the feedback wire are within some computed bound. <S> You can find more about how to achieve this in a paper we wrote about the implementation of asynchronous controllers in FPGAs (published in the ICCD conference in 2007), available at ( http://www.inf.pucrs.br/~calazans/publications/2007_ICCD_Julian.pdf ). <S> A 4-input C-element can be designed likewise, but you will need at least a LUT5 (4 inputs plus the feedback wire), or you can build it by combining some 2-input C-elements. <A> There can be 2 types of C-elements; one is RS-latch based and the other is a generic gate based one. <S> Here is an example of the generic gate based one: module Celement(reset, A1, A2, Z); input reset, A1, A2; output Z; LUT4 #(.INIT(16'h00E8)) LUT4_inst ( .O(Z), .I0(A1), .I1(A2), .I2(Z), .I3(reset) ); endmodule
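The INIT mask in that snippet can be verified without a synthesizer. A small Python check (my own helper, modelling a LUT4 as a 16-entry truth table addressed by {I3,I2,I1,I0}) confirms that 16'h00E8 is the 3-input majority of A1, A2 and the fed-back Z, with reset forcing the output low:

```python
INIT = 0x00E8  # the LUT4 mask from the Verilog above

def lut4(init, i3, i2, i1, i0):
    """One output bit of a LUT4: INIT bit selected by address {I3,I2,I1,I0}."""
    return (init >> (i3 * 8 + i2 * 4 + i1 * 2 + i0)) & 1

# I3 = reset, I2 = fed-back output Z, I1 = A2, I0 = A1.
for z in (0, 1):
    for a2 in (0, 1):
        for a1 in (0, 1):
            assert lut4(INIT, 0, z, a2, a1) == int(a1 + a2 + z >= 2)  # C-element
            assert lut4(INIT, 1, z, a2, a1) == 0                      # reset wins
print("0x00E8 = resettable 2-input C-element")
```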
Implementing a C-element on LUT-based FPGAs can be done, although it is not very simple.
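The INIT constant in the LUT4 instantiation above can be cross-checked by enumerating the truth table in software. A minimal sketch (the helper name is illustrative), assuming the pin order used above: I0=A1, I1=A2, I2=Z (the feedback), I3=reset:

```python
def c_element(a, b, prev):
    """Muller C-element: the output follows the inputs when they agree,
    and holds its previous value when they disagree."""
    return a if a == b else prev

# Build the 16-bit LUT4 INIT word for pins I0=A1, I1=A2, I2=Z, I3=reset.
# With reset asserted the output is forced low.
init = 0
for addr in range(16):
    a1 = (addr >> 0) & 1
    a2 = (addr >> 1) & 1
    z = (addr >> 2) & 1
    reset = (addr >> 3) & 1
    out = 0 if reset else c_element(a1, a2, z)
    init |= out << addr

print(f"16'h{init:04X}")  # 16'h00E8
```

Bits 3, 5, 6 and 7 are set (the majority function of A1, A2 and the fed-back Z), and the reset half of the table is all zeros, reproducing the 16'h00E8 above.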
Running 3-6V motors with 7.2V I'm using the popular DIY motor: whose operating voltage is claimed to be between 3-6V. I'm using a battery of 7.2V. I know that running the motor outside the recommended voltage range might shorten its lifespan. I'm planning to use it for the next couple of months, about 10 hours of total usage. Will it be safe to say that the motor will still be able to withstand this voltage? <Q> No. <S> Why should it? <S> You're operating outside the manufacturer's ratings so you can expect it to fail. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> Two-diode voltage dropper. <S> Wire two silicon diodes in series with the circuit. <S> They will drop 0.6 to 0.8 V each and run the motor at about 5.8 V. <A> Overdriving the motor will affect its life by an unknown amount. <S> It may live for a year, a month, a day, or immediately burn itself out. <S> Longer periods of time mean more heat buildup. <S> A quick fix is to lower the voltage through the use of two diodes in series. <S> Standard 1N400x should be fine. <S> At 0.7V per diode, that's a 1.4 volt drop from the nominally 7.2V battery, giving you 5.8 volts at the pager motor. <A> You should not use any component outside of its absolute maximum ratings. <S> The effects are not to be underestimated. <S> Suppressor diodes are used in all kinds of devices. <S> They are wired in parallel with the component to be protected. <S> If the voltage applied is higher than the breakdown voltage of the suppressor diode, it will "break down" and conduct so that the voltage will drop and the component is protected. <S> For more information click here <S> However, these diodes are for transient overvoltages, not continuous! <S> So the solution would be to adjust the input voltage. <S> Is the motor directly driven from the battery without any electronics? <S> Then you should use at least two normal diodes in series with the motor. <S> Watch out for the maximum current that is drawn.
<S> You could also switch to a 5V battery pack; these are getting really common with the USB power banks these days.
It also depends on how often you turn the motor on, and for how long each time. A very common way to protect components from overvoltage is the proper use of suppressor diodes or zener diodes.
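The two-diode fix suggested above is easy to sanity-check numerically; the 0.7 V per diode is the usual rule of thumb, and the real drop varies with current:

```python
battery_v = 7.2
diode_drop = 0.7          # rule-of-thumb forward drop of a 1N400x silicon diode
n_diodes = 2

motor_v = battery_v - n_diodes * diode_drop
print(round(motor_v, 2))  # 5.8, back inside the motor's 3-6 V rating
```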
How does the OR gate work? I'm asking this in reference to multiplexers where there are multiple inputs to the final OR gate. Say I have two different signals to an OR gate. One is at 4V and the other at 5V. So if I'm letting both those signals pass at the same time, what should be the reading of the OR gate's output? Will it be 4.5V or simply 5V? <Q> simulate this circuit – Schematic created using CircuitLab <S> If your OR gate were something as crude as Figure 1 <S> the output voltage would be the average of the input voltages. <S> This might be good enough for an application that just needed to know if either input was greater than zero volts <S> but it's not good for logic circuits. <S> Figure 2 shows a better OR gate in that either input going high will provide current to the output. <S> The problem is that there's a voltage drop because of the diode, and if we were to propagate this signal through several stages the voltage drops would add up, making the signal unusable. <S> While not advised for proper logic circuits, this can be a handy trick where an OR gate is required in a circuit that doesn't suit logic-level chips or where, for some reason, the designer can't fit in a full 4-gate chip. <S> Figure 3 shows a logic chip OR gate. <S> Internally some transistors are used to detect the input signals and, if they exceed a certain threshold voltage, the output transistor is switched. <S> This has the advantage that the output can now switch rail-to-rail and that the drive signal has no (or manageable) voltage drop. <S> Each logic family has a 'fan-out' capability that tells you how many inputs an output can drive reliably. <S> Figure 4. <S> CMOS OR gate. <S> Consider how the CMOS OR gate of figure 4 works. <S> If A and B are both low, Q1 and Q2 will be off (open-circuit) and Q3 and Q4 will be on (short circuit). <S> Notice the 'o' inverter symbol on the gates of Q3, Q4 and Q5. <S> Point 'C' will be pulled high to Vdd.
<S> This will turn off Q5 and turn on Q6, giving a low-resistance path between Q and Vss. <S> If either A OR B turns on, one of the Q3 / Q4 transistors will turn off while one of the Q1 / Q2 transistors turns on, pulling 'C' low. <S> This will turn on Q5 and turn off Q6, giving a low-resistance path from Vdd to Q. <S> The point is that the output is 'driven' high and 'driven' low, resulting in clean switching and the ability to provide enough current to reliably switch any downstream devices. <A> In your case, if the output were unloaded, it will approach the positive power rail. <S> Different logic families and devices have widely differing drive capabilities. <S> Without a part number to identify even the family <S> (I can see it is a 5V part, and that is about it), I cannot be more definitive. <A> It will show the output as 5 V because it takes both 5V and 4V as high potential. <S> The gate circuits have a defined range of low potential (e.g. 0-2.2V) and high potential (e.g. 3-5V) to give inputs and outputs in digital format. <A> While the answers you've gotten are all correct, the key message you should be getting is that the answer to your question does not matter. <S> An OR gate has a defined minimum input high voltage (usually just called Vih(min)). <S> Any voltage above this value is considered a logical high. <S> If Vih(min) is 4 V, it doesn't matter if the input voltage is 4.5 or 5 or 4.0001 V, they're all considered logical high values. <S> And an OR gate has a defined minimum output high voltage (usually just called Voh(min)). <S> If at least one of the input voltages is high, and you are using the part correctly (not overloading the output, not using an inadequate power supply, etc.) <S> then the output voltage will be above Voh(min). <S> How much above?
<S> It doesn't matter, because Voh(min) is always greater than Vih(min) for gates in a given logic family, so the exact output voltage of the OR gate is irrelevant: it will be above Vih(min) for the gate that's connected to its output. <S> As other answers have noted, a CMOS logic gate will tend to pull the output very near Vdd <S> when outputting a high voltage if the load is very light (high impedance). <S> I think nobody has mentioned yet that a 5 V TTL gate will actually not pull its output particularly close to Vcc (maybe around 3.5 V, IIRC), unless the load is really minuscule (greater than 1 megohm, maybe). <S> But this is okay because the Vih(min) for TTL is very low, maybe 1.5 or 2 V (check your datasheet for the actual number).
Provided the inputs to a gate are at valid logic levels , the output will go to the level it can, depending on load current, independently of the input levels.
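The Vih(min)/Voh(min) argument above can be written out as a simple check; the voltage figures below are illustrative examples, not values from a specific datasheet:

```python
def can_drive(voh_min_driver, vih_min_receiver):
    """A driver reliably drives a receiver when its guaranteed output-high
    level is at least the receiver's required input-high level."""
    return voh_min_driver >= vih_min_receiver

# Illustrative: a TTL output (Voh(min) ~2.4 V) into a TTL input (Vih(min) ~2.0 V).
print(can_drive(2.4, 2.0))   # True: within one family this always holds

# Illustrative: the same TTL output into a 5 V CMOS input needing ~3.5 V.
print(can_drive(2.4, 3.5))   # False: the classic TTL-to-CMOS incompatibility
```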
Question about PCB layers I am designing a 4-layer PCB for the first time. Searching the web, I learned that normally we place layers in the following way: TOP|GND|VCC|BOTTOM. Will there be a problem if I place GND at the bottom? So that it works as below: TOP | VCC | BOT | GND ? <Q> Well, it won't be "BOT" anymore :) <S> (It would typically be known as MID or MID2). <S> Keep in mind that it will make probing the signal layer considerably more difficult (you are limited to probing at vias). <S> Assuming a typical stackup where the middle planes are separated by a thick substrate, there really isn't much of a difference. <S> You ought to have slightly better noise immunity and EMI shielding on the inner layer, though. <S> You haven't specified what flavour of signals you are dealing with; as long as it's not too exotic, the effects of a change like this should be negligible. <A> You could do that, but it doesn't make sense to me, unless you have some kind of weird application. <S> Doing Signal-Plane-Plane-Signal <S> gives you: Excellent high-frequency power delivery thanks to the capacitor formed from the two inner planes (think about it -- two large sheets of copper separated from each other by a thin dielectric). <S> Each signal layer has an immediately adjacent reference plane layer, making microstrip transmission line design easy; or more simply, return currents will enjoy a low impedance path barring any weird plane splits. What you've suggested will rob you of the first item, and create an interesting situation for the second, where the stackup would dictate which of the two plane layers your signals will use as a reference plane (based on the distance to each). <S> You can always do a large flood pour of GND if you need it for some reason (thermal?) on the bottom layer, but unless you have a compelling reason, I'd stick with SPPS as the four-layer stackup of choice.
<S> I have read briefly about PSSP stackups, but I was not fully convinced by the few sentences that claimed improved EMI performance by burying all the signals inside; <S> while it's true that solid plane layers would probably help in that regard, I personally don't think it'd be worth the routing hit -- you would have to via everything. <A> With 4 layers, if you do have very fast edges and you need the inter-plane capacitance, this means you must have your 2nd and 3rd layers very close together (8-10 mils apart). <S> If you are going with a standard 62mil-thick PCB, this means your top and bottom layers are very far from your reference planes. <S> Everything is a compromise. <S> So what you can do is have Layers 1, 2, 3 close together. <S> All your fast edges would be contained on the top layer, so that you have access to a reference plane and have the coveted inter-plane capacitance. <S> Layer 4 would be treated as if it were a single-layer board. <S> You would need to run a return path adjacent to each signal. <S> So you could have ground on the 4th layer if you choose to, <S> but it doesn't make much sense if you already have a ground in one of the mid layers. <A> I've always found Henry Ott's techtips a valuable source of information. <S> Long story short: you lose the plane capacitance, which is a very low-inductance source of power. <S> The faster your rise times are, the more you want that plane capacitance.
In general, this should be alright.
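The inter-plane capacitance discussed above is just a parallel-plate capacitor, so its rough size is easy to estimate. A sketch with assumed numbers (100 mm x 160 mm planes, 10 mil FR-4 spacing, relative permittivity ~4.5):

```python
EPS0 = 8.854e-12      # permittivity of free space, F/m
ER_FR4 = 4.5          # assumed relative permittivity of FR-4

def plane_capacitance(width_m, length_m, separation_m, er=ER_FR4):
    """Parallel-plate estimate of the capacitance between two solid planes."""
    return EPS0 * er * width_m * length_m / separation_m

c = plane_capacitance(0.100, 0.160, 0.254e-3)   # 10 mil = 0.254 mm
print(f"{c * 1e9:.2f} nF")  # about 2.51 nF
```

A few nanofarads sounds small next to bulk decoupling, but this capacitance has almost no series inductance, which is exactly what fast edges need.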
What method do you suggest for prototyping asynchronous circuits? I was surprised and, to a degree, shocked to find that there is no proper established tool for designing and prototyping asynchronous circuits. I keep searching using Google and other means to find a good method to design VLSI asynchronous circuits, but so far the searches have failed to produce an answer. There are some abandoned tools like Balsa, etc. for automating VLSI designs, but they are totally undocumented and hard to use. What I am looking for is something like the FPGAs that we have in the synchronous world. Anyway, I'd appreciate it if you shared the name of a reliable tool, and prototyping hardware that eases the burden of asynchronous circuit design. <Q> The Theseus Logic NCL has been mentioned; there was also Handshake Systems (a Philips spin-off) as well as Fulcrum Microsystems and Caltech. <S> There was an asynchronous ARM processor called Amulet as well. <S> And Sun Microsystems had a processor design team for this as well, for a clockless SPARC. <S> I would call these clockless designs to avoid the confusion between logic design like ripple counters and these types of circuits. <S> But in general either term is used. <S> However, if you have a properly designed library of core cells, this top-level abstraction/description can become trivial. <S> The core issue is that if you've designed a system that allows each cell to propagate forward a signal that says "result good" as well as propagating backwards "system available", the system self-clocks and as such can be designed very much like software, without concern for race conditions or timing for that matter. <S> So the tools used would be as simple as SPICE for cell-level (transistor-level) design and C for compiling into a set of primitives to be placed. <S> For the life of me I can't find the C-based tool (open source) that was used.
<S> Look to people like Wesley Clark (he passed away recently) as well as Ivan Sutherland and Karl Fant (mentioned elsewhere too). <A> If a register is clocked with a system clock, it would be considered synchronous. <S> If that same register were clocked directly from a gate, logic circuit or generally anything besides a system clock, it would be asynchronous. <S> Altera's registers can be clocked from multiple system clocks, or by logic. <S> You can build whatever type of gate circuitry you desire... <S> It's been my experience with most kinds of ASICs or FPGAs that each time it is compiled, something is routed differently. <S> Thus propagation delays are always changing. <A> An FPGA is the right hardware. <S> But you won't be able to use the synchronous-focused synthesis software, because it makes the wrong transformations. <S> For example, an FPGA is perfectly capable of forming an oscillator built with an inverter chain. <S> But if you define that inverter chain in e.g. VHDL and use one of the standard compilers, "NOT-gate pushback" will eliminate inverters pairwise and leave you with only one, and the device won't oscillate. <S> You may have to write some of your own synthesis software, which will be possible if you get enough information on the bitstream. <S> I'd look into other research efforts that operate on the bitstream rather than the behavioral description -- <S> things like glitch detection and reliability analyses are highly dependent on the mapping chosen by the synthesizer. <S> Probably some work in the area of redundant fault-tolerant logic has already worked out some custom mapping techniques, since common product term elimination is one of the standard transformations performed by a traditional synthesizer, and absolutely destroys a redundant design.
<S> When you control the usage of the FPGA logic element primitives such as lookup tables and local and global interconnect, you'll be able to use the inherent delays to realize your asynchronous design. <S> Your optimization problem is a lot more difficult than fitting with the goal of meeting setup-and-hold times, but that's what makes it research. <A> Depending upon the complexity of your circuit... <S> If your design is mostly digital, you might look at using Altera's Quartus system. <S> Input your design with graphical and/or VHDL tools using asynchronously clocked registers, or use just logic gates. <S> Add dummy buffers, gates, signal pins etc. as needed to delay signal paths to match whatever you need <S> (assuming your design is slower than their fastest CPLD gate delays, <5 ns). In many years of designing with their chips, I never found an errant simulator result. <S> Smaller designs can be done with their free tools.
DC (Design Compiler) from Synopsys, as well as Merlin from FTL Systems, also used to be available.
Does powering a module with 5v instead of 3.3v affect its logic level? I bought a NEO-6 u-blox 6 GPS Module ( datasheet , pages 9 and 14), which has a standard serial connection. I have a USB to serial converter, which is 5v logic. This board says it can be supplied 3v-5v, and I have seen people say that it is 3.3v logic , but these people always say to power the board from 3.3v. My question is: If I power the module with 5v, will the logic become 5v tolerant? <Q> With no datasheet, it's difficult to say. <S> Generally speaking, V In High max will be VCC + 0.6 maximum. <S> And V Out High will be at least VCC * 0.7 to max VCC. <S> This is true 90% of the time. <S> But you never know if the IC <S> has multiple internal voltage levels. <S> Some ICs may have a 5V VCC, which is regulated internally to 3.3 and 1.8, and the logic will be tied to the 3.3v rail. <S> And that module has a SOT-23-5 IC, which may be a voltage regulator. <S> So while the module is 5V in, the actual IC runs on 3.3v. <S> So the datasheet is important here. <S> Update : The IC's datasheet confirms what I said above. <S> Max 3.6V VCC for that specific GPS IC. <S> The 5-pin SOT-23-5 is likely the MIC5218-3.3 3.3V low-dropout linear regulator. <S> No 5V tolerance at all on the data pins. <A> That depends on the circuit. <S> If it is a 3-5V chip, it will in general adjust its logic levels. <S> For your case, the datasheet of the GPS module ( http://www.kayraelektronik.com/download/gps-moduller/NEO/NEO-6_DataSheet_(GPS.G6-HW-09005).pdf ) states that the module itself is 3.6V maximum, so <S> I suspect that the PCB contains a voltage regulator. <S> The electrical specs do not state that the pins are 5V tolerant. <A> A schematic for a similar (the same?) <S> board is at this website (posted by G4ZFQ, who cited K9IVB as the source of the schematic). <S> It shows that the I/O pins are directly connected from the GPS module to the board's I/O pins.
<S> The module's datasheet states that the I/O terminal's maximum voltage is 3.6 V, so it would not be 5 V tolerant. <S> The board does contain a 3.3V regulator that is used to convert the input power supply voltage (3-5V) to 3.3V, but the GPS module itself never gets 5V.
If it is a 3V chip with a 3V low-drop regulator it will never really run on the 5V you supply.
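Since the data pins are not 5 V tolerant, the 5 V converter's TX line needs level shifting before it reaches the module's RX pin. A resistor divider is the simplest option; the 1 k/2 k values below are a common choice, not something from the module's documentation:

```python
def divider_out(vin, r_top, r_bottom):
    """Output of an unloaded resistive divider: Vout = Vin * Rb / (Rt + Rb)."""
    return vin * r_bottom / (r_top + r_bottom)

# A 1k/2k divider knocks the 5 V TX signal down to ~3.33 V,
# safely under the GPS module's 3.6 V absolute maximum.
vout = divider_out(5.0, 1_000, 2_000)
print(round(vout, 2))  # 3.33
```

The module's 3.3 V TX output can usually drive the 5 V converter's RX directly, since TTL-style inputs treat anything above about 2 V as high.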
Wiring LEDs in series and parallel I am looking to build a light stick. I am quite new to electronics and have just been experimenting so far. My light needs to be portable so I am using a 3.7v LiPo battery (I am into RC so have some lying around). I initially thought that wiring my LEDs in parallel would be best as it requires no voltage boost but it seems like it will use more battery that way. I looked at linking them up in series but I have about 16 and need a voltage of around 51-52V to run them in series. From what I have read this uses fewer amps for better efficiency but I don't know if this is correct. The voltage regulators seem to get quite bulky when you go up to a higher voltage. What I was thinking of was creating segments of LEDs in series and putting the segments in parallel using multiple step-up voltage converters in one project. Would it be ok to use more than one, and would you recommend using a pre-made one or incorporating it into my design and having it on the board? <Q> I initially thought that wiring my LEDs in parallel would be best as it requires no voltage boost but it seems like it will use more battery that way. <S> Not really. <S> Assuming you have 16 leds with a 3.2V forward voltage @ 20mA, in series you have: 16 * 3.2V = 51.2V, and 51.2V * 0.02A = 1.02 Watts. <S> In parallel, you have 16 * 0.02A = 0.32A or 320mA, and 320mA * 3.2V = 1.02 Watts. Same power usage. <S> But now you have to factor in efficiency. <S> A boost circuit will not be 100% efficient. <S> You are normally looking at 80~90% efficiency with a switching boost circuit. <S> Plus the current limiting resistor used (figure 0.8V dropped): 0.8V * 0.02A = 0.016 Watts, and (1.02 Watts + 0.016 Watts) / 0.85 = 1.22 Watts.
<S> And for the parallel leds, we do the same for the resistors used (figure 0.5V dropped): 0.5V * 0.02A * 16 = 0.16 Watts, and 0.16W + 1.02W = 1.18 Watts. <S> In the end, the higher-voltage series led string wastes slightly more power from the battery than the lower-voltage parallel led strings. <A> Two diodes in parallel will have the same voltage applied across them (i.e. forward voltages are the same), but due to differences in the diodes (manufacturing variation, thermal path differences, etc), one will draw more current than the other. <S> The negative temperature coefficient will cause the diode drawing more current to draw even more current, and so forth, eventually causing it to overheat if left unchecked. <S> Boosting from 3.7V to ~50V could be tough, so you may have luck with using a few smaller strings of LEDs. <S> A tri-channel boost LED driver such as the LT3797 would keep the boost ratio (V_out / V_in) reasonable. <A> As you mentioned, a step-up led driver would be the best option. <S> You could use an IS31BL3508A since it can support 26V led strings, so you could build 2 strings of leds, but you need to be sure that the current of the leds does not exceed the maximum output current (in this boost converter it is 20mA). In the datasheet of the converter there are several example circuits. <S> Regards.
It's generally ill-advised to place two diodes (including LEDs) in parallel due to the negative temperature coefficient of a diode - a thermal runaway can result.
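The series-versus-parallel arithmetic from the first answer can be reproduced in a few lines, using the same assumptions (3.2 V / 20 mA LEDs, 85% boost efficiency, 0.8 V and 0.5 V resistor drops):

```python
n_leds = 16
vf, i_led = 3.2, 0.020            # per-LED forward voltage (V) and current (A)

led_power = n_leds * vf * i_led   # 1.024 W delivered to the LEDs either way

# Series string: one resistor dropping ~0.8 V, behind an ~85% efficient boost.
series_in = (led_power + 0.8 * i_led) / 0.85

# Parallel strings straight off the battery: ~0.5 V dropped per resistor.
parallel_in = led_power + 0.5 * i_led * n_leds

print(round(series_in, 2), round(parallel_in, 2))  # 1.22 1.18
```

As the answer concludes, the boosted series string draws slightly more from the battery, almost entirely because of the converter's efficiency.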
Electronics in high temperature - operating 30 mins - 2 hours, up to 500 °F - possible? Would electronics survive if the ambient temperature of the environment was between 120 °C (250 °F) and 260 °C (500 °F) and the operating time was between 30 minutes and 2 hours? After this time the electronics would cool back to room temperature. As others have mentioned, items going through reflow would hit these temperatures, but only for a short period of time. Of course this would be based on "normal" components, not "space grade" items. Would some kind of coating help? Something like High Temperature Epoxy Encapsulating & Potting Compound 832HT Technical Data Sheet . <Q> This is well beyond the ratings of most parts. <S> You can expect outright failures, major departures from guaranteed specs, flaky (e.g. <S> partial) operation, huge leakage and so on. <S> Unless you buy qualified parts, you are on your own, so you are looking at major costs, and it may not be possible to thoroughly test some parts without inside information. <S> Downhole instrumentation can operate at very high temperatures, but parts that are qualified for that operation are very expensive (e.g. <S> Honeywell) and have rather disappointing performance to boot. <S> Another option is to keep the electronics themselves cool, for example by use of good insulation and a phase-change material. <A> We have to mount electronics on the inside of jet engines (the cooler areas) and we use cooling air fed via a pipe. <S> There isn't an option for us - if we want functionality for more than a few seconds we have to cool the electronics. <S> We use normal temperature rated components. <S> Reflow does create high temperatures but remember the parts are not powered when this occurs. <A> "Would electronics survive? <S> " <S> Yes, if the datasheet says so... Why on earth would the manufacturers do this to you? <S> Why would they jot down such an awful requirement? <S> Because, when the temperature rises, the integrated circuits fail. <S> Why do they fail?
<S> From the wiki : Electrical overstress Most stress-related semiconductor failures are electrothermal in nature microscopically; locally increased temperatures can lead to immediate failure by melting or vaporising metallisation layers, melting the semiconductor or by changing structures. <S> Diffusion and electromigration tend to be accelerated by high temperatures, shortening the lifetime of the device; damage to junctions not leading to immediate failure may manifest as altered current-voltage characteristics of the junctions. <S> Electrical overstress failures can be classified as thermally-induced, electromigration-related and electric field-related failures. <S> Another reason is humidity: get a little water in a small space and then turn the temperature up, and you just made popcorn! <S> Water gets into everything <S> (unless you actually take some prevention; they don't put the humidity sensors in the IC packaging for no reason). <S> I've talked with other engineers with intermittent failures. <S> The conversation is the same; they forgot to do a few key things like: 1) ESD prevention, 2) humidity control, 3) thermal profile control. After they control these things, the intermittent problems go away; if you want to go in the other direction, you will be creating problems for yourself. <S> Would it be acceptable to have a 1% failure rate? <S> What about 0.1% or even 0.001%? <S> You are more than welcome to try it with the components you have, and you are more than welcome to play Russian roulette. <S> But be prepared to deal with the consequences. <S> Manufacturers know why their chips fail; they have teams of people and equipment to rip off the epoxy layers, look at their ICs and determine why they fail. <S> Then they write requirements; the absolute maximums and the temperature profile for the IC packaging are a bible for ensuring your components don't fail. <S> Of course you have options: price vs temperature.
<S> They make components that can take abuse and have appropriate materials and manufacturing methods to take such abuse. <A> A water jacket will never get hotter than 100°C — at least, until it runs out of water. <S> You would have to figure out how much heat will flow into the jacket from outside during the operating period (thermal insulation will help reduce it) and make sure you have enough water to absorb that amount of heat. <S> You'll need a way to vent the steam, as well.
It's possible to design an electronics package that will survive an external temperature of 260°C for a substantial period of time, by keeping the internal temperature to something reasonable like <125°C, but that's more of a mechanical engineering problem than an electronic one.
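Sizing the water jacket mentioned in the last answer is a short energy-balance calculation; the 200 W heat inflow below is an assumed figure that would have to be measured or estimated for the actual insulated enclosure:

```python
C_WATER = 4186.0   # specific heat of water, J/(kg*K)
L_VAP = 2.26e6     # latent heat of vaporization, J/kg

def water_needed_kg(heat_in_w, duration_s, start_temp_c=25.0):
    """Mass of water that absorbs the given heat inflow by warming to
    100 degC and then boiling off (the steam must be vented)."""
    energy = heat_in_w * duration_s
    absorbed_per_kg = C_WATER * (100.0 - start_temp_c) + L_VAP
    return energy / absorbed_per_kg

m = water_needed_kg(200.0, 2 * 3600)   # assumed 200 W leak for a 2-hour run
print(f"{m:.2f} kg")  # 0.56 kg of water
```

Most of the absorption comes from the latent heat of vaporization, which is why boiling water off is such an effective (if messy) heat sink.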
Arduino code - Why do we need to read the rising edge? I'm having a difficult time figuring out why we need to read the rising edge instead of using both edges. We are doing a school lab using interrupts in Arduino code, and I understand that we need the interrupt to read the rising edge, but why are we reading the rising edge ? The question that I am specifically trying to answer is part B of the following: Write a paragraph explaining how you are measuring frequency. Be sure to include: a. why you must use an interrupt. b. why you enabled the interrupt on the RISING edge, instead of both transitions. c. Comment on the accuracy of your Arduino's frequency measurements <Q> Obviously, they expect you to answer based on the lab or lecture, not based on actual use. <S> There is no reason you have to interrupt on the rising edge instead of the falling edge, or the other way round, or both. <S> Or an edge interrupt instead of a level interrupt. <S> Your design determines which type of interrupt you need. <S> A signal that is active low will need a falling-edge or level-low interrupt, while an active-high signal will need a rising-edge or level-high interrupt. <S> Someone pressed a button, and you want to do <S> x when it's pressed, and y when it's released. <S> Or you're timing how long the signal goes from one state to the other. <S> Etc. <A> If you want to measure frequency, then you have to measure the number of pulses in a specified time, or the time period between two pulses. <S> In either case you have to start/stop at the same threshold of the signal, and using both edges won't give you the frequency, but the pulse duration. <A> You don't have to use interrupts, and you don't have to use only the rising edge for measuring frequency. <S> These are design choices of your system. <S> Think about how you are measuring the frequency. <S> There are two basic ways: counting cycles during a fixed time, or measuring the time per cycle.
<S> Think of how you'd do each of these on a human scale if the frequency you were trying to measure was slow enough. <S> The rising edge interrupt is like getting a click every cycle. <S> You could look at your watch, and count clicks for some fixed period, like 15 seconds. <S> The other method would be using a stopwatch. <S> You start it on one click, then stop it on the next. <S> Now consider how getting an interrupt (click) on both the rising and falling edges would mess up either of these procedures. <S> You could adjust for the difference, but what happens if the time from falling to rising edge is different than from rising to falling edge?
Sometimes you want to interrupt on both sides of the signal, like from a sensor or button.
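The stopwatch method described above is straightforward to express in code. A software sketch of the idea (plain Python rather than actual Arduino interrupt calls), using rising-edge timestamps only:

```python
def frequency_from_rising_edges(times_s):
    """Estimate frequency (Hz) from consecutive rising-edge timestamps (s).
    Rising-to-rising spans exactly one period, so an asymmetric duty
    cycle cannot bias the result."""
    periods = [b - a for a, b in zip(times_s, times_s[1:])]
    return len(periods) / sum(periods)

# A 50 Hz signal with a 30% duty cycle: rising edges arrive every 20 ms
# even though the high and low times (6 ms and 14 ms) are unequal.
edges = [0.000, 0.020, 0.040, 0.060, 0.080]
print(round(frequency_from_rising_edges(edges), 3))  # 50.0
```

Interrupting on both edges would instead hand you alternating 6 ms and 14 ms intervals, which only give the right answer after the duty-cycle correction the answer above hints at.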
Relationship between Vds and Vgs- MOSFET I've been experimenting around with a MOSFET and was measuring the \$V_{gs}\$ and \$V_{ds}\$ for a range of 0-5V. This is the plot I came up with: Circuit schematic: Now I'm trying to discern a relationship here but can't figure out what the constant value from 0-3.2V indicates. I've found 3.2 V to be the threshold voltage. Beyond the threshold voltage I see that Vds falls dramatically as Vgs increases until it levels off at about 3.8 V. Now I'm not very familiar with MOSFETs, which is why I'm simply trying to get a qualitative view of this \$V_{ds}\$ and \$V_{gs}\$ plot. What happens before the threshold voltage? What happens after it? <Q> The MOSFET is like a switch. <S> When your gate voltage is low (0V to <S> ~3.2V in your diagram), it is like the MOSFET isn't even there. <S> The switch is open. <S> You have a resistive voltage divider in your circuit which is determining the voltage across the MOSFET as well. <S> If you take the MOSFET out of the circuit, you will see the same voltage. <S> When you are above Vth, the MOSFET starts conducting and shorts out R2. <S> When the component is shorted, there is no voltage across it (well, a very small voltage across it). <S> If you placed a wire in place of the MOSFET, you would see the same thing. <S> The nearly vertical line that you see there is the change of the MOSFET between 'off' and 'on'. <S> This is the threshold voltage (Vth). <S> Overall, the MOSFET works like a switch. <S> Most people apply either 0V or 10V between gate and source. <S> There are some applications that work near the threshold voltage, but most are using MOSFETs as very fast switches. <A> There are three basic regions of operation for a MOSFET. <S> Simplifying a bit, they are: Cutoff (Vgs < Vt) -- <S> No current flows from drain to source. <S> Linear (Vgs > <S> Vt and Vds < Vgs - Vt) <S> -- Current flows from drain to source. <S> The MOSFET acts like a voltage-controlled resistor.
<S> This region is used for switching. <S> Saturation (Vgs > Vt and Vds > Vgs - Vt) -- current flows from drain to source. <S> The amount of current is proportional to the square of (Vgs - Vt), and is (almost) independent of Vds. <S> The MOSFET acts like a voltage-controlled current source. <S> This region is used for analog circuits like amplifiers. <S> In your circuit, R1 limits your drain current to about 1 milliamp, which is pretty small. <S> It looks like it only takes a Vgs about half a volt above Vt to get that much current. <S> If you want to see the relationship more directly, remove R2 and replace R1 with a much smaller resistor, or even just a current meter. <S> This will let you apply a fixed Vds. <S> Be sure to turn up Vgs slowly and carefully to avoid frying the transistor or resistor. <A> The flat line from 0 to 3.2V is the 9.09V <S> your 1K/10K divider provides when your FET is not conducting (Vgs below threshold). <S> The steep dropoff is the region where the transistor is conducting with a resistance comparable to your divider. <S> The flat region above 4V shows where the transistor is largely saturated, and the 1K resistor is doing all the current limiting it can... <S> meaning 10mA <S> off of a 10V supply. <S> You can see this approximate point on the FET curves where Vgs reaches a level where Id>10mA.
The amount of current is roughly proportional to both Vgs and Vds.
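The measured curve can be reproduced qualitatively with the textbook square-law model and the circuit described in the answers (10 V supply, 1 k on top, 10 k on the bottom in parallel with the FET). The threshold and transconductance values are assumptions chosen to match the described plot:

```python
VDD, R1, R2 = 10.0, 1e3, 10e3     # supply and 1K/10K divider from the answers
VT, K = 3.2, 0.025                # assumed threshold (V) and k (A/V^2)

def drain_current(vgs, vds):
    """Square-law MOSFET model: cutoff, linear (triode) or saturation."""
    vov = vgs - VT
    if vov <= 0:
        return 0.0
    if vds < vov:                          # linear region
        return K * (vov * vds - vds * vds / 2)
    return K / 2 * vov * vov               # saturation region

def vds_at(vgs):
    """Bisect for the drain voltage where (VDD - V)/R1 = V/R2 + Id(vgs, V)."""
    lo, hi = 0.0, VDD
    for _ in range(60):
        mid = (lo + hi) / 2
        if (VDD - mid) / R1 - mid / R2 - drain_current(vgs, mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

print(round(vds_at(0.0), 2))   # 9.09: FET off, plain 1k/10k divider
print(round(vds_at(5.0), 2))   # 0.23: FET effectively shorts out R2
```

Sweeping vgs from 0 to 5 V reproduces the shape of the plot: flat at 9.09 V below threshold, then a steep drop toward near-zero Vds.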
Can I connect +5V GND in reverse to get -5V? Why not? Since voltage is relative, if I have a +5V regulated source in my circuit board and need -5V some where, what will happen if I just connected the +5V supply connections in reverse? Note that the +5V is as an example only. <Q> Of course you can! <S> If you have an isolated part of circuit which needs GND and -5V, just connect its GND to your +5V and its -5V to your GND, and you'll be fine. <S> However, if you need +5V somewhere in your circuit and -5V somewhere else within the same circuit, you effectively require 10V between these two points. <S> That's why a single 5V supply will not be enough in a general case. <A> simulate this circuit – Schematic created using CircuitLab Figures 1, 2 and 3. <S> Your understanding is correct. <S> Provided the outputs are isolated (they aren't connected through ground, for example) they can be considered as being similar to batteries. <S> For dual-rail supplies they can be connected as shown in Figure 3. <S> simulate this circuit Figure 4. <S> A dual rail PSU circuit build with positive voltage regulators. <S> (Decoupling capacitors not shown.) <S> Note that this PSU has isolated secondary windings. <S> Link: Voltage and current regulators <S> (Elliott Sound Products, https://sound-au.com ). <A> You are proposing to do something like simulate this circuit – <S> Schematic created using CircuitLab <S> However, you might notice what happens at the common ground - the <S> +5 + and - get tied together and the power supply gets shorted. <S> If R2 is floating, of course, this will work fine, but that is very unlikely. <S> It is exactly the same as taking R2 and connecting it "backwards". <A> If you reverse the whole thing, you can get -5V everywhere. <A> What you have is two wires with potential difference of 5V. Which one you'll call GND and which one + or minus is completely up to you, and has no physical meaning. 
<S> But if you bring in some equipment that already has a defined GND, then you have to be careful. <S> For example, if you take a power supply, you can treat the red and black terminals however you like in your circuit. <S> But if you're going to use an oscilloscope in the same circuit you better make sure that the GND of the power supply and the GND of the oscilloscope are on the same spot or you'll blow your fuses and, depending on where you live, burn down the house :)
If you need +5V and GND and -5V, you'll need to do something else. Where you connect the ground is a matter of choice.
Effect of battery internal resistance on its energy efficiency I am trying to calculate a Li-ion/LiPo battery's energy efficiency based on its internal resistance (as far as I see from scientific papers, the battery's internal resistance rises with its ageing, meaning that the efficiency must decrease). Discharging: I use the most basic equivalent circuit and load: If I assume that the current is constant (the way they describe it here ), then I get: ef = (Vopen-Ir)/Vopen (probably I should integrate this over time) (I have cancelled the current and the charging time from both parts of the fraction). By the same method, the charging efficiency will be: V/(V+Ir) Therefore, if we want to calculate the round-trip efficiency (output power/input power), it will be the product of the two, giving (V-Ir)/(V+Ir) Am I doing this right? Does it mean that for every charging/discharging current, the efficiency is going to be different? I guess there are other factors affecting the energy efficiency, although I haven't quite found any formulas. How significant are they in comparison to internal resistance? <Q> Modeling battery charge and discharge processes is a very intricate science. <S> There are many models to estimate the behavior of a battery. <S> Using an internal series resistance can be useful to estimate a rough state of charge as well as the power efficiency when (dis)charging. <S> This model is not very exact; thus, when calculating the charge efficiency, the error will also integrate over time, leading to a large total error. <S> To understand the basic behavior of batteries take a look at the Peukert Effect (aka. rate-capacity effect) and the Recovery Effect. <S> In a nutshell: The Peukert Effect describes that one can get more charge out of a battery if it is discharged with a low (constant) current. <S> The Recovery Effect says that in periods of low/no discharge currents the reduced "useable" charge due to high current loads gets partially replenished.
<S> The reasons for both are the chemical processes in the battery. <S> If you want a very accurate model of a battery for your calculations look for electrochemical models (most notably DualFoil, based on the work of Doyle et al.). <S> For easier use with good accuracy, the (analytic) Kinetic Barrier Model comes to mind. <S> Also there are more sophisticated electric models filling the gap between the two aforementioned. <S> Edit: Calculating the Peukert constant <S> Given the capacities \$Q\$ and their respective run-times \$T\$ for two constant discharge currents \$I_a\$ and \$I_b\$, the Peukert constant \$k_P\$ can be calculated as \$k_P = \frac{\ln\frac{T_a}{T_b}}{\ln\frac{Q_b}{Q_a}+\ln\frac{T_a}{T_b}}\$. <S> The required values can be derived from the battery's specifications or actual measurements. <A> Am I doing this right? <S> Does it mean that for every charging/discharging current, the efficiency is going to be different? <S> The basic formula is correct. <S> However, internal resistance also varies as the battery charges/discharges and with temperature, so with a fixed resistance value it will only be accurate when cycling the battery at low current and over a fraction of its full capacity (and that is assuming you have an accurate measurement of internal resistance in that range). <S> Theoretically each cycle will have lower efficiency than the previous one, but since the battery degrades slowly the effect is very small. <S> I guess there are other factors affecting the energy efficiency, although I haven't quite found any formulas. <S> How significant are they in comparison to internal resistance? <S> Resistance is the only electrical parameter that causes power loss, so if your resistance value is accurate <S> then no other factors need be considered. <S> However in practice that resistance is affected by several factors such as temperature, current, and state of charge.
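The question's round-trip formula and the answer's Peukert expression are both one-liners, so they are easy to check numerically. A minimal sketch, assuming a constant current and a fixed internal resistance, per the simple series-R model being discussed:

```python
from math import log

def round_trip_efficiency(v_open, current, r_internal):
    """(V - I*r)/(V + I*r): discharge efficiency times charge efficiency."""
    drop = current * r_internal
    return (v_open - drop) / (v_open + drop)

def peukert_constant(t_a, q_a, t_b, q_b):
    """k_P = ln(Ta/Tb) / (ln(Qb/Qa) + ln(Ta/Tb)), from two constant-current
    discharge runs with run-times T and delivered capacities Q."""
    return log(t_a / t_b) / (log(q_b / q_a) + log(t_a / t_b))
```

For example, a cell with 3.7 V open-circuit voltage and 50 mOhm internal resistance cycled at 1 A comes out around 97% round-trip efficient under this model.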
<S> Voltage also varies with charge state, so you need an accurate voltage vs charge curve. <S> In practice, if the battery doesn't heat up significantly you can assume that the charge/discharge cycle is close to 100% efficient. <S> Charging is normally done at relatively low current so this is usually true. <S> Discharge current may be much higher, and then you will see more heating and higher temperatures. <S> However as temperature rises internal resistance reduces, so a continuous high current discharge may be more efficient than a pulsed discharge, even though the battery is running hotter! <S> This is particularly noticeable at low ambient temperature. <A> Charge and discharge (and their efficiencies) are not the same thing, and you cannot use the same number for both. <S> And internal resistance is mostly inapplicable to charging, but it's very useful for discharge. <S> In general, discharge efficiencies are greatest with the battery fully charged, while charging efficiency is least. <S> And vice-versa. <S> Series resistance has two components. <S> The first is the resistance of the electrode structure, which is usually (but not always) pretty negligible. <S> What really counts is the electrolyte's ability to replace exhausted chemicals in the reaction zone with fresh, unexhausted ions. <S> The limited mobility of ions in the solution puts a limit on how much current can be produced, although things like electrode structure also comes into play. <S> So series resistance is actually something of an artificial construct, which (to some degree inappropriately) applies Ohm's Law to the current limits produced by the electrochemical processes. <S> It's a very useful artificial construct, mind you, but it's not "fundamentally" accurate in the same way it is for a standard resistor. <S> Among other things, an exact value for series resistance depends on current level, state of charge of the battery, and temperature. 
<S> When a battery is fully charged, all of the solution's ions are ready for use (sorry for the imprecise figure of speech) <S> and it's easy to find replacement ions, so the series resistance is low. <S> At very low charge levels, the series resistance goes up. <S> Within limits, most batteries work better at the high end of their temperature range than at low. <S> For charging, the opposite effect occurs. <S> With most of the electrolyte replenished, there aren't many ions floating around to accept the changes induced by the charge current, so more and more of the charge current is "wasted", and charge efficiency goes down. <S> And generally, series resistance is not considered a useful concept when dealing with charging.
Internal resistance increases as the battery ages, so an old battery will get hotter - indicating lower discharge efficiency.
How to control the driving frequency when working with high power ultrasonic transducers I would like to know the simplest solution to dynamically tune the driving frequency of an ultrasonic transducer to its resonance frequency. My amplifier has a power meter that I can read an optimize via LabView but I would like to know if I can do the same using a multimeter and a controller that changes the frequency of the function generator. Can this idea be applied using a multimeter and a controller? I have the following setup and in this question there is more information about the problem : <Q> An idea once I had, but never tried, is to treat an ultrasonic transducer like a high-power crystal oscillator. <S> Your typical crystal oscillator circuit looks like this: simulate this circuit – <S> Schematic created using CircuitLab <S> Simplistically, the crystal (along with the 2 capacitors) provides a 180 degree phase shift at its resonant frequency and this determines the output frequency of the oscillator. <S> So why not try something similar with your transducer? <S> You would of course need to use something significantly more powerful than a little logic inverter, and you would probably need an additional band-pass filter to make sure you don't end up with one of the transducer's harmonics, but I imagine it would look something like this: simulate this circuit <S> You may need to introduce some method of kick-starting it if there isn't enough 'natural' noise in the system to get it going, and the filter may need to have some gain to compensate for the low voltage across the sense resistor. <A> The graph above is taken from this interesting website. <S> I can't determine from your question what application you have <S> but, from the link (in the other question) to the type of transducers you use it <S> seems you will be series resonating the transducer and this means it has low impedance at resonance due to L and C being in series. 
<S> This means that the type of control circuit will look like this: <S> - Taken from here and this site also has some very useful information and an eBay link to a cheap one: - <S> But, if you are still intent on building your own you can use the series resistance method to generate a feedback signal to the front-end of a power amplifier. <S> Clearly the series resistance need only be about 1 ohm to prevent excessive power losses. <S> The signal will be maximum at series resonance and importantly in-phase with the drive voltage to the transducer. <S> This means a simple power amp will do the job but, with a method of controlling amplitude. <S> Amplitude needs to be controlled or the PA will go into saturation and it may damage the transducer. <S> It's a bit like a Wien-bridge oscillator needing amplitude control to ensure <S> sinewave purity. <S> The fed-back signal could be adjusted with a pot but, given the Q of the transducer, this is probably best achieved using a JFET: - Regarding the PA itself, make sure that the phase angle between output and input is small at resonance or the transducer will not run quite at perfect resonance. <S> This is usually done by ensuring the PA has at least 10x <S> the bandwidth of the running frequency. <A> It seems you aren't the first person to have this problem. <S> These guys have thought about it and patented a solution. <S> The solution is basically a microprocessor that controls the frequency of the drive signal, and a current detector in the drive circuit. <S> Generate an approximately correct signal, then hunt up and down while watching for a maximum of current flow. <S> You could do this by hand using a multimeter with a small adapter. <S> You put a current shunt in series with the drive signal ground, then use a small adapter circuit sort of like this: simulate this circuit – <S> Schematic created using CircuitLab <S> This converts the drive current to a voltage that you can measure with your multimeter.
<S> If you had a multimeter that could measure AC current at the drive frequency of the ultrasonic transducer, you wouldn't need the adapter. <S> But, I don't think there are any multimeters that measure AC current for much above typical powerline frequencies. <S> The diagram is much simplified and is only intended to show the concept. <S> You may need to use two stages to get enough gain, and the output filter could be made much better. <S> Given that you mention a 50Ohm drive, you may be up in the MHz range with your ultrasound, so maybe an opamp won't cut it <S> and you'll have to use something better suited to high frequencies instead. <A> This broadband energy excitation will make the transducer oscillate at its resonant frequency. <S> If your multimeter is capable of measuring the AC frequency content then you have your answer. <S> Alternatively, you mention in your linked question that you can measure reflected energy, so you could also consider measuring the echo energy. <S> Once you have a rough estimation of your resonance you could put an obstacle at a distance equivalent to the time needed for the drive-induced oscillation to have died out and then measure the energy from the reflected wave. <S> When you reach the maximum you will have tuned your driver circuit. <S> Note: this is a very generic answer as your diagram is also very generic. <S> Low side shunt measurement like in other answers is usually easier to handle but given that you are using a bench-top PA it might not be compatible with your setup. <S> I do believe that the above is generic and flexible enough for you to adapt to your setup constraints. <S> Other answers also mention that multimeters cannot measure frequencies above the AC line; while mostly true, there is one that I know of (the Fluke 170 series) going up to 100kHz. <S> Bottom line: without knowing your setup, giving clear directions is not easy.
<S> Last but not least, a DMM might not be the best tuning tool; a simple scope would help you converge faster.
It all depends on how you want to drive the transducer i.e. the application: - It appears that you can drive at resonance or anti-resonance which indeed does make it pretty similar to how you would use a crystal in an oscillator. What you can do is a very simple broadband high-energy excitation (i.e. a single pulse).
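The "hunt up and down while watching for a maximum of current flow" approach from the patented-solution answer can be sketched as a coarse-to-fine frequency sweep. Here `measure_current` is a placeholder for whatever shunt-plus-meter reading your setup provides; the resonance values and bandwidth in the fake transducer are made up for illustration:

```python
def find_resonance(measure_current, f_start, f_stop, coarse_step=1000.0, zooms=3):
    """Sweep the drive frequency and return the one with maximum current.

    measure_current(f) stands in for the shunt/multimeter reading (amps);
    after each sweep the search window narrows around the best point.
    """
    lo, hi, step = f_start, f_stop, coarse_step
    best_f = lo
    for _ in range(zooms):
        f, best_f, best_i = lo, lo, measure_current(lo)
        while f <= hi:
            i = measure_current(f)
            if i > best_i:
                best_f, best_i = f, i
            f += step
        # zoom in around the current best and repeat with a finer step
        lo, hi, step = best_f - step, best_f + step, step / 10.0
    return best_f

# A fake transducer with a resonance peak at 40 kHz, for bench-free testing:
def fake_current(f, f0=40000.0, bw=500.0):
    return 1.0 / (1.0 + ((f - f0) / bw) ** 2)
```

The same loop structure works whether the readings come from a LabView power meter or a hand-logged multimeter; only `measure_current` changes.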
Why Collector-to-Base currents ratio in a BJT transistor is always greater than 1? In fact, this question has been asked on the EE site , but it's not well-answered. I suppose it might be more on-topic here. According to this answer : Note that the holes injected into Emitter are supplied from Base electrode (Base current), whereas the electrons injected into the Base are supplied from Emitter electrode (Emitter current). The ratio between these currents is what makes BJT a current amplifying device - small current at Base terminal can cause a much higher current at Emitter terminal. The conventional current amplification is defined as Collector-to-Base currents ratio, but it is the ratio between the above currents which makes any current amplification possible. First off, Why collector current increases as base current increase? Is the former causes the later, or the later causes the former, or something else (voltage on electrodes, maybe) causes both? And here is my question, Why collector current always increases more than the increment of base current? Say after something changes, a extra holes are "injected into" emitter region, and b extra electrons are injected into base region. Then why b is greater to a ? <Q> Your Q apparently refers to a NPN transistor ('holes injected into Emitter'). <S> In a bipolar transistor (NPN or PNP; referring to NPN in this answer), when the base-emitter junction is forward biased, current flows. <S> This consists of holes injected from the base to the emitter, and electrons from the emitter to the base. <S> Transistors are constructed (richer doping of Emitter than Base) so that most of the current is carried by electrons rather than by holes. <S> Now, the holes injected into the emitter will find a dense field of electrons (emitter is heavily doped), and so will recombine quickly. <S> This requires replacement electrons to be supplied by the emitter terminal. 
<S> Electrons injected by the emitter into the base will find very few holes around -- the base is relatively lightly doped. <S> So, a relatively small amount of recombination occurs, although this does require holes and consequent base current. <S> As soon as these electrons arrive at the base end of the depletion region, they diffuse away from it. <S> Because the base is thin, this diffusion is 'fast'. <S> Any electrons that diffuse close to the collector-base junction will be swept across that junction (if the collector-base junction is reverse biased), because the field is such that it 'attracts' electrons from base to collector. <S> These electrons form collector current. <S> Thus there are two significant components of base current -- holes injected from B to E, and holes to recombine with some of the electrons injected from emitter to base <S> (there is a negligible additional component of reverse collector-base leakage). <S> While not equal, these values are generally similar (recombination current is usually lower than injection current). <S> Emitter current consists of holes recombining and electrons injected. <S> Because of the structure of the junction, the injection component dominates. <S> Collector current is primarily the injected emitter electron current, minus some small amount that is lost due to recombination. <S> So, because a) at the B-E junction electron injection is greater than hole injection, and b) electron recombination in the base is small, the collector current is a large (say 99 %) fraction of the emitter current -- <S> therefore the base current (which is the difference) is about 1 % of the emitter current. <S> These parameters differ from device to device, with temperature, and with some imperfections and other defects in devices, but the basic principles are consistent.
<A> The base-emitter diode carries current from both holes and electrons; for an NPN, the emitter (N type, electrons) <S> current is dominant because the emitter is heavily doped compared to the base. <S> There are lots of electrons in the emitter that move in response to the base-emitter voltage bias, and fewer holes in the base (moving in the opposite direction). <S> The base current must make up the outflowing holes in order that the transistor not lose the base-emitter bias voltage (there aren't any holes in the collector or emitter). <S> So, a roughly proportional base (hole) current must be supplied at the base wire, to the larger emitter (electron) current. <S> A second contribution to base current is the recombination of electrons from the emitter, which also depletes holes from the base, but without moving them (an electron 'falls' into a hole). <S> This contribution is also proportional to emitter current, and is minimized by keeping the base (P type, holes) region very thin; most electrons from the emitter travel through the region without recombining, and are then in the collector where they are... collected. <S> Both cause loss of base charge, have to be 'replaced' by base current, or the emitter bias (and current) turns off. <S> To summarize: Base current is from holes-to-emitter diode current, plus some emitter-sourced electrons causing recombination events in the base. <S> The base current just restores the base-emitter voltage condition after charge carriers move in to stay, or out and never return. <A> Answering questions: @LvW <S> Just out of curiosity: What if VCE = VBE? <S> C-B pn junction won't be reverse-biased then, so it won't attract electrons in the base region. <S> Thus, IC will be zero, and IE will be equal to IB? <S> But the C-B diode is not forward biased. <S> This is an application where the BJT is used as a diode and no "classical" amplification is possible (transition region between saturation and amplifying region).
<S> IC and IE are controlled and only controlled by VBE; IB is just a side product. <S> Once VCE is greater than VBE, its specific value does not matter, because the C-B junction is reverse-biased. <S> Am I right? <S> It does not matter too much - on the other hand: Look at the Ic=f(VCE) curves. <S> Ic slowly rises with VCE because of the Early effect. <S> "Given VBE, IE is fixed, and as a result, the sum of IB and IC is fixed. <S> When VCE < VBE, what IB and IC are depends on VCE. <S> The greater VCE is, the greater IC/IB is. <S> However, the value of IC/IB is capped by "beta", which is reached when VCE = VBE." <S> Is this right? <S> In this case (VCE < VBE) the C-B diode is forward-biased (conducting) and there is a small current Ic which has a direction opposite to the "normal" Ic direction. <S> Example: For VCE=0 we have a current Ic which is negative (the Ic=f(VCE) curves do NOT cross the origin!).
Collector current depends on base-emitter VOLTAGE, because that determines the dominant (emitter electrons) current source.
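The summary above ("collector current depends on base-emitter voltage") is what the Shockley diode relation expresses for a BJT in forward-active mode. A minimal sketch; the saturation current `i_s` and `beta` are hypothetical round numbers, not values for any particular device:

```python
from math import exp

def bjt_currents(v_be, i_s=1e-14, beta=100.0, v_t=0.02585):
    """NPN in forward-active mode: Vbe sets Ic; Ib is the small side product.

    i_s and beta are made-up device values; v_t is the thermal voltage
    at room temperature (~25.85 mV).
    """
    i_c = i_s * (exp(v_be / v_t) - 1.0)  # collector current set by Vbe
    i_b = i_c / beta                     # base current ~1% of collector current
    i_e = i_c + i_b                      # KCL: emitter carries the sum
    return i_c, i_b, i_e
```

Note how a small change in Vbe moves Ic exponentially, while the ratio Ic/Ib stays pinned near beta, matching the "99% / 1%" split described in the answer.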
Should I wire my electrical actuators in series, parallel or a combination of both? I'm working on a project that requires four (4) electrical linear actuators to work together. They must be driven at the same speed, simultaneously.Let's assume that I have four identical actuators with the following characteristics: -> 12V DC -> Max draw of 2.5A (Something like these... http://bit.ly/1QXNqve ) These guys are all rated for 200 pounds or more, which is about 4 times the lifting power I need, and so I figure I can drive them all from a single power source, with the following characteristics: -> 12V -> Max 6A (Something like this... http://amzn.to/22m8D7j ) Now, what I'm wondering is whether I should drive them in parallel or in series?My understanding is that running them in series will draw, at most, the max 2.5A, but at a decreased voltage. It makes sense that they would each get about 3V (12V / 4 motors). Would the decrease in voltage only manifest as a decrease in speed, or also a decrease in lifting strength? On the flip-side, running them in parallel will provide them all with the 12V they are designed for, but will limit their current to a combined 6A. It makes sense, then, that they would each get about 1.5A, assuming they are equally sharing the load (6A / 4 motors). Would the decrease in current only cause a decrease in lifting strength, or would it also cause a decrease in speed? I assume that a combination of the two would land somewhere in the middle? (i.e. two motors in series, parallel to another two motors in series) Please let me know if the logic isn't right. Makes sense to me, but my experience is that motors aren't exactly equivalent to resistors or other constant-draw components, particularly when one motor might have a slightly higher or lower draw/load than the others. The application requires four actuators, but only a combined total lifting capacity of about 250#. 
Speed is not a huge concern, they are only required to travel about 15 inches in roughly a minute or less. The quicker, the better, but it isn't a huge deal. Finally, is one method preferable over the other for maintaining nearly identical driving speeds? These actuators will be responsible for moving a platform, that must remain as level as possible while it is being raised and lowered. If the platform has a concentration of weight on one side/corner, will the opposing motors always drive faster? If I run them in parallel, it makes sense to me that the constant voltage should drive them identically, regardless of load. I am trying to make this as simple as possible, and so want to avoid additional electronics for monitoring and changing speeds with microcontrollers, etc. Just a simple up/down switch, and hopefully everything else falls into place. The single power source is desirable for space/cost savings, as well as any slight variances in output that two power supplies might have.Math, sources, experience are all welcome! <Q> If your actuators are DC brush motors, the question of when to wire in series and parallel will depend upon how you need things to behave. <S> Motors wired in parallel will tend to run at the same no-load speed , which will fall off somewhat with applied torque; motors wired in series will produce the same torque , which will fall off for all motors in the series based upon the total of all the speeds of all the motors. <S> To be a little more precise, a DC brush motor may be pretty accurately modeled as an ideal motor in series with a resistor and an inductor. <S> At any moment in time, the rotational speed of the motor will be directly proportional to the voltage across the "ideal" part of the motor, and the torque on the motor will be proportional to the current. 
<S> These are both bidirectional relationships, so changing the speed will change the voltage (on the ideal part of the motor) and vice versa; likewise changing the current will change the torque and vice versa. <S> While this model is not absolutely precise, the difference between most practical DC brush motors and an ideal model is in fact very nearly equivalent to a series resistor and inductor. <S> If they are not connected that way physically <S> but instead should run near the same speed, wiring them in parallel will cause them to do so, though if they are subjected to differing amounts of torque that may cause the rotational speeds to differ somewhat. <S> If you think in terms of the model described above (and if you're running things with unmodulated DC you may be able to ignore the inductance part) it should be pretty clear what is happening and how you need to wire things. <A> I would wire the actuators in parallel. <S> The chances are pretty good that the load on the actuators is NOT equal. <S> The actuator with the greater load will attempt to consume more current than the others. <S> If they are wired in series, the voltage across the actuator with the greater load drops. <S> This will cause it to run slower. <S> If the actuators are wired in parallel, the voltage across all the actuators remains the same. <S> The actuator with the greater load will still run slower than the others, but to a much lesser extent. <A> EDIT: <S> Your actuators are strong enough to lift 250 lbs when working in parallel, even though torque will decrease with the lower current. <S> Connecting motors in series would probably cause differences in rotational speed (they are probably not ideal).
If your actuators are physically connected such that they will run at the same speed, wiring them in series will balance the torque on them.
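The "ideal motor plus series resistance" model from the first answer makes the parallel-wiring behaviour easy to check numerically. The motor constant `k` and winding resistance `r` below are hypothetical round numbers, and the inductance is ignored, as the answer suggests is reasonable for unmodulated DC:

```python
def motor_speed(v_applied, torque, k=0.05, r=1.0):
    """Steady-state speed (rad/s) of the series-R DC motor model.

    V = I*R + k*w and T = k*I, so w = (V - (T/k)*R) / k.
    k (V*s/rad) and r (ohms) are made-up illustration values.
    """
    i = torque / k              # the load torque sets the current
    return (v_applied - i * r) / k

# In parallel, both motors see the full 12 V, so doubling the load torque
# on one motor only costs it a modest fraction of its speed:
w_light = motor_speed(12.0, 0.05)   # ~1 A of load current
w_heavy = motor_speed(12.0, 0.10)   # ~2 A of load current
```

With these numbers the heavily loaded motor runs about 9% slower than the lightly loaded one, illustrating why the parallel-wired platform stays nearly level even with an off-center load.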
Is there a way of powering an incandescent light bulb without AC mains? I'm looking to power a light bulb for a greenhouse project, but I do not have the option of using the AC mains (for safety reasons). Is there a way to power it on using a lower voltage? I have a solid state relay and a bulb and fixture lying around. Should I look into amplifiers? I'm using the bulb to provide heat to the greenhouse (the greenhouse is very small) and I'm looking to control it using an Arduino. <Q> Should I look into amplifiers? <S> Nope, amplifiers are for amplifying electrical signals . <S> What you want is electrical power . <S> The solid state relay is useful. <S> Control it from the Arduino. <S> If you don't know how, search Google and this site for some examples. <S> Let the solid state relay switch on/off a low voltage (12 V perhaps) halogen bulb or a car (headlight) <S> bulb <S> (these are mostly halogen as well). <S> If you cannot use a mains adapter use a 12 V (car) battery. <A> Get a car light bulb, those are rated at 12V. The most powerful ones (H4 or HB2) can output about 100W, but you can easily get less power ones if your greenhouse is really small. <A> You can get incandescent 12V lamps- <S> halogens (eg. <S> MR16) are very common and are relatively efficient. <S> Systems are available for low voltage landscape lighting. <S> The wiring will have to be much thicker than for 120V or 240V lighting- <S> imagine a modest 1000W of lighting at 120V- <S> the current draw would be 8.3A, so you could go almost 100' (200') of inexpensive AWG 16 wire with 50W loss maximum. <S> At 12V, the current draw will be 83A, so for 50W loss you could only go 9' (18' round trip) of thick AWG wire. <S> You could also consider more efficient (lumens/watt) <S> LED lighting, but that opens a can of worms with regard to what wavelengths your particular crop craves.
Power it from a 12 V adapter or transformer with sufficient power (at least as much as the lightbulb needs).
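The wiring-loss arithmetic in the landscape-lighting answer is easy to reproduce. A sketch, assuming a purely resistive load and an approximate handbook figure of ~0.004 ohm per foot for a single AWG16 copper conductor:

```python
def max_run_feet(power_w, volts, ohms_per_foot, loss_budget_w):
    """Longest one-way cable run that keeps the I^2*R loss under budget.

    ohms_per_foot is for a single conductor; the return path doubles it.
    """
    i = power_w / volts                          # load current
    return loss_budget_w / (i * i * 2 * ohms_per_foot)

# 1000 W of lighting with a 50 W loss budget:
feet_at_120v = max_run_feet(1000, 120, 0.004, 50)   # ~90 ft on AWG16
feet_at_12v = max_run_feet(1000, 12, 0.0004, 50)    # ~9 ft even on 10x thicker wire
```

This matches the answer's figures: roughly 100 ft at 120 V on cheap AWG16, versus only about 9 ft at 12 V even with wire of ten times lower resistance per foot.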
How can I reproduce ambient EM fields inside a Faraday enclosure? I am part of a research team looking at the possible effects of EM fields on psychological function. The experiment calls for participants to perform a series of tests while inside a Faraday tent that blocks most of the radio spectrum. We'd like to have the participants perform the same tests inside the tent, but now while being exposed to roughly the same EM fields that are present outside of the enclosure. If we open the door or a panel to allow EM fields inside, then the participants will know. Instead we need the participants to not know whether they are being exposed to ambient EM fields. Our current plan is to link 2 broad spectrum antennas outside the Faraday tent with a matched pair of antennas inside the tent. We'd have a relay switch in the cable to control the on/off function, and a preamp to slightly boost the signal. I'm not convinced this will work though. What would you guys recommend? It doesn't have to be a perfect replication, just "good enough." We're most interested in reproducing the 25MHz-3GHz range. <Q> Accurately reproducing ambient EM fields inside a Faraday tent is going to be very difficult. <S> I know you say you don't need perfection, but using a couple of antennas is probably going to be way off. <S> Move the subject between tests. <S> Or swap the foil door for a fabric one, and use a divider inside the tent so the subject can't see which is which. <S> If you are trying to reproduce the fields electronically, you'll want several broad-band antennas to cover the full range <S> and you'll want to measure inside and out with a spectrum analyser to verify you <S> 've got it right. <A> You could make two panels - one fake (make of plastic or wood) and one real (with a conductive mesh inside). <S> During the experiment, you'd close one panel and leave the other one open. 
<S> Your subjects won't know whether the panel which is currently closed protects them from ambient EM or not. <A> My initial thought was: No way! <S> Antennas aren't going to be broadband enough. <S> Then I had a look at some antennas made for emissions testing, and got a big surprise. <S> Take this one as an example: <S> It has a damned near flat response from about 600MHz to 3GHz. <S> If you combine that with another that covers from 25MHz up to 600MHz, then you'd have it made. <S> You'd need two pairs of antennas - a high frequency one and a low frequency one - with one end of each pair inside the cage and the other outside the cage. <S> I wouldn't bother with combining them into one RF cable - you would need RF filters to combine them on the outside and another set of filters to separate them on the inside. <S> Just run a cable for each pair, and switch both pairs on or off as needed - that will be cheaper and easier than anything else electronic. <S> Dmitry's suggestion of a removable panel in the cage would probably still be cheaper. <S> You might need a small amplifier between the ends of each pair to make up for losses in the cables, or maybe not. <S> You will need to measure the ambient field outside with a broadband antenna (probably twice, low and high band) to verify the levels. <S> If it is too low inside then you need to add an amplifier to each pair. <S> You will need an RF analyzer to make the comparisons, and you'll probably want to include that in whatever paper you write as proof of how closely you matched the inside and outside RF levels and at what frequencies.
An alternative and likely much easier method would be to have two tents, one a Faraday tent and the other similar looking but not - you could cover both in lightweight fabric to hide the distinctive foil/mesh. Some kind of RF graphic equaliser made from amplifiers and notch filters might help tune the system and get them the same.
Surge protection not working on a home projector I have a problem with power supply and no idea of how to solve it. The whole system consists of a projector, an AV receiver, a subwoofer and a PS4. Around 800W. Early when I just got the projector it looked for a while that all was fine and dandy, but later some power disturbances started happening. So exactly what happens is, while playing or watching movies or whatever the whole screen suddenly displays white noise and the sound disappears. this is happening just for a split second, then i see the projector screen looking for a source also for a second or so and then everything starts working again. The PS4 doesn't restart, the game seems to be continuing while this happens. Sometimes there isn't even a white noise seen, just black screen in which projector is looking for source, quickly finds it again and the display is back again. I have noticed that around 80% of the time this is happening because of other electric triggers, for example switching on the lights in a different room or switching the cooker on and off and similar, so i'm just thinking the rest of the cases are reactions as well, i just don't know to what exactly. So I bought a surge protective extension cord and plugged everything in it with no luck. To be on the safe side I tried different circuits around the house - the problem still persisted. Finally with no more ideas of what to try I decided to go for the big guns and got myself an UPS with the capacity of around 1200W as recommended by one quite competent friend. We were convinced this would fix the problem. It didn't. So my question is, does anyone have any idea of what could actually be happening here? Also, does it matter that I use an extension cord to plug all the devices to the cord and then this whole system connects to the UPS via the cord as opposed to plugging each device into a separate port? The load indicator shows that the UPS isn’t overloaded. 
<Q> The problem is not surges, but rather dips in voltage caused by the other loads. <S> A surge protector or a UPS will do nothing for those. <S> Except, of course, if the dip is low enough to convince the UPS that power has failed, in which case, it will start up. <S> It seems that your projector is particularly sensitive to these dips, and it's going through a reset/startup sequence when they occur. <S> Normally, the filter capacitors inside its power supply would contain enough energy to "ride through" short dropouts and dips. <S> The fix would be to replace the capacitors — or the entire power supply, if it is a module. <S> The alternative would be to get an AC voltage regulator, but these are not cheap. <A> I'd bet it's the extension cord. <S> Since your whole system is being powered through a single extension cord, it's acting like a resistance in series with an inductor, both of which are in series with your system, and when your system asks for a lot of current momentarily, the extension cord's impedance causes a large voltage drop across it [the extension cord] which causes a momentary brownout at your system which is enough to reset it. <A> Not all receptacles on a UPS are powered. <S> Some are just surge-protected. <S> So make sure you plugged into the right ones. <S> Also, I would try to put the UPS right behind the equipment instead of having an extension cord between the UPS and the equipment. <S> If that's not the issue, you'd want to look at how often this happens. <S> If it happens fairly often, you can try just unplugging one of the devices at a time and let the rest of the system run and see when it stops happening. <S> All that being said, so you have some things to try, my gut feeling based on what you've said so far is honestly that there's a problem with the AV receiver.
It's possible that the capacitors in yours have aged to the point where their capacity is severely reduced.
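The extension-cord explanation above is easy to sanity-check with a quick estimate. This is a rough sketch, assuming a 10 m cord of 1.0 mm² copper and a 10 A momentary draw — illustrative numbers, not measurements from the system in question:

```python
# Rough sketch: voltage drop across an extension cord during a current surge.
# Assumed values: 10 m of 1.0 mm^2 copper cord (round trip ~20 m) and a 10 A
# momentary draw -- both illustrative, not measured from the system above.

RHO_CU = 1.68e-8      # resistivity of copper, ohm*m
LENGTH = 20.0         # round-trip conductor length, m
AREA = 1.0e-6         # conductor cross-section, m^2
SURGE_A = 10.0        # momentary current draw, A
MAINS_V = 230.0       # nominal mains voltage, V

r_cord = RHO_CU * LENGTH / AREA          # cord resistance, ohms
v_drop = SURGE_A * r_cord                # voltage lost in the cord
v_at_load = MAINS_V - v_drop

print(f"cord resistance: {r_cord*1000:.0f} mOhm")
print(f"drop at 10 A surge: {v_drop:.2f} V -> {v_at_load:.2f} V at the load")
```

For a healthy cord the resistive drop alone is only a few volts, so if the cord really is the culprit, it is more likely a poor contact or damaged conductor with much higher resistance than the copper itself.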
How does a potentiometer influence the incoming current in the circuit? As you can see from the picture, shouldn't the 100 Ohms be the only current-influencing factor at whatever voltage is being supplied? How does the potentiometer on the right side cause a decrease in current when I am measuring the current even before the current crosses the potentiometer? For example, at 4 V, 100 Ohm (fixed) on the left / 1000 Ohm (potentiometer) on the right, I measured 3.59mA. Then at 4 V, 100 Ohm (fixed) on the left / 2000 Ohm (potentiometer) on the right, I measured 1.88mA. Am I wrong? Clearly the experiment says so... <Q> If the potentiometer is set to 1000 ohms, the total resistance will be 1100 ohms, so, with a 4 volt power supply, the current will be 3.6 mA <S> (I = E/R = 4/1100). <S> If the pot is set to 2000 ohms, the total resistance will be 2100 ohms, so the current will be 1.9 mA. <S> (Your numbers may vary a bit, depending on how accurately you set the pot, and on the resistance of the ammeter). <S> Kirchhoff's Current Law says that in a simple series circuit such as this, the current is the same at all points in the circuit. <A> In a simple series circuit, the voltage is proportionally divided by any elements in the circuit, while the current is the same at all nodes. <S> In a series circuit, you will not see different currents before or after an element (in this case the various resistors). <S> In a simple parallel circuit, the current is divided between the parallel branches. <S> It's not like water, where a large hole (the 100 ohm resistor) will allow a lot of water through it, and <S> then a small hole (the pot) will only allow a small part of that water through it. <S> The pot essentially sets the current through the entire circuit, even though it's after your measuring point. <A> You're not wrong. 
<S> :) <S> The total resistance of resistors in series is the sum of the individual resistances, so in your first example that total resistance would be: \$Rt = 100\Omega + 1000\Omega = 1100\text{ ohms}\$. <S> From Ohm's law, we have: $$I = \frac{E}{R}$$ <S> So, knowing that E equals 4 volts and R = 1100 ohms, we can solve for the current through the string like this: $$I = \frac{4\text{ V}}{1100\Omega} \approx 3.64 \text{ milliamperes}$$ <S> And, in your second example with a total resistance of 2100 ohms, we have: $$I = \frac{4\text{ V}}{2100\Omega} \approx 1.9 \text{ milliamperes}$$
The current in the circuit will be determined by the total resistance in the circuit.
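The arithmetic above can be wrapped in a tiny helper to reproduce the measured values; `series_current` is just Ohm's law applied to the summed series resistance:

```python
# Series-circuit current from Ohm's law, matching the measurements in the
# question: a fixed 100-ohm resistor in series with a pot at two settings.

def series_current(v_supply, *resistances):
    """I = E / R_total for resistors in series."""
    return v_supply / sum(resistances)

i1 = series_current(4.0, 100, 1000)   # pot at 1000 ohms
i2 = series_current(4.0, 100, 2000)   # pot at 2000 ohms

print(f"{i1*1000:.2f} mA")  # ~3.64 mA, close to the measured 3.59 mA
print(f"{i2*1000:.2f} mA")  # ~1.90 mA, close to the measured 1.88 mA
```

The small difference from the measured values is consistent with pot-setting tolerance and the ammeter's own resistance, as noted in the answer.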
I2C data line not having correct voltage levels I've been working on a project with an EFM8UB1 development board and an ATECC508 I2C peripheral. Everything works fine but I'm having problems with I2C when moving to a PCB. Here is a trace of the clock Here is a trace of data My scope isn't very good but notice that the voltage levels for data are incorrect. It's not fully driving logic 0 low. Here is an excerpt from my schematic: I'm not sure what to do or what could be causing the problem. Does anyone know of something that could be causing the issue? Microcontroller Datasheet I2C Peripheral Summary **Update Turns out the problem was because I mislabeled the pins on a PCB footprint early on in the project. Thanks for the feedback and suggestions. <Q> I2C lines must be pulled to Vcc with external 2k2 resistors. <S> The normal state (when there's no communication) of SDA and SCL is high (voltage near the Vcc level). <S> I2C ports in all devices work as open drains (OD): <S> the master pulls down and releases the SCL line to send the clock signal; slaves and the master pull down SDA to send a logical '0', and release it to send a logical '1'. <S> Your circuit should look like this: <A> Both SCL and SDA operate as open-collector (open-drain) outputs. <S> Depending on the speed and capacitance of the bus, the resistors should be 10k or lower (usually 4.7k or 2.2k work fine). <S> The open-drain SDA makes possible some of I2C's most prominent features. <S> Apart from the obvious, like sending data bidirectionally, there is also data acknowledgement (each octet of data is acknowledged by the receiving side), <S> slave detection (acknowledgement of the slave address) and bus arbitration (when multiple masters try to access different slave devices, the one addressing the lower address goes first). <S> But why make the clock an open collector as well? <S> Well, one of the less known (and less used) features is clock stretching. <S> A slave device may pause a transmission by holding the SCL line low. 
<S> That stops the clock, but doesn't break the transmission. <S> The slave has time to prepare a response and releases the clock when ready. <S> The funny thing is that the standard does not define a limit for this holdup, so in theory the slave may halt the transmission indefinitely. <A> SCL on the microcontroller was connected to an N/C pin and SDA was connected to VCC. <S> This explains why the SDA pin was not able to fully drive to GND while SCL was correct. <S> I should be sure to double-check footprints and datasheets when transitioning from a dev board to a PCB. <S> The other answers are good to consider for debugging I2C problems.
Turns out the problem was I mislabeled the pins on a PCB footprint early on in the project.
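Since pull-up sizing came up in the answers, here is a small sketch of the standard bounding calculation from the I2C specification. The 3.3 V supply, 100 pF bus capacitance and 1000 ns standard-mode rise time are assumed values, not taken from the question's board:

```python
# Sketch: bounding the I2C pull-up resistor value.
# Rp_min comes from the sink-current limit (the device must still pull the
# line below V_OL), Rp_max from the allowed rise time on the bus RC.

def i2c_pullup_range(vdd, c_bus, t_rise, v_ol=0.4, i_ol=3e-3):
    r_min = (vdd - v_ol) / i_ol          # device must still pull SDA low
    r_max = t_rise / (0.8473 * c_bus)    # rise-time limit (I2C spec formula)
    return r_min, r_max

# Assumed: 3.3 V bus, 100 pF bus capacitance, 1000 ns standard-mode rise time
r_min, r_max = i2c_pullup_range(3.3, 100e-12, 1000e-9)
print(f"Rp between {r_min:.0f} and {r_max:.0f} ohms")
```

With these assumptions the window comes out at roughly 1 k to 12 k, which is why the common 2.2 k and 4.7 k values in the answer sit comfortably inside it.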
Digital Circuit to Check for Majority I have 31 digital inputs (each is high or low) and want one digital output which is high only if at least 16 inputs are high. How can I implement this "majority" function (which is also the most significant bit of the sum) with the fewest MOS transistors? As a bonus, I'd also like to know the fastest-in-worst-case implementation. <Q> Configure 31 MOS transistors as switchable current sources which feed a 32nd, which is configured as a current sink at 15 times the source currents. <S> Then observe the voltage of the summing node. <S> EDIT - I should not play games. <S> The title said "Digital", so digital we'll go. <S> One configuration would be simulate this circuit – <S> Schematic created using CircuitLab <S> This handles 15 inputs. <S> Duplicate it and add a 4-bit full adder and Bob's your uncle. <S> I haven't shown the last adder, but you should be able to figure it out for yourself. <S> And, of course, if you build this with BJTs, as with TTL or ECL logic, there will be no MOS transistors used at all. <A> Clock the serial output of the last register into a binary counter such as the 74HC4024. <S> Use another 74HC4024 counter to keep track of when 32 clock pulses have occurred, which then repeats the cycle. <S> For some crazy reason the original CD4024 and the follow-on 74HC4024 started numbering their flip-flops with Q1 instead of Q0. <S> Very confusing. <S> So I am showing the NXP part (HEF4024B) instead, which corrected this anomaly. <S> So every 32 clock pulses (when Q5 of the second counter goes high), if at least 16 inputs were high (meaning Q4 of the first counter is 1), then this status is latched into a D-type flip-flop (74HC74) and remembered until the next set of 32 clock pulses completes. <S> Meanwhile the inputs are reloaded in parallel to the shift registers. 
<S> This is somewhat of a special case, in that the majority threshold is a power of two, so only one pin (in this case Q4, representing 16-31) has to be queried. <S> If instead the threshold was 14/27 for example, an address decoder would need to be added, to separate out the values 14 and 15 in addition to 16. <S> With a 90 MHz input clock, there will be a maximum of 355 ns delay from a change in the input until the update of majority status at the output. <S> Note -- not all "glue logic" is necessarily shown, but this should get across the idea. <A> It seems like you are confusing a few concepts here. <S> You are saying you want a majority voter circuit. <S> In which case you could make a very large truth table and reduce the circuit using reduction techniques. <S> For example: Input / Output: 000 / 0, 001 / 0, 010 / 0, 011 / 1, 100 / 0, 101 / 1, and so on and so forth. <S> Then when you have the final logic gates, simply disassemble the gates into your transistor count. <S> You are not going to be able to do this for 32 bits unless you have a lot of time on your hands. <S> Assuming it took you 1 second to do each combination, it would still take you 2^32 seconds, or 136 years. <S> Having lived this long, you would then have to do some type of gate reduction. <S> When you say "the most significant bit of the sum" this is a little confusing. <S> Suppose you have 3 inputs that are high: the most significant bit of the sum is 1, likewise with 2. <S> If your majority counter had only 3 bits, this information wouldn't tell you anything. <S> "The fastest-in-worst-case implementation" sounds like the sum of the gate delays that would happen in the worst-case realization; the most cascaded realization would have the largest delay. <S> Hint: latches and flip-flops are also made out of logic gates, which are also made out of transistors. 
<S> Maybe you could make a shift register and a counter, then use a little combinational logic on the output of the counter. <S> Good luck.
Connect the 31 inputs (0-30 in the diagram below) to four 8-bit parallel-in, serial out shift registers such as the 74HC597 which are cascaded in series (only two shown).
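A quick software reference model of the majority function is handy for verifying any of the proposed circuits; as the answers note, because the threshold (16) is a power of two, the output is simply bit 4 of the 5-bit population count:

```python
# Reference model of the 31-input majority function: output high when at
# least 16 inputs are high. Useful as a check against a gate-level design.

def majority31(bits):
    assert len(bits) == 31
    return 1 if sum(bits) >= 16 else 0

# Equivalent view: the output is the MSB (bit 4) of the population count,
# which is why the counter-based circuit only needs to look at Q4.
def majority31_msb(bits):
    return (sum(bits) >> 4) & 1

assert majority31([1] * 16 + [0] * 15) == 1
assert majority31([1] * 15 + [0] * 16) == 0
assert all(
    majority31([1] * k + [0] * (31 - k)) == majority31_msb([1] * k + [0] * (31 - k))
    for k in range(32)
)
```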
Are the instructions fetched from RAM or ROM in an ARM micro-controller? In many tutorials regarding ARM CPU registers, the instruction register is mentioned in such a way: "Register R15 in an ARM micro-controller is the program counter and it points to the next instruction to be fetched from memory." But I also read that in the Harvard architecture, there is instruction memory and data memory. What I understood or misunderstood from this was that the instruction codes are stored in flash ROM, and the data is stored in RAM. But when I read more about it, I get the impression that the instructions are also fetched from RAM instead of ROM. Does ROM have nothing to do with the whole operation besides storing the machine code? Edit: the question assumes no operating system, just a standalone ARM micro-controller. <Q> ARM cores are actually what is called a modified Harvard architecture. <S> In this case, ROM and RAM sit in the same address space, so an ARM processor can execute code out of either, or access either as data. <S> In the modified Harvard architecture, the processing core is directly connected to two separate instruction and data caches. <S> This allows for high performance due to being able to access instructions and data simultaneously. <S> At a high level the combined address space makes the whole system act like a Von Neumann architecture. <S> The structure of the address space is determined by the physical connections between the processing core itself and any memories and peripherals. <S> For the most part the layout of the address space will be fixed, with particular ranges corresponding to memory-mapped peripherals, mask ROM, Flash, RAM, etc. <S> However, there may be some ability to remap certain sections of the address space or to configure peripherals or external memories to sit in specific regions of the address space. <A> The ARM architecture is a von Neumann architecture, not a Harvard architecture. 
<S> That means it uses a unified address space for both instructions and data. <S> Whether any particular address contains RAM or ROM is up to the system designer. <S> Instructions can be fetched from either. <A> ARM microcontrollers, the newer Cortex-M ones, have modes to operate either from RAM or ROM. <S> Check, for example, the BOOT0 and BOOT1 bits in the STM32's Reference Manual. <S> This pretty much configures the start address of execution by aliasing the actual memory addresses into a fixed memory address segment (0x0000 0000 to 0x0007 FFFF). <S> The Cortex-M core has four types of "memory": RAM, ROM (flash and System Memory), FSMC and peripheral. <S> RAM is, well, internal static RAM. <S> ROM is internal flash or the unwritable embedded bootloader. <S> FSMC allows using external memory (both RAM and ROM, depending on what the external hardware is). <S> Peripheral memory consists of specific registers that map to peripheral functions, like UARTs, ADCs, etc. <S> The hardware to access them is separate to gain speed (Harvard-style), especially because flash is slower than SRAM. <S> Yet, all of those are unified in a single address space (Von Neumann-style), which simplifies access to them from a programmer's perspective. <S> They differ only by their address ranges (implementation-dependent). <S> The BOOT pins allow configuration between three start addresses: one in RAM, one in ROM (read-only bootloader) and another one in ROM (flash). <S> This makes it possible to answer your question as "both". <S> I never tried, but it appears to be possible to jump from one segment to another. <S> Keep in mind, however, that those memories still cannot be dealt with equally. <S> You can't arbitrarily write to ROM. <S> It's flash, making it rewritable, but you must use a special procedure to write to it (it's usually called something like "using flash as EEPROM" when storing program data, or "bootloader" when storing program code, usually at startup). 
<S> There's usually some startup code that copies data from ROM to RAM (initialized global variables, usually stored in the data section). <S> The Reference Manual also carries this info: "When booting from SRAM, in the application initialization code, you have to relocate the vector table in SRAM using the NVIC exception table and offset register." <S> This is usually carried out by the startup code. <A> Regardless of the architecture, R15 will hold the program counter address. <S> The next (hopefully valid) instruction will be fetched from this (hopefully valid) address. <S> Note that it is possible for either of these to be problematic: <S> 1) invalid op codes; 2) <S> memory of any kind may not be installed for the address. <S> Yes, it is desirable for areas reserved for data not to be executed as program code. <S> The address space we speak of is bounded (it is not a windowed address space as used in PCs, midrange, mainframes etc.) and may contain RAM, ROM and peripheral ports. <S> It should certainly be possible to fetch instructions from RAM if we have RAM at the addresses we consider valid for instructions, so code can be loaded if we desire, overlay style. <S> Even without an OS.
Curious info: When one executes from flash (ROM), the instructions are retrieved directly from ROM.
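To make the "both" answer concrete, here is an illustrative decode of a Cortex-M style unified address map. The ranges follow the typical STM32 layout mentioned above (flash aliased at the boot region, SRAM at 0x2000 0000, peripherals at 0x4000 0000), but the exact ranges are device-specific assumptions:

```python
# Illustrative decode of a Cortex-M style unified address map.
# Ranges are typical STM32 values and vary by device -- check the
# reference manual for the real part.

REGIONS = [
    (0x0000_0000, 0x0007_FFFF, "aliased boot region (flash/ROM/SRAM per BOOT pins)"),
    (0x0800_0000, 0x080F_FFFF, "flash (ROM)"),
    (0x1FFF_0000, 0x1FFF_7FFF, "system memory (bootloader ROM)"),
    (0x2000_0000, 0x2001_FFFF, "SRAM"),
    (0x4000_0000, 0x5FFF_FFFF, "peripherals"),
]

def region_of(addr):
    for lo, hi, name in REGIONS:
        if lo <= addr <= hi:
            return name
    return "unmapped"

print(region_of(0x0800_1234))  # flash (ROM)
print(region_of(0x2000_0100))  # SRAM
```

The point of the unified space is visible here: the core issues a plain address and fetches an instruction the same way whether that address decodes to flash or to SRAM.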
LED with variable wavelength Is there a type of LED with adjustable wavelength? I need a light source with a variable wavelength between 400nm and 700nm. <Q> According to https://en.wikipedia.org/wiki/CIELUV#/media/File:CIE_1976_UCS.png <S> you only need a range from about 450 to 540 nm, if you can combine with a 400nm blue and 700nm red (at least if the light is being viewed directly rather than reflected). <S> Certain types of solid-state lasers have a wavelength that is modulated by a control current, but maybe by 5% (some tens of nm), not almost 2:1. <S> You can adjust the wavelength of an LED by changing the temperature, for example with a Peltier device, but again even 450-540nm, let alone 400-700nm, is not likely possible. <A> An LED (a solid-state device) is made of a certain material with a specific band gap, which determines the wavelength of the photons emitted. <S> So there can be no such LED emitting all visible colors. <A> The wavelength of an LED "filament" depends on the material it's made of, and it's a static property — if you exclude the fact that some LEDs can change colour, e.g. green turning orange, but that requires currents much higher than rated and is a destructive operation. <S> Note that RGB LEDs do cover most of the visible spectrum, i.e. 390 to 700nm. <S> The spectral resolution is not very fine, however. <S> EDIT: <S> This question has been asked on physics.stackexchange.com, too.
Your best bet is probably a broad-spectrum source such as an incandescent bulb with a monochromatic optical filter.
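The band-gap argument can be put in numbers with E = hc/λ; a single junction covering 400-700 nm would need its band gap to sweep from about 3.1 eV down to 1.8 eV, which no fixed material can do:

```python
# Photon energy (and hence required band gap) versus emission wavelength:
# E_photon = h * c / lambda, converted to electron-volts.

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
EV = 1.602e-19  # joules per electron-volt

def bandgap_ev(wavelength_nm):
    return H * C / (wavelength_nm * 1e-9) / EV

print(f"400 nm -> {bandgap_ev(400):.2f} eV")  # ~3.10 eV
print(f"700 nm -> {bandgap_ev(700):.2f} eV")  # ~1.77 eV
```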
Powering Arduino from a voltage divider I'm making a voltmeter for a college senior project, and the problem is that the voltage divider should equally split 17V into two 9V sources (the nominal value of the batteries; as of now the actual voltage is 8.5V and it will eventually go down as the batteries die, so don't freak out too much about my rounding errors), but as soon as I connect the Arduino board to the circuit, the voltage (across the resistor it's connected to) drops to 2-3 Volts. Which is apparently enough to power the microcontroller, but not enough to light the display up. (It's also enough to use the other resistor as a space heater, with 14 Volts across it.) Please feel free to look at the schematic of the device provided above. (Either models or specs of the components are written on the schematic.) Now I'll explain the design thoroughly: the idea is to use a piezoelectric element (a thin laminated flexible piezoelectric beam), place it into a fluid flow (air) and rectify the outgoing AC signal with a full-wave bridge. The next step is to amplify the signal, because it has a very low magnitude and the Arduino analog pin can't register it without amplification. The amplifier needs at least a +-7V supply, so it's provided from the two 9V batteries, which are connected to a voltage divider. Therefore, the virtual ground of the circuit is in between the two batteries. At the same time I want the Arduino board (actually it's not an Arduino, it's an Arduino-compatible board dubbed Pro Micro 5V) to be powered from the same two batteries. The whole thing works fine from USB, but from the batteries it works as it pleases! Here are the design constraints of the device: The entire device shall hang somewhere in a remote location, so it has to be powered from an autonomous power supply (two 9V batteries in my solution). The device should be as light, and as small, as possible (solutions like adding another battery are not desirable). The Arduino board has to have 5V on it to light the display. 
Additionally, I'd like to learn why the microcontroller has a variable input resistance and what it depends on. I've tried to figure out the resistance across the Arduino's ground and RAW by assuming that the resistor of the voltage divider and the Arduino are connected in parallel. In three different setups I got the Arduino's resistance to be equal to 221.93, 527.73 and 743.4 Ohm. As I understand it, the board has some kind of reducer at the voltage input that prevents the board from burning at supplied voltages above 5V and below 12V, but why does it drop the supplied voltage from 8V to 2-3V? <Q> Rather, you should look into step-down voltage regulators. <S> They're inexpensive and will easily triple your battery life. <A> The two 4.85K ohm resistors that you picture in your diagram are NOT a voltage divider. <S> They are actually performing no function of value and are in fact putting an unnecessary load on each of the two batteries!! <S> You have another fundamental issue with the design. <S> The AC-to-DC bridge rectifier you have in the sensor output path requires a total signal of at least 1.2 to 1.4V amplitude before the output will register anything. <S> If the sensor signal is small you should AC-couple it into an amplifier first to get its level up to well over 1.4V before you try to rectify it. <A> You should be measuring how much current is required by the MCU board and display module. <S> There is a very good chance that the current is too high to be delivered from a 9V battery without the internal resistance of the battery causing a huge internal voltage drop. <S> The current could be relatively high in the case that the display backlight is in use. <S> An LED backlight on an LCD display could take 100mA or more depending upon its configuration. <A> A voltage divider provides a fractional voltage, but does not provide power. <S> Since your board accepts up to 12 Volts, and requires at least 7 volts, you correctly figured out that 9V would do. 
<S> But why aren't you powering it straight from a pair of 9 volt batteries in parallel?! <S> Why are you first making a higher voltage only to then find out you need to reduce it? <S> (If you look at the high-voltage lines that form the power grid, that is a case where you actually do create a higher voltage on one end and reduce it at the other end. <S> But we use AC for that, which allows us to use simple transformers. <S> You need DC here, and your power supply isn't kilometers away.)
You shouldn't power anything from a voltage divider since the output voltage will fluctuate greatly with the amount of current sourced.
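The sag the asker observed follows directly from the Thévenin view of the divider. A small sketch with the question's values (two ~4.85 k resistors across ~17 V) and a hypothetical load resistance standing in for the board:

```python
# Why a divider can't power a board: the output sags as soon as the load
# draws current. The board is modelled here as a simple load resistance
# (an assumption -- a real regulator input is nonlinear).

def loaded_divider(v_in, r_top, r_bottom, r_load):
    # Thevenin equivalent of the divider, then a second divider with the load.
    v_open = v_in * r_bottom / (r_top + r_bottom)
    r_th = r_top * r_bottom / (r_top + r_bottom)
    return v_open * r_load / (r_th + r_load)

print(f"unloaded:     {loaded_divider(17, 4850, 4850, 1e12):.2f} V")  # ~8.5 V
print(f"500-ohm load: {loaded_divider(17, 4850, 4850, 500):.2f} V")   # ~1.5 V
```

With a few-hundred-ohm effective load, the 8.5 V open-circuit output collapses to the 2-3 V range the asker measured; this also explains the "variable input resistance" observation, since the board's current draw (and hence apparent resistance) changes with the voltage it receives.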
Direction of winding (clockwise/anticlockwise) in flyback transformers Here is a circuit of an offline switching power supply based on the Viper22a IC: In this circuit and many other similar circuits, they denote the starting of a winding by a "dot". However, they never mention the direction of the winding - clockwise or anticlockwise. Does the direction of winding not affect the circuit in any way? I tend to believe that the polarity of the induced voltage should depend on the direction. In case the direction of winding changes the polarity, is there a standard winding direction to be followed while making the transformer? Example - Does the circuit given above assume this pattern?: Primary: start at 2 - End at 1 -> Clockwise Auxiliary: start at 3 - End at 4 -> Clockwise Secondary 1: start at 5 - End at 6 -> Clockwise Secondary 2: start at 8 - End at 7 -> Clockwise <Q> was also wound clockwise. <S> So all winding wires with dots produce voltages that are in phase. <A> There are several factors that could be thought to affect the performance of a multi-winding transformer: <S> a) the handedness of each winding; b) the polarity of each winding with respect to the others; c) where each winding starts and finishes. <S> Handedness: In practice, the handedness of each winding, CW or CCW, cannot be specified, as there is no reference! <S> If you hold the core this way up, any given coil may be CW. <S> However, if you hold the core the other way up, it would be CCW. <S> Which is right? <S> The magnetic circuit is a loop; it doesn't have a 'this way up' end! <S> If you take the first winding as the reference, and record whether the other windings use the same or opposite handedness, then there is no observed influence on the performance of the complete transformer (but see caveat below). 
<S> Polarity: If we choose one coil as a reference, and then record whether each other coil is wound in the same or the opposite direction, then we have to define which polarity of the reference coil we are taking as the reference. <S> This needs one of its wires marking somehow, and a dot at the start is as good as any other. <S> A dot at the start of each other winding completely defines their mutual polarity. <S> Starts and finishes: For some transformers, where inter-winding capacitance is important, it is necessary to know which end of a winding is nearest to another winding, or to the core. <S> For the first winding to go on, we know the 'start' wire will have the highest capacitance to the core, and the finish wire will have the highest capacitance to the next winding, or the interwinding screen. <S> Caveat: We are talking about a transformer here. <S> This is a structure that is very small compared to a wavelength of the signals passing through it, and that has a pair of multi-turn windings linked by a high-permeability magnetic loop. <S> If we deconstruct the transformer to such an extent that it is a few turns of wire with a manifest helix structure with no loop of high-permeability magnetic material, and drive it with a high-frequency signal such that the windings work like antennae, then the full geometry of the windings will affect the performance slightly, including whether the windings are wound in the same or opposite direction. <A> Your explanation of handedness is wrong: a CW-wound coil is a CW-wound coil, even upside down, and the same goes for CCW. <S> Consider a CCW coil wound on a straight core; the top end comes over the top from the right, and the bottom end leaves at the bottom, over the top and off to the left. <S> Flip it around, and nothing has changed. <S> When you leave the power connected and rotate the coil, the magnetic field stays aligned in the way that you describe, but it's the direction of current flow that is being rotated. 
<S> If you disconnect the coil, flip it, and reconnect it in place, then the direction of field is consistent because the path that the current takes doesn't change.
Clockwise or anticlockwise is unimportant provided all windings use the same method - the dot tells you that if a wire was wound clockwise, the corresponding wire on a different winding was also wound clockwise.
What circuit do I need to lower a 20MHz 3.3-0V square wave to a 0.01-0V square wave for driving a very low load? My FPGA board outputs a 3.3-0V square wave at 20MHz. I need to convert it to a 0.01-0V square wave to drive a very low load (0.1 Ohm). What kind of circuit do I need, and is there anything I need to be careful about? <Q> However, your application requires a very large turns ratio (330:1) that is not likely to be available commercially. <S> Perhaps you could come up with a circuit that cascades 2 (18.2:1), 3 <S> (6.91:1) or 4 (4.26:1) transformers. <S> For example, Pulse Electronics has a 4.25:1 transformer intended for ADSL applications that would get you very close to what you need (326:1), but you'd need 4 of them. <A> You need to be able to source 100 mA (10 mV across 0.1 ohms), so that rules out most op-amps or simple voltage dividers using resistors. <S> I would attenuate the signal using a resistor divider to give the 10 mV peak signal, then use an op-amp and BJT configured to drive that sort of current: <S> Vin is the attenuated signal and Rload is the 0.1 ohm load. <S> Things to be careful about: <S> - Input offset voltages of the op-amp can introduce a significant error, so choose the device with care. <S> - You will probably need a dual supply with some op-amps. <S> - Choose a fast op-amp. <S> - Get LTSpice and simulate it. <A> Do you care whether the square wave gets inverted? <S> The simplest thing to do would be to use a transistor buffer to drive a constant current through a resistor to control the output voltage, something like this: simulate this circuit – <S> Schematic created using CircuitLab Note that Q1 is a high-frequency transistor with a low C_BE to minimize the Miller coupling. <S> C1 improves the transient response (especially at turn-on) by increasing the AC gain. <S> Run the simulation to see the results. <S> R1 is a standard 1% value that gets you very close to exactly 10 mA through the transistor, and exactly 10 mV across R2. 
<S> Note, however, that this circuit will NOT drive 100 mA into a 0.1-Ω load. <S> This is based on your statement that you don't need that — but then your stated requirements are inconsistent. <S> Please clarify. <S> EDIT: <S> Actually, the following variation, which basically scales everything by a factor of ten, performs fairly well at 100 mA. <S> This assumes the load resistance is an accurate 100 mΩ. <S> simulate this circuit <A> It sounds like you're trying to control the current through your load. <S> Why not simply add a series resistor? <S> The problem with very low resistance loads is that even a tiny difference in either the load impedance or the output voltage will cause a large swing in the current going through the load. <A> You could do something like this: R1 represents the output impedance of the FPGA. <S> If it's much higher than that, buffer it with a CMOS single-gate buffer. <S> simulate this circuit – <S> Schematic created using CircuitLab
You're only talking about 1 mW of power, so my first thought is that a pulse transformer of some sort would be the best approach.
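The key numbers behind these answers are quick to verify: 3.3 V down to 10 mV is a 330:1 ratio, and 10 mV into 0.1 Ω demands 100 mA even though the load power is only about 1 mW:

```python
# Sanity numbers for the attenuation problem: ratio, load current and
# load power for a 3.3 V square wave reduced to 10 mV across 0.1 ohm.

v_in, v_out, r_load = 3.3, 0.010, 0.1

ratio = v_in / v_out      # required attenuation
i_load = v_out / r_load   # current the driver must source
p_load = v_out * i_load   # power actually delivered to the load

print(f"attenuation ratio: {ratio:.0f}:1")     # 330:1
print(f"load current:      {i_load*1000:.0f} mA")
print(f"load power:        {p_load*1000:.0f} mW")
```

This is the tension the answers are wrestling with: the power is tiny (hence the transformer idea), but the current is large (hence the op-amp-plus-BJT buffer).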
Questions about automotive electronics PCB layout I am a student of vehicle engineering and I love automotive electronics very much. Right now, I have an ESP sensor cluster to study, and I am lucky to have a look at the PCB inside. I have some questions about it. First, why don't they use teardrops at the trace/via junction? Second, why do they use only one trace to connect the resistor to the ground? I am new here and my English is not so good. <Q> Why don't they design teardrops into the PCB? <S> Teardrops are used to reduce the chance of acid traps and perhaps reduce mechanical stress on the trace and via. <S> However, they are not typically necessary. <S> See <S> "Why are there teardrops on PCB pads?" <S> for a more in-depth discussion. <S> Why do they use only one trace to connect to the ground? <S> I'm guessing you mean: <S> why isn't the entire pad connected to the ground fill? <S> This is called a "thermal", and it reduces the chance of uneven heating on the component during reflow, which can cause tombstoning. <S> It also makes it easier to rework, should that be necessary. <A> 1) <S> We never design them in except if a customer explicitly wishes so. <S> There is no real advantage to it unless you expect lots of mechanical or thermal stress on that via. <S> 2) <S> This looks like some kind of feedback network to me. <S> There is no large current requirement here. <S> If this were a through-hole part, it would be done similarly to this for thermal reasons. <S> Parts can be desoldered/resoldered more easily if the heat stays located in a small area than when you have to heat up the surrounding ground/power plane. <S> I just wonder what the resistor at the very bottom is connected to. <S> It's interesting that they did not cover the vias; maybe they use them as test points, <S> but I dislike this because the test may put additional stress on them. <S> It would be better to add dedicated test points. <A> Teardrops may not have been supported by their PCB design software. 
<S> I use Eagle, and they are not available.
The fabrication processes do not explicitly require teardrops. There's little point in adding the extra + connections to the resistor, especially if there is little current through it.
CAN bus test via loopback - possible or not? I want to test the integrity of the CAN bus (also my CAN transceiver) periodically using the following method (independent of the other nodes). Assume two nodes always exist (node A and node B). When node A transmits a message on the bus, node B will receive the message. But can/will node A also receive the message so that I can confirm that the data has gone to the bus and has been received back uncorrupted? I'm confused as to whether the transmitter can be used to receive the same message it transmits, so that the health check of the bus can be independent of the other nodes. Though the message that will be sent on the bus will be received by the other nodes, it doesn't matter; I'll discard such messages in software / the acceptance filter, etc. in the receiving nodes. Can this method be used in addition to all the error mechanisms, CRC, etc. provided in the CAN specification itself, or are there other methods for checking the integrity of the CAN bus via software (like internal loopback, echo test, etc.)? I'm using the RM57 microcontroller BTW (if it helps). Thanks <Q> All CAN hardware error checks are performed by hardware - the CAN controllers. <S> A CAN node cannot ack itself, by design. <S> It will not receive its own messages. <S> CAN is meant to have at least two nodes to work - <S> a CAN bus with just one node is considered faulty; it is common sense. <S> Therefore it doesn't make sense to have an "independent check". <S> (With one exception: if you enable a debug mode called "loop-back", which most CAN controllers support. <S> This is used for troubleshooting and simulation purposes only and isn't meaningful to use in the final application. <S> Typically you use this when you were too cheap to buy more than one evaluation board, etc.) <S> The validation of message reception is done by the receiver acknowledging the message by setting the acknowledge bits in the CAN frame.
<S> http://www.can-cia.org/can-knowledge/can/can-data-link-layers/ . <S> This is done by the CAN controllers and is nothing your software driver or application-level protocol needs to concern itself with, apart from checking the error flags after each transmission. <S> Now if you want to know whether the right CAN node received the message, rather than any node, then that's another story, not related to the hardware health of the bus. <S> Such things have to be sorted out in the high-level protocol. <A> Some CAN controllers have a loop-back mode. <S> However, in loop-back mode they do not send the message over the line; they handle sending and receiving the message internally. <S> The RX line is ignored, which means the node cannot receive any messages from other nodes. <S> I assume you do not want to switch between normal mode and loop-back mode. <S> I suggest you implement a higher-level protocol to trace and correct errors. <A> You don't really need to do this, because your transmitted message must be acknowledged by one of the other nodes. <S> It cannot ack itself. <S> So, if there is no connection, your node will retransmit until it hits the error limit and then go bus-off. <S> You would do better to detect the bus-off state and then devise a way to recover from it.
Loop-back is a feature which allows the CAN controller to speak with itself without actually sending anything out on the bus.
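The retransmit-until-bus-off behaviour described above can be sketched as a toy model. The counter rules follow the CAN fault-confinement scheme (the transmit error counter rises by 8 on each failed transmission, and the node goes bus-off once it exceeds 255); everything else in this sketch is invented for illustration:

```python
# Toy model of CAN fault confinement: with no other node present to
# set the ACK slot, every transmit attempt fails, the transmit error
# counter (TEC) rises by 8, and the node eventually goes bus-off.
def transmit_until_bus_off(ack_available):
    tec = 0
    attempts = 0
    while tec <= 255:
        attempts += 1
        if ack_available:        # some other node acknowledged the frame
            return attempts, "ok"
        tec += 8                 # ACK error: TEC += 8 per the CAN rules
    return attempts, "bus-off"

print(transmit_until_bus_off(ack_available=False))  # (32, 'bus-off')
print(transmit_until_bus_off(ack_available=True))   # (1, 'ok')
```

This is why the last answer suggests detecting the bus-off state rather than trying to self-acknowledge: a lone node can never see an ACK, so it marches to bus-off in a bounded number of attempts.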
MOSFET Drain-to-Source Leakage Current over voltage and temperature I'd like to know what the characteristic curve of Idss in N-channel MOSFET transistors looks like over different Vds values at high temperature. As I've checked datasheets (like Fairchild's 2N7000 or FDC637BNZ), the Idss parameter is normally specified for Vds voltages very close to BVdss (breakdown voltage). It is clearly stated that current will increase considerably at higher temperatures, but the effect of Vds is not mentioned. I've also read some application notes (like NXP's AN211A or Fairchild's AN-9010) and they do not provide much information about it either. I have this design where I need to keep Idss within the range of the low µA @ 100°C (the lower, the better), and I've thought I could get away with a transistor that has a much larger BVdss than the working voltage (e.g., BVdss=100V, working voltage=3.3V), but it's unclear if that would have any effect. The question is: is the Idss parameter solely dependent on temperature, or does the Vds/BVdss ratio play a role? <Q> The leakage current and the breakdown voltage Vds_max are not so related in my opinion. <S> Leakage current can be lowered by an increased gate doping, which in turn increases the MOSFET's threshold voltage. <S> The switching MOSFETs you're looking at are designed to have a low threshold voltage and are thus more "leaky". <S> The breakdown voltage Vds_max is related to the doping profile of the drain; it has no direct relation to leakage as far as I know. <S> A high breakdown voltage means a more lightly doped drain and this could result in a higher Rds_on. <S> You could search for a MOSFET with a high threshold voltage; maybe it will also have lower leakage. <A> EDIT: <S> This answer might not be relevant after all; see jp314's comment and answer for details. <S> With further investigation, I've found the answer. <S> I've found this in AN73-7 from Siliconix.
<S> I've even found that NTA7002N from ON Semiconductor has 1µA leakage @ 85°C, which might work for my design. <A> IDSS for a MOSFET with VGS=0 does not change much with VDS. <S> In theory, once VDS exceeds about 100 mV (from the (1 − exp(−VDS/VT)) dependence, with VT = kT/q ≈ 26 mV), there is no further increase in ID. <S> However depletion effects (analogous to Early voltage in BJTs), and some interactions in the device, do cause small increases. <S> As the VDS approaches BVDSS (say within 20 % of actual BVDSS), the current does increase further due to avalanche multiplication. <S> A FET with given RDSON will generally have higher IDSS for a higher BVDSS device (mostly because it will be a larger device with lighter doped junctions). <S> IDSS also depends on the gate voltage (e.g. applying a negative voltage to the gate will further decrease the current), and more specifically the difference between gate voltage and threshold voltage -- i.e. how much lower than VT the gate voltage is. <S> Thus if you could arrange to bias VGS = -200 mV, you would have very significantly decreased currents -- <S> but generally this is complex to achieve. <S> Note that the 200 mV parameter depends on the construction of the device, including the gate oxide thickness. <S> Thinner GOX devices will have a lower value. <S> In summary, a FET without excessive BVDSS ratings, with high threshold voltage (but a lower max VGS rating which means thinner GOX) will be a better choice. <A> I just did a quick check of a 2N7000 using two multimeters; one measuring ohms from drain to source, and the other measuring voltage, with the gate shorted to the source and the positive ohms lead to the drain. <S> I got 270 mV and 24 Mohms, which comes way under the spec'd 1 uA figure down near zero volts; closer to 10nA, which happens to be the gate leakage spec. <S> I'm playing around with some high impedance circuits (low power) that cause me to worry about drain leakage current. <S> Experimental checks down in this region are problematic.
<S> Ohmmeter impedance could cause bad data by changing the circuit bias. <S> It could be that my test here suffers from this issue, since one meter influences the other, though the voltmeter should be the least intrusive.
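The quick two-meter check above reduces to Ohm's law; the arithmetic (values taken from that answer) confirms the "closer to 10 nA" figure:

```python
# Reproduce the two-multimeter leakage estimate from the answer:
# 270 mV measured across the DUT while the ohmmeter reads 24 Mohm.
v_ds = 0.270       # volts, measured drain-source voltage
r_reading = 24e6   # ohms, ohmmeter reading

i_leak = v_ds / r_reading
print(i_leak)      # about 11 nA, near the 10 nA gate-leakage spec
```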
The Idss value does change with Vds, but the change is not very significant after Vp (i.e., within the saturation region). For a power MOSFET, IDSS will decrease by a factor of approximately 10 for each 200 mV decrease in VGS when the device is biased with VGS lower than VT.
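The decade-per-200 mV rule above can be turned into a quick estimate. As the answer notes, the 200 mV/decade figure is device-dependent (thinner gate oxide gives a lower value), so treat it as illustrative only:

```python
def idss_scaled(idss_ref, vgs, mv_per_decade=200.0):
    """Estimate off-state drain leakage when VGS is pulled below its
    reference point, using the ~10x-per-200mV subthreshold rule of
    thumb quoted above. idss_ref is the leakage at vgs = 0."""
    return idss_ref * 10 ** (vgs * 1000.0 / mv_per_decade)

# A 1 uA leakage spec, with the gate biased 200 mV below the source:
print(idss_scaled(1e-6, -0.200))  # ~1e-7 A, i.e. a 10x reduction
```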
STM32 USB host will not return from reset request Note: I have edited this question from its original content since I was able to find a deeper cause/symptom for the problem. I have re-written it to focus on that instead. I am using a pretty basic configuration for the STM32F405 using the CubeMX configuration system. Something (a clock, a setting.. something) is not configured correctly, and I can't tell how I could have possibly caused it. Though it could be a problem with the PCB I designed, that seems unlikely as the code runs and debugs and the SysTick interrupt is advancing the timing counter just fine. I have tracked part of the problem down to this function /* Reset after a PHY select and set Host mode */ USB_CoreReset(USBx) which times out, presumably because the core never comes out of reset (OTG_FS_GRSTCTL:CSRST == 1 always, after being set in that function. USB_CoreReset() tries to read CSRST==0 200,000 times, and if it fails it returns unsuccessfully). If you would like to generate this file and look at the output (or adapt it to your board), you can take this text and paste it into a file called "USB CSRST Problem.ioc". Note: It will probably require some tweaking for your board since I have some pins assigned to outputs for an LCD display. I have tracked down a discussion that describes similar symptoms, but I can confirm that my code does set OTG_HS_GUSBCFG:PHYSEL properly (confirmed setting bit 6 before the reset is performed), and generally adheres to the recommended startup procedure as outlined in that thread. Clocks configuration: <Q> Since you did not upload the whole CubeMX project and generated code, <S> I cannot fully try to replicate and find the problem. <S> But I will try to point you to a few things <S> that might or might not help: <S> 1- Make sure that global non-maskable interrupts are enabled, in addition to the USB OTG global interrupt, <S> in the configuration window, NVIC tab, of the CubeMX project.
<S> 2- Make sure you are using the correct mode (RTOS or standalone project), because the code generator may change things in RTOS mode, or when using DMA, because DMA usually considers the interrupt as a request event. <S> Your problem might be somewhere else that is preventing a write to this register, something like the lock sequence used for mapping I/O and peripherals. <S> 3- There are two things called GINTMSK: a bit in the OTG_FS_GAHBCFG register and a register called OTG_FS_GINTMSK. <S> 4- <S> Make sure the correct programming sequence is used: <S> 5- Search in the STM32Cube directory /Projects/STM32xx-Discovery/Applications/; in this folder you will find multiple projects on USB applications depending on your device. Use these as starter code and compare their initialization to your code. <A> So, here is the resolution. <S> @ElectronS was correct; sometimes you have to assume you don't know anything for sure. <S> For example, I knew for sure that my 24MHz external oscillator was working fine because the whole core was running from it, according to the code generation configurator. <S> Well, apparently the STM32 will not allow you to select the external oscillator if it is not running. <S> Or something. <S> I don't have a good explanation. <S> Here is the core of the core of the problem: the Abracon ASDMB datasheet. <S> Where the datasheet labels the pin "Standby", what it actually means is the opposite polarity, <S> which might more accurately be written as \$\overline{Standby}\$... <S> tl;dr the oscillator Enable pin was pulled low. <S> Thanks for all the feedback and for pushing me to challenge my established thinking that had me stuck. <A> I think what was happening is that, even though your oscillator was not providing a clock, the microcontroller was running at power-on from its internal RC oscillator. <S> Here's a quote from "Things to remember when coding STM32": "Don't forget to enable the external or internal oscillator that you need to use for SYSCLK. <S> At power-on, by default the internal RC oscillator (HSI) is enabled."
There should also be a startup timer that holds the micro in reset until the oscillator is stable and the PLL has locked (I used that in dsPIC processors); if the clock is not stable, the switchover from the internal RC to the external oscillator does not happen and the program gets stuck.
Minimise backlash in DC geared motor I am new here; I actually work in application programming. As a hobby I started to work with Arduino and DC motors. Soon I ended up with backlash problems. Right now I am using a rotary encoder which gives 4000 counts per rotation. This is with a 12v planetary geared DC motor. I also tried with a spur geared motor. I am able to control the motor. But it's not very precise. The difference between counts changes each time. For example, when I try to stop the motor at 2000 counts it will stop anywhere between 2004-2016 counts. This problem is due to backlash. I connected the encoder to the motor shaft using a timing pulley. What if I attach a disc to the motor shaft and stop it using a solenoid? Has anyone ever tried something like that, or any other inexpensive idea? I am from India and I do not get precise products easily here. I can post pictures of the setup if needed. Should I give up on the DC motor and opt for a stepper? <Q> Why don't you use a control loop to integrate out the count error? <S> You want to stop at 2000 and without a controller it stops at maybe 2010. <S> If you had a feedback system it might overshoot up to 2016 <S> but the feedback would bring the output shaft rapidly back to position 2000. <S> Can you live with this type of overshoot? <S> More sophistication can be incorporated so that the motor begins to decelerate as position 2000 is reached. <S> A three-term (PID) controller might be what you need, and the basic overshoot could possibly be halved or better. <S> However, if the optical encoder is not on the output shaft then it won't work. <A> Backlash is caused by the gear teeth not fitting tightly together. <S> The result is that the gearbox output shaft will sit in a different position depending on whether it is pulled in one direction or another. <S> If the load always pulls against the direction the gearbox is driving then the gears will stay together and backlash shouldn't be a problem.
<S> However I think your problem is not backlash, but inertia . <S> when I try to stop motor at 2000 counts it will stop anywhere between 2004-2016 counts. <S> This suggests that the gearbox is not stopping as soon as you stop the motor. <S> When you remove power from the motor it won't stop immediately because it takes time for the armature and gears to decelerate, and the load may also have inertia which pulls the gearbox output along with it. <S> The only way you can fix this is by stopping the motor before the count gets to where you want it. <S> For example, if you know it runs on by 10 counts then you can stop at 1990 counts and let it run on to 2000. <S> Getting an exact run-on count can be difficult if the load varies. <S> If you make motor speed proportional to positioning error then it will automatically get slower and slower as it approaches the target position. <A> You can use a relay to stop the DC motor quickly: make a circuit that shorts the motor's terminals after the relay contact changes over.
You can reduce this problem by slowing the motor down as it gets close to the target position, then it will take less time to stop and the run-on count variation will be less. If the load may pull in either direction then you might be able to bias it to one side with a spring.
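The "speed proportional to positioning error" idea can be sketched with a crude simulation. The motor model here (run-on proportional to the speed at cutoff) and all the numbers are invented purely for illustration; the point is only the qualitative difference between full-speed cutoff and a tapered approach:

```python
def move_to(target, proportional):
    """Crude positioning model: each tick advances by `speed` counts,
    and when the drive cuts off the load coasts on by speed//2 counts
    (a stand-in for motor/gearbox inertia)."""
    pos = 0
    while pos < target:
        error = target - pos
        if proportional:
            speed = max(1, min(100, error // 4))  # taper near the target
        else:
            speed = 100                            # full speed until cutoff
        pos += speed
        if pos >= target:
            return pos + speed // 2                # inertia run-on at cutoff
    return pos

print(move_to(2000, proportional=False))  # overshoots well past 2000
print(move_to(2000, proportional=True))   # creeps in and lands on 2000
```

In the full-speed case the model coasts 50 counts past the target; with the proportional taper it arrives at crawl speed, so the run-on shrinks to nothing, which is exactly the behaviour the answer describes.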
Mounting a LCD display to a PCB I'm working on a project that involves this LCD for a PCB. From the drawing , I can easily tell how to connect the pins. But I am not sure how to tell from the drawing how the display should be mounted. I've heard of other displays that will have little arms or bars you can solder down. Not sure what the appropriate way would be to go for this display. What would be the best way to attach this LCD to a PCB? <Q> From the drawing, I don't see any mounting provision. <S> It could also snap into a suitable molded bracket on a plastic panel. <S> If you want to mount it to a PC board, you will probably have to design some standoff/clamp assembly yourself. <A> It appears that this LCD module is meant to be attached to a PCB with very thin double-sticky tape (the D.S.T. abbreviation in the drawing) - if I am reading the drawing correctly, then it says that the thickness of the DST is only 0.05 mm. <S> Such a tiny gap between the LCD module and the application PCB is possible because the LCD's FPC connection tail is soldered down rather than connectorized. <S> Now the really strange part is that some other LCD modules out there have FPC tails that are meant to go into a ZIF connector (not soldered down like this one), yet they still somehow expect the LCD to sit flat against the application PCB, even though the thinnest FPC connector I could find (Hirose FH33 series) takes up 1.20 mm of vertical space (height above PCB). <S> I still haven't figured out how those are supposed to be mounted - but your LCD and other LCDs with solder-down tails are easy in comparison. <A> In my previous career, I designed custom lcd modules for customers designs. <S> One of the best ways to mount the glass to a pc board is to use a square of double stick tape, preferably with .032" of foam. <S> The foam acts as sort of a shock absorber. <S> If you use quality tape the lcd will never come loose. 
<S> This will only work if the flex tape is long enough and flexible enough. <S> Most all of the small, custom tn numeric/icon display modules with on board driver ics, use this technique. <S> It works well for small hand held instruments that are likely to get dropped.
I expect that the display is intended to be mounted directly to the front panel of the equipment, either by clamping or by using glue.
VDDCORE & ENVREG connection when it is not used I have 2 PIC18 chips: 1- PIC18F46K80, working with 5V supplied from an external regulator; it has a VDDCORE pin. 2- PIC18F65J90, working with 3.3V supplied from an external regulator; it has VDDCORE & ENVREG pins. I read the datasheet more than once and I didn't come to a clear conclusion. The question is: what should I do with those pins, given that I am not using the internal regulator feature? Should I tie them to ground or just let them float, or what should I do? <Q> It looks like the PIC18F65J90 is a part of the PIC18F85J90 family. <S> In section 23.3 On-Chip Voltage Regulator of the datasheet: <S> All of the PIC18F85J90 family devices power their core digital logic at a nominal 2.5V. <S> For designs that are required to operate at a higher typical voltage, such as 3.3V, all devices in the PIC18F85J90 family incorporate an on-chip regulator that allows the device to run its core logic from VDD. <S> So that says the core digital logic modules need 2.5V. <S> The regulator is controlled by the ENVREG pin. <S> When the regulator is enabled, a low-ESR filter capacitor must be connected to the VDDCORE/VCAP pin (Figure 23-2). <S> This helps to maintain the stability of the regulator. <S> The recommended value for the filter capacitor is provided in Section 26.3 “DC Characteristics: PIC18F84J90 Family (Industrial)”. <S> If ENVREG is tied to VSS, the regulator is disabled. <S> In this case, separate power for the core logic at a nominal 2.5V must be supplied to the device on the VDDCORE/VCAP pin to run the I/O pins at higher voltage levels, typically 3.3V. <S> Alternatively, the VDDCORE/VCAP and VDD pins can be tied together to operate at a lower nominal voltage. <S> Refer to Figure 23-2 for possible configurations. <S> Figure 23-2 shows your three options to supply the digital logic modules with 2.5V.
Since you are supplying 3.3V, I would go with the first option and let the internal regulator provide the 2.5V for you. <S> Tie ENVREG to 3.3V and use the appropriate Vcap. <A> This was originally a comment: <S> VDD is the positive supply voltage pin. <S> ENVREG is the on-chip voltage regulator enable, which can be supplied with 5V unless specified otherwise. <A> The microprocessor core runs on 2.5 volts. <S> The I/O pins and other peripheral logic can be run from a higher voltage (VDD). <S> VDD does not supply power to the core. <S> You must supply your own 2.5 volts, or use the built-in regulator to get it. <S> There is no option to not supply voltage to the processor core. <S> Now the disposition of ENVREG is apparent. <S> If you use the internal regulator, tie it to VDD. <S> If you don't, then tie it to ground.
Tying VDD to the pin enables the regulator, which in turn provides power to the core from the other VDD pins.
How to check what percentage of charge is left in my lead acid battery? http://www.allbatteries.co.uk/media/pdf/AMP92108_FR.pdf I want to know what percentage of the battery charge is left without buying any fancy equipment to measure it. <Q> That looks like a lead acid battery with 2 cells. <S> Here's a table of rough voltages to expect (computed using values from here): <S> 100%: 4.22 V; 75%: 4.15 V; 50%: 4.08 V; 25%: 4.02 V; 0%: 3.96 V. <S> If you want to go more in depth, I would recommend reading this page , as well as the links inside that page. <S> The short answer is "it's complicated", and specialized equipment is probably the best answer. <A> If you don't want fancy gear and you are using flooded lead acid batteries, then the hydrometer could be for you. <S> It measures the specific gravity of the electrolyte. <S> Pure water, which would represent a totally discharged (possibly beyond redemption) cell, is represented by a reading of 1000. <S> Sulphuric acid is denser than water. <S> The more concentrated the acid is, the higher the hydrometer reading. <S> For example, a good cell in a Lucas 90AH marine starting battery would read 1260. <S> Precise readings will have some dependence on temperature and battery chemistry. <S> I used this in 1980 after my Lambda DVM blew up, and I was never left stranded on my experimental electric bike that was powered by a thirsty truck starter motor. <S> The hydrometer would put holes in clothes that looked like cigarette burns if you weren't really careful. <S> The hydrometer allowed knowledge of impending cell problems which the DVM would not. <S> Maybe the hydrometer was better than the DVM. <S> Maybe acid concentration could be measured directly these days, but that means fancy gear. <A> If you have access to the battery acid itself, a refractometer is a very precise instrument for measuring state of charge. <S> Otherwise, you need a multimeter to measure the resting voltage.
<A> Although there are a few basics that can give you an idea, there's no flawless and accurate method to determine how healthy a battery is. <S> Voltage at rest is no guarantee. <S> Winny's suggestion of a hydrometer is a good one (it will tell you a fair bit about the condition) but potentially not very practical for battery monitoring. <S> I have two lead-acid batteries of identical capacity; one was damaged by over-discharge and the other is used but in good condition. <S> They both measure a healthy voltage at rest, but the damaged one drops like a stone under load and has less than 5% of its rated capacity available. <S> A simple voltage test would not tell the difference at rest, although it would obviously see the bad one dropping off. <S> If you know you're starting with a good battery then the voltage at rest can give you a basic indication, <S> but it's not perfect. <S> Serious systems do all sorts of things like coulomb counting and profiling, but then they also tend to be using quality batteries with well specced behaviour that the system can be calibrated for & rely upon.
Luckily, assuming a relatively healthy battery, you can get a rough idea of the charge level by just measuring the open-circuit voltage. I measured the capacity using a load tester at a C/20 rate, logging the voltage over time.
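The rough open-circuit-voltage table from the first answer can be linearly interpolated into a simple state-of-charge lookup. This inherits all the caveats above (battery must be rested and healthy; the table values are for this particular 2-cell battery):

```python
def soc_from_voltage(v):
    """Linear interpolation over the rough 2-cell open-circuit
    voltage table quoted in the first answer. Returns percent."""
    table = [(3.96, 0.0), (4.02, 25.0), (4.08, 50.0),
             (4.15, 75.0), (4.22, 100.0)]
    if v <= table[0][0]:
        return 0.0            # at or below the "empty" voltage
    if v >= table[-1][0]:
        return 100.0          # at or above the "full" voltage
    for (v0, s0), (v1, s1) in zip(table, table[1:]):
        if v0 <= v <= v1:
            return s0 + (s1 - s0) * (v - v0) / (v1 - v0)

print(soc_from_voltage(4.08))  # 50.0
print(soc_from_voltage(4.30))  # 100.0 (clamped)
```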
Schottky barrier diode as a bridge rectifier? I am building a linear power supply. I need a bridge rectifier and the diodes I currently have can only handle up to a maximum of 1 amp, but I need 4-5 amps. But I have a bunch of YG862C15R Schottky barrier diodes . Can I somehow use them as a bridge rectifier or have I got this all wrong? Since it only has three pins, two for the AC IN and only one for the DC OUT (I guess), won't there be any negative? Like in an IC rectifier which uses four pins. How do these work? <Q> They are not normally used in regulators because they are more expensive and sometimes physically larger. <S> The advantages of a Schottky diode (low forward voltage, fast switching) will not provide any benefit for an ordinary bridge rectifier. <S> The best option for hobby projects is to get a purpose-made bridge rectifier that has all of your rectifier diodes integrated in one package. <S> These cost about $2 in small quantities. <S> However, the specific part you have selected should not be used in a 120V bridge rectifier, because it will experience a peak-to-peak reverse voltage of 336V, well above the maximum rated 150V reverse voltage for this part. <S> Don't do it. <S> And make sure you have a GFCI or isolation transformer between mains and your power supply. <A> Silicon Schottky diodes have a much lower voltage drop than the standard silicon types. <S> You can't use them on mains, because they don't make them with high breakdown voltages. <S> In fact, they are difficult to find above 200 V. There is a manufacturing tradeoff between breakdown voltage and forward voltage drop. <S> In your low-voltage application you could use say 40-volt Schottky diodes, and they would waste about half the power of a standard bridge rectifier. <S> I have used three TO220 isolated packages on battery chargers in a previous life. <S> On larger stuff I would use four TO220 packages and four TO247 packages for special occasions.
<S> Nowadays I bypass them with MOSFETs, because they are available with low enough on-resistance to beat the Schottky diode when the output voltage is low. <A> Yes, you can use these for a bridge rectifier 'somehow', but you will need three packages. <S> You can use one package for the two diodes from AC to +ve, but you will need to use two separate packages for the two AC to -ve diodes. <S> It doesn't matter much what you do with the spare diode in the -ve packages; leave it unconnected or parallel it with the other diode.
In this application Schottky diodes will work just as well as normal diodes, by arranging them as you normally would a bridge rectifier.
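The "about half the power" claim from the answer above can be checked with rough numbers. The forward drops assumed here (~0.9 V for silicon, ~0.45 V for a Schottky at this current) are illustrative; check the actual datasheet curves at your load current:

```python
def bridge_loss(v_forward, i_load):
    """In a full bridge the load current always flows through two
    diodes in series, so conduction loss is roughly 2 * Vf * Iload."""
    return 2.0 * v_forward * i_load

si = bridge_loss(0.9, 5.0)         # silicon bridge at 5 A: ~9 W
schottky = bridge_loss(0.45, 5.0)  # Schottky bridge at 5 A: ~4.5 W
print(si, schottky)                # the Schottky wastes about half
```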
Is there any downside to using a larger than needed smoothing capacitor? I work with low-power DC voltage regulators. I am already aware of the formula to calculate the size of smoothing capacitor(s). This can be an iterative process of testing one size with a scope and then using a larger size or adding more until the scope shows acceptable (very low) levels of ripple and noise. Besides the cost of the capacitors, is there any tradeoff to rounding up (a lot) and just using a very large capacitor(s) rather than trying to calibrate the sizing to "just enough" but not more than that? <Q> As far as caps go, there are two competing requirements: long-term (ripple) and instantaneous (spike). <S> A big electrolytic can give you the former but not the latter. <S> Generally you parallel your large electrolytic with a smaller 0.1uF capable of supplying that instantaneous spike whilst the electrolytic lumbers into action. <S> Or the 0.1uF may be for local decoupling to stabilise that regulator. <S> If the specified capacitor is actually 0.1uF or smaller, then the intention of the capacitor is to supply small amounts of charge very fast. <S> Do not replace this with a bigger electrolytic - that's definitely a case where larger is worse, not better. <S> Going past that, you'll have to tell us what kind of regulators you're dealing with. <S> If it's just a basic linear regulator then it doesn't really matter. <S> If you have a switching regulator though, the capacitor will affect the resonant frequency of the switcher, so be very careful there. <A> A larger than minimum smoothing capacitor on the output of a transformer and rectifier will give you lower ripple, which is a plus. <S> It's a small plus however, as even doubling the size of the capacitor will only (roughly) halve the ripple. <S> Anything downstream of a large capacitor will need to have significant Power Supply Rejection Ratio (PSRR) to cope with the ripple.
<S> There are cheaper ways of improving this by a factor of two than doubling the size of the Big Filtering Capacitor (BFC). <S> The downside to a larger BFC is that it will draw larger, shorter current pulses from the input transformer and rectifier. <S> This can cause a number of problems, though most are small, or can be mitigated. <S> a) <S> Higher electromagnetic interference generation, due to larger current pulses, and higher currents being switched off in the diodes. <S> b) Slightly hotter diodes and transformer, due to larger RMS current. <S> c) Poorer input power factor. <S> A sniff of inductance somewhere in the supply (AC input, transformer leakage inductance, post transformer or post diode) will reduce the magnitude and extend the length of the rectifier pulses, improving all of the above. <A> Note: my interpretation of the OP's post is that we are talking about capacitors on the output of voltage regulators; some other posts seem to assume the asker is talking about capacitors on rectifiers. <S> A larger output capacitor takes longer to charge, which means more stress on the regulator during startup, and in extreme cases may even cause an overcurrent shutdown of the regulator. <S> It can also cause problems for loads which don't handle undervoltage very well. <S> Having said that, I don't think there is any point trying to micromanage the size of such capacitors. <A> From Andy aka's comment: If the supply you are using has specific output capacitor requirements , then make sure you follow them. <S> For all these types of regulator linked (LDO), there is usually a minimum capacitance only. <S> (search the datasheet for ESR). <S> If you are using a switch-mode regulator, then the output capacitor (in current mode controllers) determines the output pole and zero . <S> In voltage mode converters, it forms a resonant circuit with the output inductor. <S> In both cases, we must provide loop compensation and that is partly determined by the value of the output capacitor(s).
<S> (Note: I am aware that using ceramics on the output of a current mode device requires other techniques to provide an output zero, as a ceramic capacitor zero is too high in frequency to be useful). <S> These capacitor(s) must be carefully chosen; changing these values requires re-assessing the loop compensation components, or it is quite possible loop instability can result. <S> This re-assessment may also reduce the loop bandwidth of the supply, reducing transient performance. <A> Here's another point: many modern converters are protected against shorts or overloads in the output circuit. <S> Such protection is a must for lab PSUs and a nice feature for all PSUs with connectors, since the ability to connect different loads increases the risk of shorts and overloads. <S> Having a big cap on the output reduces the effectiveness of such protection, since more energy is available to do the damage before the protection cuts the power off. <A> On the face of it, bigger is better, for reasons that are well documented elsewhere. <S> If the cap gets really big there will be problems with inrush current. <S> On a small power supply the transformer should keep this down to a reasonable value. <S> When rectifying mains into a cap filter, the peak currents in the diodes can be several times the average DC output current. <S> This is well documented elsewhere. <S> This peakiness of diode current causes poor power factor and bad line current THD. <S> If your source impedance is low, the bigger cap will make this worse. <S> Generally you can use the bigger cap on a small transformer-based system without having to add any other parts. <S> Larger systems can be made to work well by employing a line reactor on the AC side or a small choke on the DC side. <S> If you are putting a very large smoothing cap on the output of a buck converter there is a risk of instability, which may need a small inductor to mitigate by decoupling the big cap.
<A> Larger capacitors also have more parasitics (e.g. equivalent series resistance and inductance). <S> This is what "slows them down", so to speak.
The main downside of a bigger capacitor is that the switch-on rise time and switch-off fall time will be greater. In most cases, allowing a generous margin (a factor of 2 or more) over what you think you need is unlikely to be a problem.
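For a feel for the trade-off discussed above, the standard full-wave rectifier approximation ΔV ≈ I / (2·f·C) shows how ripple scales with reservoir capacitance (the load current and mains frequency below are illustrative assumptions, not values from the thread):

```python
# Illustrative numbers only: how reservoir capacitor size trades ripple
# against recharge behaviour in a full-wave mains rectifier.
f_mains = 50.0   # Hz (assumed)
I_load = 1.0     # amps of DC load (assumed)

def ripple(C):
    """Approximate peak-to-peak ripple for a full-wave rectifier, volts."""
    return I_load / (2 * f_mains * C)

for C in (2200e-6, 4700e-6, 10000e-6):
    print(f"C = {C*1e6:5.0f} uF -> ripple ~ {ripple(C):.2f} Vpp")
```

Doubling C halves the ripple, but the capacitor then recharges over a narrower conduction angle, which is exactly why the diode current pulses get taller and the power factor worsens.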
Why isn't Ohm's law working for this simple circuit? I have a 70W 12V-500V DC power converter, and something in these pictures is not adding up. I run the 500V output through a high-voltage 332kΩ resistor. Now, using the left multimeter I monitor the current through the circuit, and it runs up around 320mA. Using the right multimeter I first check the voltage on the power supply leads: 511V. (So right away we know something's off -- the supply isn't even getting warm after a few minutes of testing.) Then I use the right multimeter to check the voltage across the resistor: 12V! I verified the resistor's value with both meters. If all of this is true then Ohm's law suggests that either the voltage across the resistor should be 0.320A × 332kΩ ≈ 106kV, or else the current through the circuit should be 511V / 332kΩ ≈ 1.5mA. (Of course the right multimeter itself is providing a path for current, but its resistance should be very high. Indeed: when I remove the right multimeter from the circuit the current increases by only 2-3mA.) My best guess is that the output of the converter is not very smooth DC, or has some characteristic that is causing these multimeters to produce erroneous values. If so, what characteristic might that be, and how can I adjust for it? BTW, here's a close-up of the DC converter. Maybe the design will be familiar to someone. And in case the wiring isn't clear from the photos, here's how it was connected simulate this circuit – Schematic created using CircuitLab And here's some more information. First, to remove all doubt, here's the resistor being measured after all this, so it does not appear to have fried. And here is the whole shebang. If the ammeter on the DC supply can be trusted, and this power converter doesn't have the ability to produce power out of thin air, then the true circuit current is under 3.6mA. (This is consistent with the fact that I have observed no heat buildup on any component whatsoever.)
In which case the question is: Why is the Extech reading ~320mA? If I switch the Extech to the µA scale it reads around 3180 (still not right). The other multimeter reads 0A at all scales, which is consistent with the voltage drop seen across the resistor (which implies 36µA true current). Epilogue: I opened the Extech multimeter to find its 250mA fuse blown. Replacing that made it behave normally. Evidently it just has a very confusing failure mode! <Q> Can't see much on your photo. <S> But one thing is certain: Ohm's law is tough, but just. <S> You haven't connected something right. <A> At a guess, your 332k resistor is actually about a 37 ohm resistor. <S> Your nominal current (500 volts, 70 watts) is about 140 mA, so your real current is about twice this, which seems credible. <S> Start by running your supply with no load and verifying that it will put out 500 volts no load. <S> Then measure the resistance of your resistor. <S> If both of those tests are OK, in the words of Gregory Kornblum, you haven't connected something right, and looking at your tangle of wires <S> I'm not surprised. <A> Based on your voltage measurement, it looks like your resistor is about 38 ohms, not 332,000 ohms. <S> The current limit on your Extech multimeter is 200 mA according to the datasheet. <S> The meter is showing about 60% over that limit. <S> My guess is that you either partially blew a fuse or triggered some internal current limiting circuit, so most of the voltage is being dropped across the current meter. <S> Alternately, if the resistor actually was 332k, it might have burned out and failed short after dissipating 0.75W. <S> You'd probably have noticed a smell if that were the case, though.
The power supply has gone into current limit.
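The mismatch described in the question is easy to check numerically. A quick sketch using the figures quoted above (meter readings from the question):

```python
# Sanity-check the meter readings from the question using Ohm's law.
V_supply = 511.0    # measured supply voltage, volts
V_resistor = 12.0   # measured drop across the resistor, volts
I_meter = 0.320     # current reported by the Extech, amps
R_nominal = 332e3   # marked resistor value, ohms

# What resistance would be consistent with the two meter readings?
R_implied = V_resistor / I_meter      # ~37.5 ohms
# What current should the nominal resistor actually draw?
I_expected = V_supply / R_nominal     # ~1.54 mA
# Where is the rest of the voltage going? Somewhere else in the loop.
V_missing = V_supply - V_resistor     # ~499 V

print(f"R implied by readings: {R_implied:.1f} ohms")
print(f"Expected current:      {I_expected*1e3:.2f} mA")
print(f"Voltage unaccounted:   {V_missing:.0f} V")
```

The ~499 V unaccounted for has to be dropped somewhere, which points straight at the current meter itself, consistent with the blown-fuse epilogue.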
What are good quality of life additions to a prototype PCB? What additions are good to add to a prototype PCB that make your life easier? I have come across a few nice additions such as: Adding useful information on the silkscreen to designate functional areas/pins/traces. Revision number. LEDs for power and micro activity. Test points at important places in the circuit. Connecting a spare communication channel to a header. Are there others that are helpful? <Q> I'd add a large silkscreen rectangle, pref. on both sides - to add various Sharpie marks when necessary. <S> Lots of small bare copper islands to store SMT parts you have just removed but may need later - just solder one pin to the closest island :-). <S> Extra 0.1" jumpers where you may want to clip your current probe. <A> Two pads with holes about 1mm diameter, about 1/2 inch apart, both grounded. <S> Solder a thick wire rail to these, like a hitching post outside a Western saloon, <S> and you'll have a secure place for the crocodile clips on your scope ground <S> leads. <S> If you're going on to a small production run (50 or so), consider identifying every net that doesn't naturally connect to an external connector, and connecting it to a pin on a ribbon cable header (or a few small 10-way headers). <S> This can be a lower budget option than adding test points for a "bed of nails" fixture. <S> Then you can build a test fixture that attaches to your board with just a few ribbon cables, and allows you to test for things like shorts to power or ground without the expense of a full "bed of nails" fixture. <S> See Majenko's answer to <S> this Q&A <S> for a really neat trick with staggered pins, to make a good connection without fitting sockets... <A> This seems overly broad, really; but anyway - extra bypass capacitor pads (you may never populate them, but if you want them, it's nice to have them.)
<S> This is actually not uncommon to find in production boards, complete with silkscreened part numbers for the missing capacitors. <S> Most everything else depends a lot on the actual circuit. <S> Pads set up to make a solder bridge, or places set up to cut a trace easily, might make sense, but which things to do that for will vary, and it's always something of a gamble, since you should not need to do either if you get it right, but you may not guess right how you'll get it wrong. <S> It can be a good idea to provide unequivocal revision tracking and things like two-pin device polarity clues (square vs. rounded pad) <S> right in the copper - depends on the available real estate. <A> This will facilitate your cutting and modding if you find an error. <S> Again to facilitate cutting and modding. <S> Make available liberal points of 0V and Vcc around the board. <S> Sometimes the placement of caps is enough, but Sod's law dictates that the ONE signal you want doesn't have a convenient GND nearby, or when you need to add a cockroach mod there isn't a nearby power point... <S> If you have spare op-amps/comparators, terminate them correctly so that they do not cause rail issues when they are not in use. <S> You can then easily configure them to any of the std blocks simulate this circuit – <S> Schematic created using CircuitLab
Extra non-tented ground vias next to likely probe points - to solder a ground wire that will then be wrapped around a probe. Surface traces as much as possible. Every IC pin to go to a via (outside the device footprint).
Why did my oscilloscope hookup trip my RCD? I was attempting to investigate an SPI interface on a power line meter (typical model that measures Voltage, Frequency, Amps, Watts). I opened the module and found the SPI pins broken out. So I plugged in the meter and the oscilloscope, and connected the oscilloscope probe to the CLK pin, and the probe ground to the GND pin. A second or two after connecting the GND pin, the device IC caps blew off, and the RCD for the property blew and had to be reset. What did I do wrong? How did this blow the RCD? <Q> The oscilloscope's probe ground is connected to the earth (0V). <S> It's likely that the "ground" of the power meter is not really ground. <S> But, without a board schematic, it's difficult to debug. <S> To debug the circuit, a differential voltage probe would be best. <S> Otherwise, the probe ground could be connected to the earth prong in the power meter. <A> GND in the context of an electronic circuit usually refers to the reference rail against which things in that circuit are measured. <S> GND on external connectors will nearly always either be floating or tied to mains earth, <S> but once you get out your screwdriver and start connecting to ports that were only meant to be used for factory program/test/debug or connecting internal modules, then all bets are off. <S> If you are building a power meter with no external data connections, then it's generally easiest to tie your "circuit ground" to mains live. <S> That way you just need a simple capacitor-based transformerless power supply for powering the circuit (I expect that's what those big caps are for). <S> A series resistor measures current, and a resistive divider going off to the neutral measures voltage. <S> Even if I did have a data connection I'd be tempted to do it this way and then optoisolate the data connection. <S> Scopes usually tie the ground of their inputs to mains earth.
<S> You can get scopes with floating inputs but they are uncommon and expensive. <S> You can also get isolated probes, but again, they are expensive. <S> Circuit ground tied to mains live, scope ground to mains earth: the result is a short circuit from mains earth to mains live. <S> BANG. <S> I expect the way this device was debugged during development was to feed it from a floating-output isolating transformer. <S> Once that was done, the circuit ground could be connected to the scope without causing a short circuit. <A> Small AC-powered devices can use the live or neutral for GND (DC ground). <S> Your AC wiring probably has the live, neutral and protective earth (PE) wires. <S> Neutral may not be at the same potential as PE (live obviously is not). <S> The RCD compares the current flowing in the live wire with the current flowing in the neutral wire (they should be identical); if they don't match (which means that some current is flowing into PE) <S> the RCD will trip. <S> Your oscilloscope probably has the DC GND connected to PE through its power supply (there are also special fully-isolated scopes available). <S> When you connect scope GND to GND of your meter, you probably connect PE to live or neutral, which trips the RCD.
<S> As is the norm, <S> the oscilloscope input "ground" is connected to Earth, and when you attached the "ground" of your oscilloscope probe to the "ground" on the device you were effectively short-circuiting that "floating" rail to Earth/Neutral. <S> Had you monitored your scope after you attached your probe and before you connected your ground lead, <S> you may well have noticed what a large voltage was present on the probe. <S> What isn't immediately clear without knowing the specific design of the power supply is why the ICs were damaged. <S> Presumably this is not a case of the circuit "ground" simply being tied to the Active line. <S> It would make an interesting exercise to trace out the power supply circuit and see exactly how your ground connection led to the outcome you experienced. <S> It is a clear and important lesson not to make assumptions about what a "GND" is! <S> That applies in any circumstance, but is all the more important when this type of power supply is in play. <S> While not an exhaustive check, some simple meter checks (continuity check while disconnected, and both AC and DC voltage while powered) are always worthwhile between a known ground and a "suspected" ground. <S> And if you've gone ahead and connected your oscilloscope's probe anyway, it's also worthwhile to check that there's not anything unexpected going on there.
It may be floating and isolated, it may be tied to mains earth, or if you are unlucky it may be tied somewhere else. It's likely that the board's "ground" is actually at the neutral or line voltage, so it would create a circuit between neutral and ground or line and ground, which the RCD detected.
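As a rough illustration of the answers above, an RCD trips on the imbalance between live and neutral currents. A minimal sketch, with an assumed fault-path impedance (not a measured value):

```python
# Why the RCD tripped: a rough model of the fault path created when the
# scope's earthed probe ground was clipped to a live-referenced "GND".
# All values here are illustrative assumptions, not measurements.
V_mains = 230.0   # volts RMS (assumed European supply)
R_fault = 50.0    # ohms, assumed total impedance of the fault path
RCD_TRIP = 0.030  # amps, a typical 30 mA RCD rating

I_fault = V_mains / R_fault   # current flowing live -> earth via the scope
residual = I_fault            # none of it returns via neutral

print(f"Fault current ~ {I_fault:.2f} A")
print("RCD trips" if residual > RCD_TRIP else "RCD holds")
```

Even a high-impedance fault path produces a residual current orders of magnitude above the 30 mA threshold, so the trip is essentially instant.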
Advantage of using SMD components I know that SMD components are smaller in size compared to their through-hole equivalents. They enable a designer to reduce overall PCB size. Is there any other advantage of using SMD over through-hole components? By other advantage I mean benefits in parameters like power dissipation, noise immunity, stability etc. <Q> A major advantage of SMD is the ease of machine assembly. <S> The automation of the assembly process gives more uniform results. <S> When it comes to testing PCBs, mistakes tend to also be uniform, which makes things easier. <S> If all assembly were to be done by machines then it does not matter much from a cost standpoint where the machine is located. <S> This gives the customer the choice of where they want to do their manufacturing. <S> When you have through-hole stuff with lots of flying leads and heatsinks, there is a large part of the job that still has to be done by hand. <S> The cost of hand assembly varies greatly from country to country. <A> In addition, the small packages allow for tight power design, because components can be placed next to each other, which also decreases trace length and power loops. <S> As far as I understand it, this leads to less noise, EMI, and oscillation, hence a better product. <S> Although military applications still use through-hole (in some designs) and non-lead-free components and processes, because they have an advantage in resisting vibration and shock in harsh environments. <S> (This is a mechanical advantage over SMD.) <A> Using SMD components allows parts to be placed on both sides of the board. <S> With through-hole parts, the pins go through the board (duh), <S> so they take as much room on the bottom side of the board as they do on the top. <S> Sometimes this can be an advantage because this provides a free via for each pin. <S> But the holes for through-hole pins are much bigger than vias. <S> And you normally wouldn't have a via for each pin of an SMD part.
There is an advantage for SMD components, especially at high frequencies and in switching applications, because the leads are shorter, decreasing parasitic impedance/inductance.
Difference between optoisolator (optocoupler) and solid state relay? I have been looking into relays and optoisolators. I am aware that there are different optoisolators, with some having a transistor on the detector side which allows current to flow in one direction, whilst there are others with triacs on the detector side to allow current to flow in both directions. Upon reading up on relays, I found that they work in a similar manner but use mechanical switching, using electromagnets, to isolate the two sides of the circuit. I would have then expected solid state relays to be a sub-category of relays, but when I looked them up, by definition they perform the same function as an optoisolator. What is the difference between an optoisolator and a solid state relay, if any? Which is a sub-category of which and what are the differences in terms of speed and applications? <Q> A solid-state relay typically contains an opto-isolator along with some circuitry to switch a large amount of current in response to the small current switched by the opto-isolator. <A> Most common solid state relays use back-to-back series MOSFETs as the power switching element, allowing them to deal with AC. <S> To get a good drive voltage to the isolated gate, a photovoltaic cell is used, and this is a significant difference - the LED in the coupling produces light and this generates several volts DC in the photovoltaic cell to strongly activate the channels of the MOSFETs. <S> These types of solid state relay tend to be a bit slow at switching and there can be a significant power dissipation when turning on or off, <S> but solid state relays tend to be used as on/off controls like mechanical switches and are usually not operated with a PWM signal. <A> I would have then expected solid state relays to be a sub-category of relays, but when I looked them up, by definition they perform the same function as an optoisolator.
<S> Relays are mechanical and until the advent of solid-state electronics they were the best practical way to switch high-power loads on and off. <S> Eventually wear and tear will result in failure of the mechanism or the contacts will wear out - particularly if they spark. <S> Note that relays provide electrical isolation between the control circuit (which operates the coil) and the load (switched by the contacts). <S> This is clearly shown in the schematic symbol. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Another problem with the relay is that switching is asynchronous with the mains. <S> Switching on or interrupting power at mid cycle is the worst case for the contacts and for generating electrical noise. <S> The solid-state relay (SSR) addresses these problems by the optional inclusion of a zero-cross detection circuit so that power is only switched on when voltage is zero <S> and they nearly all will finish off the current half-cycle when the control signal is turned off. <S> With no moving parts the device should never wear out and, as you correctly stated, the control circuit is isolated from the load. <S> What is the difference between an optoisolator and a solid state relay, if any? <S> Which is a sub category of which <S> and what are the differences in terms of speed and applications? <S> Opto-isolators are used for signal isolation between circuits in the mA range. <S> SSRs are used to switch power in the amps range (0.1 to hundreds). <S> One disadvantage with SSRs is that, when on, a little voltage is dropped across them and they dissipate some heat. <S> For more than a couple of amps a heatsink is required.
Opto-isolators are designed to switch a small amount of current.
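To put a number on the heatsink remark above: the SSR's on-state voltage drop times the load current is dissipated as heat. A small sketch with an assumed ~1.2 V drop (typical of triac-output SSRs, an assumption rather than a quoted figure):

```python
# On-state dissipation of a solid-state relay: P = V_drop * I_load.
# The 1.2 V drop is an assumed typical triac-output SSR figure.
V_drop = 1.2   # volts across the SSR when conducting

def ssr_dissipation(i_load):
    """Heat dissipated in the SSR at a given load current, watts."""
    return V_drop * i_load

for i in (0.5, 2.0, 10.0):
    p = ssr_dissipation(i)
    note = "heatsink advisable" if p > 2.0 else "ok bare"
    print(f"{i:4.1f} A -> {p:4.1f} W  ({note})")
```

This is why a mechanical relay, with its near-zero contact drop, can still beat an SSR for efficiency at high currents.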
Assigning x in Verilog Assume there exists a 1-bit data output port and a 1-bit dataValid output port for a module. Is it OK to assign 1'dx to the data output when dataValid is assigned 0? Will this create synthesis issues? EDIT: X optimizations seem to happen in both Synopsys DC and Cadence RC. Consider this code (a and b are 1-bit inputs and c is a 1-bit output):

always_comb
  if (!a && !b) c = 0;
  else if (a && !b) c = 1;
  else if (!a && b) c = 1;
  else c = 1'dx; // Don't care about this value.

This gives an OR gate, which is an optimized solution as opposed to the XOR which would have been inferred if c were set to 1'd0. This is a simple example but it seems to prove that synthesis tools do perform X optimization. Considering nasty X propagation bugs and verification troubles, are X assignments worth the saved area? <Q> Such an assignment is useful during simulation, to make the waveform plots a little clearer. <S> Most synthesis tools will simply ignore any assignment to x. <S> YMMV. <S> Assigning z to an external pin can be used to denote a tristate driver, but most FPGAs have limited, if any, support for internal tristate buses. <A> This will make life slightly easier in the long run. <S> If you want to optimise don't-care logic, do that explicitly (and pick the optimum value for the unused state). <S> That way, you get consistent behaviour more of the time (hopefully always). <A> The tool support for 'x as don't care seems to be pretty good. <S> DC supports it, as do the Intel FPGA (Altera) and Xilinx tools. <S> I think it is generally a good idea. <S> Typically, and for your example, 'x propagation in simulation should not happen, because other logic should not care what the value of c is under the conditions that write 'x. <S> (Otherwise it isn't "don't care", is it?) <S> In fact, using 'x when you believe the value doesn't matter can help identify bugs. <S> However, there are cases where it can cause trouble.
<S> For example, serial transceiver protocols often use self-synchronizing scramblers, which would break down in simulation if some of the bits in the input data are 'x, even if you don't care what those bits end up as in the receiver, and even if it works fine in hardware. <A> There is no such thing as X in synthesis. <S> It is mainly used in simulation to catch any data-line related issues.
Never assign an X in a reachable code-path, only use X for propagating simulation unknowns.
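The don't-care optimization in the question can be checked exhaustively in a few lines. This sketch (in Python, just to enumerate the truth table, not synthesizable code) shows why the synthesizer is free to pick an OR gate once the a=b=1 row is marked don't-care:

```python
# Exhaustive check of the don't-care example from the question:
# the spec fixes c for three input combinations and leaves (a=1, b=1) free.
spec = {(0, 0): 0, (1, 0): 1, (0, 1): 1}   # (a, b) -> required c

def or_gate(a, b):
    return a | b

def xor_gate(a, b):
    return a ^ b

# Both gates satisfy every specified row...
for (a, b), c in spec.items():
    assert or_gate(a, b) == c and xor_gate(a, b) == c

# ...so with the don't-care the tool may pick whichever is cheaper (OR);
# forcing c = 0 for a=b=1 would rule OR out and require the XOR.
print("OR(1,1) =", or_gate(1, 1), " XOR(1,1) =", xor_gate(1, 1))
```

The two implementations differ only on the unconstrained row, which is exactly the freedom the 1'dx assignment hands to the synthesizer.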
Can I somehow use one PWM output to set various LEDs to different brightness? I would like to build a project in which 15 groups of LEDs are set to a brightness level. I have an Arduino with digital PWM outputs (which can mimic a range of voltages.) How can I do this? One idea: connect each LED to a capacitor and use a transistor array to charge these capacitors with a voltage which will produce a controllable brightness in the LEDs? I was thinking that I could charge each capacitor with a different voltage, then re-charge them every 10th of a second or so to maintain the set level of brightness. If so: Will I need to use a small capacitor to turn the initial PWM output into a voltage? How big will the individual LED capacitors need to be to maintain illumination? How often will the capacitors need to be charged? How will I need to use resistance in the circuit? simulate this circuit – Schematic created using CircuitLab <Q> This is not exactly what you asked for, but if you want to control multiple LEDs with a single signal, I would look into the WS2812 (and similar) family of LEDs and LED drivers. <S> They only need a one-wire signal (plus supply rails) to control a very large number of LEDs (the more LEDs, the lower the "framerate" you get). <S> Any "DIY" implementation of a single PWM to drive multiple LEDs with independent brightnesses would require much more than a "transistor array" and capacitors. <S> (Although, yes, an IC is an array of transistors, but what I mean is that doing it from scratch might not be a viable solution.) <S> If you are willing to use more than one signal, you can look into ShiftPWM <S> (based on 595 ICs), or into I2C-based port expanders, such as the PCA9685. <S> The TLC5940 might also be useful. <A> Yes, it is possible to connect each LED to a capacitor. <S> But we don't understand why you think you need any capacitor to implement a brightness level.
<S> LEDs are typically directly driven and use the human persistence of vision to integrate the brightness. <S> That allows for faster control of brightness. <A> Will I need to use a small capacitor to turn the initial PWM output into a voltage? <S> How big will the individual LED capacitors need to be to maintain illumination? <S> How often will the capacitors need to be charged? <S> You don't need any capacitors at all. <S> So you don't need to charge any capacitors. <S> How will I need to use resistance in the circuit? <S> You need resistance in your circuit to limit the current through the LEDs. <S> Else they will self-destruct. <S> We typically connect the positive end of the LED (or string of LEDs) to a positive power supply, and then use a transistor between the negative side of the LED(s) and circuit ground. <S> Then you can drive the transistor from any kind of logic-level output without having to worry about how much current the LEDs require.
And use PWM to control the power going into the capacitor/LED. No, you don't need a capacitor of any size to control the brightness of LEDs.
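The current-limiting resistor mentioned above follows the usual rule of thumb R = (V_supply − V_f) / I_LED. A small sketch, with an assumed forward voltage and target current (typical values, not from the post):

```python
# Series resistor for an LED: R = (V_supply - V_forward) / I_led.
# The defaults below are assumed typical values (red LED, 20 mA).
def led_resistor(v_supply, v_forward=2.0, i_led=0.020):
    """Ohms of series resistance needed to limit LED current."""
    return (v_supply - v_forward) / i_led

r = led_resistor(5.0)           # Arduino-style 5 V rail
print(f"R = {r:.0f} ohms")      # -> 150 ohms; pick the next standard value up
```

Blue/white LEDs have a higher forward voltage (~3 V), so the same formula gives a smaller resistor for the same rail and current.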
Is the wavelength of the light emitted from an LED at its turn-on voltage greater than or less than the peak wavelength of the LED? Ok, so I know that the intensity of the light given by an LED is the minimum it can be at the turn-on voltage; however, I don't know if the wavelength of the light emitted from the LED at the turn-on voltage will be below the peak wavelength (the wavelength at which the intensity of the light of the LED is at its maximum) or above it. My reasoning comes from the picture below: The picture shows that for a specific colour of LED, for any intensity other than the peak intensity there can be two wavelengths associated with it. For instance, for the blue LED, there will be two values for wavelength for when the intensity is 20%. So, my question is: Is the wavelength of the light emitted from an LED at its turn-on voltage greater than or less than the peak wavelength of the LED? <Q> The wavelength distribution will look like the graph, pretty much regardless of current at a given temperature. <S> LEDs are not purely monochromatic. <S> However, as the die heats (and it will tend to heat more at higher current) <S> the center of the spectrum will shift toward the red for all LEDs (longer wavelength). <S> From this OSRAM document, you can see a typical change of about +70 pm/K of die temperature. <A> First off, as pipe said, you are misinterpreting the graph. <S> The graph merely shows the composition of the output spectrum when operating in a normal regime, not as a function of relative output power. <S> However, to answer this <S> : Is the wavelength of the light emitted from an LED at its turn-on voltage greater than or less than the peak wavelength of the LED?
<S> When an LED is "on" (operating in a normal regime, when the forward voltage is greater than the turn-on voltage and the forward current is less than the maximum), it emits a very narrow spectrum of light, centered around a wavelength that is determined by manufacture and temperature (and therefore, to a small extent, forward current). <S> By cooling an LED, the wavelength decreases. <S> This link shows an LED going from orange to yellow when cooled in liquid nitrogen. <S> Likewise, heating an LED increases the wavelength. <S> Consequently, when you apply a large current to an LED, it heats up and the wavelength increases. <S> When operating the LED at the turn-on voltage and a very small amount of current is passing through it, the LED is dissipating less energy than in "normal" operation, so the temperature will be somewhat lower than usual. <A> LEDs are not monochromatic, meaning that their output emission is composed of multiple different wavelengths of light. <S> Your diagram is just showing the normalized amplitudes of the different wavelengths it produces. <S> As a comparison example, consider an incandescent lightbulb. <S> White light as we see it is composed of light of all visible wavelengths. <S> A 3000K bulb looks orange because the most powerful emissions that we can see are those on the yellow-red side of the visible spectrum. <S> Note how most power is converted into infrared energy, just radiating heat. <S> Very large range of wavelengths. <S> On the other hand, a laser diode can be classified as monochromatic. <S> Notice how narrow the range of emitted light is (0.5-1 nm) in comparison to the incandescent bulb. <S> Its emission contains a much smaller range of wavelengths. <S> Addition: <S> In the middle of the bulb and laser emission bandwidth are LEDs - a small enough wavelength range (about one hundred nm) for them to be considered a specific color but not small enough to be considered monochromatic.
<S> Your diagram appears to show the spectra of multiple different LED diodes, here is a diagram showing the spectra of three different LEDs - "common blue LED, a yellow-green LED and a high brightness red LED from the bottom of a Microsoft optical mouse" Image, Description Source and more info
So it's possible that the wavelength is a teeny, tiny bit lower than typical, but realistically, it's exactly the same as when it's operating normally.
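Using the ~+70 pm/K figure quoted from the OSRAM document above, the thermal wavelength shift is easy to estimate (the 50 K temperature rise below is an illustrative assumption):

```python
# Estimate of LED peak-wavelength shift with die temperature,
# using the ~ +70 pm/K figure quoted above (typical, not exact).
SHIFT_PM_PER_K = 70.0

def wavelength_shift_nm(delta_T):
    """Peak wavelength shift in nanometres for a die temperature rise in K."""
    return SHIFT_PM_PER_K * delta_T / 1000.0   # picometres -> nanometres

# A die running 50 K hotter shifts the peak by only a few nanometres:
print(f"{wavelength_shift_nm(50):.1f} nm")   # -> 3.5 nm
```

A few nanometres is tiny next to the tens of nanometres of spectral width an LED already has, which is why the emitted colour at turn-on is, for practical purposes, the same as in normal operation.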
How to know the part number that I want? I am new to electronics, but I have got to know how to make a circuit and make it work. The problem I am facing is that I need to know the part number based on my requirement. Suppose I want to buy an AND gate which works at 9 Volts; then what is the part number for it? How can I find the part numbers that I want? Can I find them on the internet? It might be a silly question for many of you... unfortunately it is bothering me a lot how to know the part number... <Q> You take your requirements, your specs, and enter them into a parametric search on an electronic part distributor like Digikey or Arrow or any local one. <S> You can search by voltage, resistance, wattage, and any number of different requirements. <S> For example, in the Logic Gate (Single Function) category on Digikey: You can filter by Manufacturer, Packaging, Series, Logic Type, how many gates in a single IC, how many inputs, whether it's a Schmitt Trigger, Voltage range, etc. <S> You can filter by many different specs that suit your needs. <S> At that point, it narrows it down, and you can pick one based on price or other features. <S> They also tend to link to datasheets so you can confirm it works as you need. <A> They will run on a wide variety of voltages, they are inexpensive (when you blow them up!) and readily available. <S> 4000 series AND gates include:
4073 - Triple 3-input AND gate
4081 - Quad 2-input AND gate
4082 - Dual 4-input AND gate
<S> Remember that an AND gate is not the same as the more-common "NAND" gate. <A> You first need to pick a logic family based on your supply voltage of 9v. <S> Here are the maximum recommended supply voltages for various families (the absolute maximums may be a little higher; don't go there):
7400 TTL, 74LS00 TTL, 74HCT00 CMOS (TTL compatible): 5v
74HC00 CMOS: 6v
CD4000B, HEF4000B, MC14000B: 15v
TC4000B: 18v
<S> So this means you will want to go with one of the 4000 series.
<S> For logic chips, a good place to start looking is Wikipedia's list of 4000 series ICs. <S> The equivalent for the 7400 series is here. <S> According to the table, a quad 2-input AND gate is part # 4081, as shown here: From the table of supply voltages, plus the table of gate types, you would be looking at either a HEF4081B or an MC14081B (which both work with supply voltages exceeding your requirements). <S> So go to a site like Digi-Key or Mouser and type in that part number. <S> I find it useful to have all the different types in one table. <S> Parametric searches can be useful too, but Digi-Key's search engine isn't perfect. <S> If you type in "cmos and gate" (without the quotes), which seems like a reasonable search, and then on the next page select Logic Series = 4000B and Logic Type = <S> AND Gate, it will say "0 Remaining". <S> Then what do you do?
I use this approach -- looking in the table of logic gates on Wikipedia first, instead of immediately going to the parametric searches because sometimes I like to just browse and see what parts are available. Knowing only the supply voltage (9V), and not knowing any of the other factors in your circuit, a best guess would be that you should be looking at "4000 series" CMOS gates.
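The family-selection step in the answers above amounts to filtering the supply-voltage table. A sketch (table values follow the answer; the TTL-compatible CMOS family is written here as 74HCT00, its usual designation):

```python
# Filter logic families by maximum recommended supply voltage,
# using the supply-voltage table from the answer above.
families = {
    "7400 TTL": 5, "74LS00 TTL": 5, "74HCT00 CMOS": 5,
    "74HC00 CMOS": 6,
    "CD4000B / HEF4000B / MC14000B": 15,
    "TC4000B": 18,
}

def usable_at(v_supply):
    """Families whose max recommended supply is at least v_supply volts."""
    return [name for name, vmax in families.items() if vmax >= v_supply]

print(usable_at(9))   # only the 4000-series families survive at 9 V
```

Distributor parametric searches do exactly this, just across many more parameters at once.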
Anything special to consider when running a 12v "Bus" throughout my house? I've recently begun playing a lot with tiny programmable computers/controllers (like the Raspberry Pi and the Arduino), and I'm planning to distribute several throughout my house as various sensors. So far as I have been tinkering with them I've been using a wall-wart transformer with a USB plug to provide them with 5v (like the one I use to charge my cellphone every night). The problem with doing this throughout my house is that I do not have an AC plug wherever I would need a device. I've been thinking about running a 12v "bus" throughout my attic, which would allow me to branch off power for devices wherever I need it. This way I would just need a 5v regulator to pair with each of the devices. (The only reason I would run 12v and not straight 5v is because I already have a large 15-amp 12v switching power supply which I could use) Is there anything special I should consider with this solution? I feel much safer running 12v around in my attic than I would running my own mains power lines. Seems like a no-brainer on the surface, but I may be overlooking something. <Q> A few things come to mind: <S> You need to protect the bus from overcurrent. <S> This way you can be sure the maximum current won't get away from you. <S> This leads to: <S> You'll need to use beefy wire. <S> In the USA, 15A circuits are wired with #14AWG, minimum. <S> If you want to use thinner wire, you'll have to fuse each leg appropriately. <S> Although it would work out technically, it would cause major confusion and ambiguity. <S> You don't want anyone expecting 12VDC and getting Mains voltage (now or in the future). <S> At 12V, the current draw can quickly add up. <S> Keep this in mind as you add devices. <S> You may want to swap in a 24VDC power supply in the future. <S> It is a common industrial standard, gives you twice the power over the same wires, and still falls into the "low-voltage" category. 
<S> Adding to Point #4: If you choose local 5V converters that accept a range of input voltages (including 12V and 24V, of course), then you won't have to change anything if you bump up the supply voltage. <A> On the face of it, it should be OK, but the reason we use 120 / 230 V is that the current is so much lower. <S> At 12 V your currents will be 10 or 20 times higher and your cable size will be correspondingly larger to avoid high voltage drops. <S> Your 15 A, 12 V PSU is capable of delivering 15 x 12 = 180 W into a partial short circuit. <S> This is an obvious fire hazard, so good wiring practice is a minimum requirement and, maybe, using a star topology with current limiting on each leg would provide some additional safety. <A> Things to consider. <S> How will you convert from 12V to 5V? <S> Linear regulators will get very hot and waste a lot of power if they are asked to deliver nontrivial current. <S> So you will want to use some kind of switching converter for more power-hungry devices (a Pi is a LOT more power-hungry than a simple microcontroller). <S> What will you do about overcurrent protection? <S> Lower voltages mean lower electric shock risk, but low-voltage high-current supplies can be a fire risk. <A> Any cable must be fused to avoid fire hazard. <S> Consider using 48v. <S> That's the highest voltage considered a 'safe' low voltage by most authorities. <S> It eases the cross-section requirements on cable compared to 12v. <S> As it's found in telephone exchange cabinets and pro-audio desks, switching regulators to get down from that to any other voltage are readily available. <A> It is dangerous and illegal.
How thick will the wires need to be to avoid unacceptable voltage drop (which wastes power, can cause startup problems with switched-mode converters, and can cause ground potential differences, which can be a problem if you have any non-isolated communication links between the devices)? Even if your power supply has its own built-in protection, I would use an additional fuse (or circuit breaker) at the output of the power supply. If you do use #14AWG wire (or your local equivalent), don't use the typical cable used for household AC! Do not run your 12v in the same conduit as any other lines (120, 240 or phone lines).
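To put numbers on the wire-gauge point, here is a rough voltage-drop sketch (the 10 m run, 2 A load, and per-metre copper resistances are illustrative assumptions, not figures from the question):

```python
# Round-trip voltage drop on a low-voltage DC bus run.
# Approximate resistance of solid copper wire, ohms per metre:
AWG_OHMS_PER_M = {14: 0.00827, 18: 0.0210, 22: 0.0529}

def voltage_drop(awg, length_m, current_a):
    """Current flows out and back, so use twice the one-way length."""
    return 2 * length_m * AWG_OHMS_PER_M[awg] * current_a

# A 10 m run feeding a 2 A load (e.g. a Raspberry Pi plus peripherals):
for awg in sorted(AWG_OHMS_PER_M):
    vd = voltage_drop(awg, 10, 2.0)
    print(f"AWG {awg}: {vd:.2f} V drop -> {12 - vd:.2f} V at the load")
```

Thin wire eats a surprising fraction of a 12 V bus; the same absolute drop would be negligible relative to mains voltage, which is why distribution uses high voltage.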
How to connect sensors to PCB in bulk? While it is easy to bulk-produce PCBs with sensors attached directly on the PCB, how do you bulk-produce PCBs that are attached with a length of wire (up to 1.5m) to sensors? One solution would be to have PCB headers on the PCB and the wire, but I cannot find anywhere that sells wire in this form. We could build it ourselves, but when scaling to any significant number this becomes infeasible. You can see how we solved the problem in our prototype in the photo below. How would you solve this problem? <Q> The answer more-or-less hinges upon what a "significant number" of devices is. <S> This is something to work out before getting the boards made because it will likely drive your layout and part selection based on the manufacturer's or vendor's capabilities. <S> The sensor you mentioned, the DS18B20, is a discrete part and I would be surprised if Maxim were willing to sell them pre-attached to a harness/connector, but it's not something I've tried with them previously. <S> That said, making a simple interface PCB that is not much larger than the size of one of your sensors plus the connector of interest would be quite easy and, at the scale you're talking about, not much more expensive. <S> If the sensor is something you've designed yourself, or the quantity of boards to be manufactured is too low for the manufacturer/vendor to assemble a custom harness (or they just can't/won't), then your best bet is sticking to a connector schema that's already readily available in the required length and conductor number/thickness, such as CAT-5 cables, EIA-485, etc., so that you don't have to assemble them yourself. <S> Interfacing this cable to your sensor module is probably going to increase part cost and complexity, but if you don't have the manufacturing infrastructure to assemble the boards as you've shown in your prototype, then it may be the best and only option.
<S> The sensor you're using is a common one among hobbyists and as such, there are some good examples of cable interfaces to be found, such as this one . <S> The author of that page cites standard 4-wire flat telephone cable and shielded 2-wire microphone cable as potential options (in addition to CAT-5). <S> Personally, I would avoid the microphone cable because although the shield is nice for signal integrity, microphone cables typically come with bulky pre-attached connectors because of their intended application. <S> If none of these are sufficient and you decide you want a custom cable for your application, then st2000 does bring up the option of working with a custom cable manufacturer, of which a quick google search turns up quite a few. <A> CAT-5/CAT-6 Ethernet cables and headers are easily found; you might consider using them if they are sufficient. <A> Some sensor OEMs support their products by providing cables. <S> But often this work is farmed out to cable houses which produce custom-length cables with custom connectors for the specific product. <A> Often the same assembly house that you would use for your boards will have the equipment to produce a similar quantity of cables. <S> If not, there are specialist harness and cable makers who can assemble cables including connectors, shielding, strain reliefs and even provide thermoplastic elastomer overmolding using vertical injection machines. <S> 1,000 is a reasonable quantity for outsourcing, but you can't expect to get a really low price and good quality at that relatively low quantity, so you may have to pick one of the two (quality or price).
If the sensors you're using are commercially-available and you truly are making a significant number of devices, then the sensor manufacturer or vendor is likely willing to discuss selling you the sensor modules with pre-attached wiring harnesses and connectors to your specification (likely at an increased cost.)
Variation of voltage gain with frequency For a transistor amplifier, the voltage gain (a) remains constant for all frequencies. (b) is high at high and low frequencies and constant in the middle frequency range. (c) is low at high and low frequencies and constant at mid frequencies. (d) None of the above. My attempt: probably naive, but since there is more change in voltage at higher frequency it should be option (b); however, this is wrong and the correct one is ____. (I know the answer but not the reason.) Any explanation at the level of high school (12th class) would be helpful. Edit: after comments, the only amplifier circuit diagram discussed is given below <Q> Depends on the schematic; for just a transistor it lowers with frequency, but in a typical amplifier it is (c). <S> At lower frequencies, if coupled with a capacitor, the gain is low because of the reactance of that capacitor. <S> At high frequencies it is lower because of the transistor's gain. <S> If it is DC coupled, then it has low gain only at high frequencies. <S> http://elearning.vtu.ac.in/P9/notes/06ES32/Unit6-MSS.pdf <S> EDIT: <S> After your update, it is definitely answer (c). <S> At lower frequencies you have losses in the capacitor before Rb (at frequency 0, it has infinite reactance). <S> Look at the input section: it consists of a capacitor, a resistor and the base-emitter diode (you can think of it as a resistor in this case). <S> The resistance (reactance) of the capacitor is $$ X_C = \frac{1}{2\pi f C}$$ so the reactance of the capacitor lowers as frequency rises. <S> At low frequencies it is very high, so the input signal is divided between it, the resistor and the base junction, and only a small part of the input signal is "seen" at the base junction, which is then amplified. <S> Because of that you will have a lower output signal at low frequencies. <S> At higher frequencies the input capacitor's influence is negligible, so the circuit amplification depends only on the transistor, which has lower gain at high frequencies.
<S> In the middle, the reactance of the capacitor is small and the gain of the transistor is high, so your gain would be high. <S> EDIT2: <S> You also have a capacitor on the output which will also lower the gain at low frequencies (similar to the input capacitor, because of its high resistance (reactance)). <A> The answer is highly dependent on what exactly is meant by "transistor amplifier". <S> The question is either ambiguous, or it assumes context from something that was discussed in class that we don't know about. <S> Transistor amplifiers can be designed for all kinds of frequency responses. <S> That said, all transistors stop working like transistors above some frequency. <S> Therefore, the gain of a transistor amplifier will go down (assuming it was above 1 in the first place) at high frequencies. <S> However, this is probably not what your teacher is looking for. <S> Again, this is either a bad question or it assumes context within your particular class. <A> Me too, in class 12. <S> So, I would like to answer it. <S> At lower frequencies: As you know, the formula for capacitive reactance (from unit 4) is $$\mathrm{X_C = \frac{1}{2\pi f C}}$$ $$\mathrm{X_C \propto \frac{1}{f}}$$ <S> As the frequency (of the input voltage signal) gets lower, the capacitor in the input circuit offers a higher reactance. <S> That is, there will be some drop of (AC) voltage across the capacitor. <S> We know from the circuit diagram and from our knowledge of the transistor amplifier that $$\mathrm{V_{BB} + v_i(input) = I_B R_B + V_{BE} + \Delta I_B (R_B + r_i(input \ resistance))}$$ <S> As I said, there will be a voltage drop across the input capacitor, so the input voltage would decrease.
<S> As a result, the following quantities decrease: $$\mathrm{(V_{BB}+v_i), \ I_B \ (base \ current), \ I_C \ (collector \ current), \ since \ I_C \propto I_B \ (I_C = \beta I_B) \ in \ the \ active \ state \ of \ the \ transistor.}$$ <S> Hence, $$\mathrm{\Delta V_{CE} = V_{o} \ (output \ voltage) \ decreases.}$$ <S> As for the same input voltage we are getting less output voltage, the voltage gain decreases. <S> See the following figure: <S> As we all know, the voltage gain is $$\mathrm{A_v = \frac{-\beta_{ac} \ R_L}{r}; \quad r = r_i + R_B; \quad R_L = output \ (load) \ resistance}$$ <S> Thus, if the beta value decreases, the voltage gain decreases. <S> So, overall, we will get a graph like this (voltage gain versus frequency):
At higher frequencies: At higher frequencies, the current amplification factor of the transistor decreases (it is the nature of the transistor to do so at higher frequencies). Its performance and capability decrease as the frequency increases.
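The low-frequency behaviour described in the answers can be sketched numerically by treating the coupling capacitor and the amplifier's input resistance as a high-pass divider (the 10 µF and 1 kΩ values below are assumptions for illustration, not values from the circuit in the question):

```python
import math

def coupling_gain(f_hz, c_farads, r_ohms):
    """Fraction of the input signal that survives the coupling-capacitor
    divider: |R / (R + 1/(j*2*pi*f*C))|."""
    xc = 1 / (2 * math.pi * f_hz * c_farads)
    return r_ohms / math.hypot(r_ohms, xc)

C, R = 10e-6, 1e3  # assumed coupling cap and input resistance
for f in (1, 10, 100, 1000):
    print(f"{f:>5} Hz: {coupling_gain(f, C, R):.3f} of the signal reaches the base")
```

The fraction climbs toward 1 as frequency rises, which gives the flat mid-band; the separate high-frequency fall comes from the transistor's own beta roll-off, not from this divider.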
Arduino powering an external circuit with higher voltage First of all, I'm sorry for the newbie question. I'm a software engineer and only a hobbyist in electronics. I'm trying to power up an external circuit using an Arduino digital pin. The external circuit uses a 12V power source and drains about 4mA. I have successfully done this using a transistor. According to the specs , the Arduino digital pins can supply up to 20mA with no problem. Therefore, I was wondering if it is possible to power up this circuit directly from the pin, without using the transistor, nor the 12V external source. The questions are: 1) Is it possible? How can I accomplish this? 2) What about the higher voltage? I think the circuit is designed to work with 12V and if I supply a lower voltage it should drain less current (according to Ohm's law) and the current wouldn't be enough. Does it make sense? 3) If (2) makes sense, is there any trick to achieve this needed higher voltage from the pin? <Q> You are seeing this the wrong way. <S> 1) If a device is rated at 12V (say a relay), you can use it over a range of voltages defined by the manufacturer; if you refer to the datasheet of the product, it will tell you the minimum voltage (say 9V), typical voltage (say 12V) and maximum voltage (say 14V) at which the device should be operated. <S> 2) So even if you connect a 12V device to a 5V battery rated at 10A, it simply won't power up. <S> 3) Ohm's law V=IR is only applicable to simple linear circuits; if you observe it carefully there is an R, so if you supply a device a lower voltage it won't necessarily drain less current, as that is purely decided by R. <S> The higher the R, the less current it draws; the lower the R, the more current it draws, in general cases. <S> 4) You cannot get a higher voltage from an Arduino digital I/O pin unless you use a step-up (boost) converter, and there is always a trade-off in the current available. Hope this helps...
<A> Powering a device directly from an Arduino digital pin is generally a bad idea. <S> Even if the device is rated for 4mA, it may have some power consumption peaks during which it draws enough to damage the Arduino pin. <S> Power the converter from the Arduino 5V pin and you will be fine. <A> Basically, no. <S> It can be legitimate to power devices directly from a processor pin, and I have done this. <S> However, the device power voltage needs to be the processor output voltage or less for this to make sense. <S> If the processor output is 5 V, for example, then you can power a low-current device from a pin if that device can use 5 V power. <S> If it uses 3.3 V, for example, then you can have the processor pin power a linear regulator, which then powers the device. <S> While it is theoretically possible to step up the voltage from a processor pin, this isn't really practical. <S> You would have very little power available at the higher voltage. <S> If you are going to go thru all this trouble, it would be better to have the processor output enable a switching power supply that makes the higher voltage from the same power that is running the processor, not coming thru the processor out the pin. <S> In any case, the first thing you have to determine is how much power the device requires. <S> You say the processor output can deliver 20 mA, which is probably not at the full processor voltage. <S> Let's say that's guaranteed to be at least 4 V at 20 mA. <S> That means the maximum power out of the pin is (4 V)(20 mA) = 80 mW. <S> No matter how you convert that, you can't make more power. <S> Let's say you did convert this to 12 V, and that converter is 60% efficient. <S> That means only 48 mW are available at 12 V. <S> The current capability of the 12 V supply would then be (48 mW)/(12 V) = 4 mA.
The correct solution would be to use boost DC/DC converter, preferably with some enable input to turn it on/off with the Arduino digital pin.
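The answer's power-budget arithmetic fits in a few lines (a sketch; the 60% efficiency figure is the answer's own assumption):

```python
def boosted_current_ma(v_pin, i_pin_ma, v_out, efficiency):
    """Best-case load current after boosting the pin's output power."""
    p_in_mw = v_pin * i_pin_ma       # power available at the pin
    p_out_mw = p_in_mw * efficiency  # what survives the converter
    return p_out_mw / v_out          # current available at the new voltage

# 4 V at 20 mA boosted to 12 V at 60 % efficiency:
print(round(boosted_current_ma(4.0, 20.0, 12.0, 0.6), 3))  # -> 4.0 (mA)
```

Power in bounds power out, so boosting a pin buys you voltage only at the cost of current.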
Controlling 5V Pushbutton switch fails with Triac I have an external consumer product that I am trying to automate by connecting my Arduino to its push buttons. This device uses 5V internally; I can get this voltage on the switch buttons as well. My goal would be to automate this without relays, to keep my circuit small and quiet. I have tried 2 methods so far: http://www.instructables.com/id/Small-Triac-Switch/ -Connecting the buttons to the triac -Connecting the buttons to the MOC directly The results were the same. The first time, the switch works (device turns on) but any other attempts are ignored. I would like to understand why, and how to modify my circuit to work, possibly with the components I have on hand (TIC206 or MOC3041). I had the triac in mind for high-power switching because I didn't know what the device uses internally until I disassembled it, but I would say the MOC should be good enough for switching 5V. If I use a relay or just short the wires by hand, that works 100%. <Q> You mention "TRIAC". <S> A TRIAC is a double thyristor, and a thyristor is a switching transistor-type device that you can switch ON by providing a current through its gate, <S> but you can only switch it OFF if you interrupt the current that it is switching! <S> I think you are switching a DC signal (the 5 V you measure), so even if you turn the gate signal off (like you do) it will stay ON no matter what. <S> TRIAC switches are for switching AC currents, not DC! <S> I suggest you get an optocoupler with a phototransistor, not one with a TRIAC / thyristor! <A> A Triac with Zero-Cross detection works by turning an AC signal on or off at the Zero Crossing point. <S> A DC signal does not have a Zero-Cross point. <S> You can use a dc optocoupler (4N35 is commonly used), or if you don't care or need isolation, a simple transistor. <S> simulate this circuit – <S> Schematic created using CircuitLab R1 is a simple pull-down to prevent a floating base from turning the transistor on.
<A> The opto-triacs are intended for switching alternating current and are a poor choice for this application. <S> The two most likely switching arrangements of your "external consumer product" are shown in Figures 1 and 2. <S> simulate this circuit – <S> Schematic created using CircuitLab Figures 1, 2 and 3. <S> A transistor-type opto-isolator will work for either pull-up or pull-down circuit. <S> Connect the opto-transistor collector to the positive side of the button. <S> Connect the opto-transistor emitter to the negative side of the button. <S> Connect the opto-LED as shown to your Arduino.
You need to use a transistor type opto-isolator.
What's the purpose of traces that are later punched out? I've found this odd feature on an FPC in a camera that handles button and switch input. You can see there are traces that look like they were once connected and later punched out to be cut. The one on the left once shorted out a button, which would have the same effect as if the button was pressed all the time. Here you can see the other components involved. The button on the right is a two-stage button; the other trace leads to the anode (yes, anode, not cathode) of an LED. I've never seen something like that, so what were those traces used for before they were punched out? Are those commonly used for testing parts of a circuit? Could they be a manufacturing or layout error that had to be fixed later on? <Q> It happens quite often that an FPC features special traces and tricks to be able to test it through its own edge contacts, as pogo-probing a flex PCB is a nightmare in and of itself. <S> I have only seen the drilled/lasered/punched-out-holes tactic twice before; more commonly they use special traces that are then discarded in the end use. <S> Causing all sorts of other confusion. <S> That said, of course for a high-end camera they'd set up the whole shebang to just be able to reliably probe an entire FPC panel in the normal way, since it'd be worth the initial cost. <S> EDIT: <S> This is even more evident, by the way (forgot to mention initially), by the fact that all these connections happen at the "trace end". <S> The point where the switch trace vias away. <S> Now, that may be a coincidence at first sight, but if you look at the ground trace going down, the trace next to it could have been connected elsewhere as well. <S> This "design effort" to loop all the way round is maybe partly to save on the number of holes, but most likely also a guarantee that the entire switch trace can be tested.
<S> Further I think, looking at it more closely, it was to test the flex before the rigid parts were laminated onto it, since you can see the holes are "stoppered off" by the rigid parts behind them. <A> Another possibility is to disable features when assembling a "lesser" model. <S> So you can use the same PC or flex circuit for a range of models and decide which options to implement during manufacture. <A> Two reasons come to mind: <S> Pre-assembly testing <S> (but I can't think of any way this would be useful). <S> Tie-down for optional switches. <S> All the breaks connect the relevant traces to GND. <S> Normally a pull-up would eliminate the risk of leaving a CMOS input floating <S> so maybe the switch status is read at power-on reset and any that are low are considered to be omitted? <S> i.e. It allows a common firmware to detect which hardware it's running on.
So this is simply to test the fabrication of the FPC itself, before it is put through the expensive process of trying to put relatively large switches onto a flexible substrate. If the switch is added the link can be punched out. Actually, if you look closely you see that many of these holes connect switch channels with their common signal (ground or supply).
effect, if any, of input voltage on RPM of DC motor I bought a home coffee roaster which is designed to sit on a gas cooktop. It has a small geared motor which turns the drum, which sits above the flame. The target rotational speed of the shaft turning the drum should be about 60RPM. The motor has these markings: output: 6W voltage: 24V speed: 2950RPM current: 0.5A The (overseas) seller shipped a 220V AC 12V DC 850mA adapter but I have 120V AC. So I'm looking to replace the adapter with one that can take 120V input and supply the needed amps. But does input voltage have an effect on RPM? I'm wondering why the seller provided a 12V adapter when the motor says 24V. Is the seller maybe relying upon the 12V DC adapter to reduce the RPM? Should I be searching for a 12V DC (as supplied by the seller) or a 24V DC adapter? <Q> But does input voltage have an effect on RPM? <S> Yes. <S> I'm wondering why the seller provided a 12V adapter when the motor says 24V. Is the seller maybe relying upon the 12V DC adapter to reduce the RPM? <S> Possibly to reduce the RPM, but more likely to limit the current should the grinder stall. <S> The motor is rated at 6 W. Power, voltage and current are related by \$ P = VI \$ (where I is current). <S> At 24 V and 6 W we can calculate the maximum continuous current: \$ I = \frac {P}{V} = \frac {6}{24} = 0.25~A \$. <S> Current will increase with increasing load, reaching a maximum at stall. <S> You could estimate what the stall current will be by measuring the motor coil resistance and calculating the stall current from Ohm's Law, \$ V = IR \$, as follows: \$ I_{STALL} = \frac {V}{R} = \frac {24}{R} \$. <S> I suspect that you'll find that it's more than 0.25 A. By reducing the voltage to 12 V the stall current will be reduced by half, but since both V and I have been halved the power dissipated during a stall will be one quarter of the 24 V value: \$ I_{STALL} = \frac {V}{R} = \frac {12}{R} \$ in this case.
<S> Should I be searching for a 12V DC (as supplied by the seller) or a 24V DC adapter? <S> Does it turn the coffee? <S> Try it out with a 12 V, 0.5 or 1 A wall-wart PSU and see if you can make some measurements. <S> Report back! <A> The answer to your question is: YES. <S> A lower voltage will make a standard two-wire, run-of-the-mill DC motor turn slower. <S> But even more simply: if you buy something with an adapter supplied which outputs 12V, then any replacement should output 12V for it to function properly. <S> Aside from that, a supply that only takes 220VAC is very old-fashioned and you probably want to replace it with a nice, lightweight, hopefully more energy-efficient switching adapter anyway. <S> But that, of course, is up to the user and the use case. <A> You or a friend is likely to have a shoe-box full of adapters that will provide 12 volts DC, from broken or worn-out gadgets. <S> I get a lot of them from WiFi routers. <S> The shape of the connectors is somewhat standardized too. <S> You need an adapter that outputs at least 850mA, and make sure the plug fits and the polarity is the same; usually the tip is + and the shell is -. <S> A simple motor is unlikely to fry itself if polarity is reversed momentarily, but it'll shorten the motor life.
A DC motor speed will be proportional to voltage (for a given load resistance).
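The stall arithmetic above is easy to check numerically; the 24 Ω coil resistance below is an assumed measurement for illustration, not a value from the motor's label:

```python
def stall_current(v_supply, r_coil):
    """At stall there is no back-EMF, so Ohm's law sets the current."""
    return v_supply / r_coil

def stall_power(v_supply, r_coil):
    """Power dissipated in the stalled winding."""
    return v_supply ** 2 / r_coil

R = 24.0  # ohms, assumed coil resistance
print(stall_current(24, R), stall_power(24, R))  # 1.0 A and 24.0 W at 24 V
print(stall_current(12, R), stall_power(12, R))  # 0.5 A and 6.0 W at 12 V
```

Halving the voltage halves the stall current but quarters the stall power, which supports the suggestion that the 12 V adapter is a protective choice.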
Given only a gerber file how do I automatically find out the number of pads that are there on the PCB I am building a website right now, and am trying to automatically calculate the number of PCB pads there are given only a gerber file. One way is to manually review this. Is there any other way to determine how many pads a gerber file has? From a BOM one can easily find it. However, from the gerber is there any way to find it? <Q> A Gerber artwork is mostly rendered by three "D-code" commands. <S> D01 means move with exposure on. <S> D02 means move with exposure off. <S> D03 means flash. <S> Open a Gerber file with a text editor and you can see the D-code commands at the end of the lines with the coordinates in front. <S> Typically, almost all the pads should be flashes on the soldermask layers. <S> So counting the flashes on the soldermask layers would give you an estimate. <S> But I don't think there is a sure way of knowing what is a pad from the Gerber file alone. <A> Pretty much all pads are not covered by the soldermask - leaving the pad exposed for soldering. <S> There is generally one soldermask layer (top) for single-layer boards and two soldermask layers (top & bottom) for double-sided boards. <S> The gerber file(s) for these layers indicate the regions where soldermask should not be . <S> So count the number of regions there are in the top & bottom soldermask layers. <S> This would give you the number of SMD pads + through-hole pads (+ untented vias ). <S> If you are only interested in SMD pads, you could subtract any region that has a hole within it. <S> Untented vias may inflate the number of pads you detect. <S> You can rectify this by ignoring regions below a certain size, e.g. 60 mil^2. <A> I don't know if GC_Preview has a batch mode or API so it can be done automatically. <S> Have a look at their website: GC_Preview
A gerber viewer such as GC_Preview can tell you how many pads there are on each layer.
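Following the flash-counting idea, here is a minimal sketch (a hypothetical helper, not a full RS-274X parser: it naively counts D03 operations in the layer text and ignores step-and-repeat blocks, aperture macros, and regions):

```python
import re

def count_flashes(gerber_text):
    """Count D03 (flash) operations in an RS-274X layer; since pads are
    usually flashed, this approximates the pad count on that layer."""
    # An operation code ends a coordinate block, e.g. "X1000Y2000D03*".
    return len(re.findall(r"D0?3\*", gerber_text))

sample = "X100Y200D02*\nX100Y300D01*\nX500Y500D03*\nX600Y500D03*\n"
print(count_flashes(sample))  # -> 2
```

Run it on the soldermask layers, as the answer suggests, to exclude covered copper; drawn (D01) pads and untented vias will still skew the estimate.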
Why does this optocoupler circuit work without getting destroyed? This circuit is being used for mains detection. Input is being fed directly from 220V 50Hz mains and output goes to an Arduino which is running on 3.3V. Theoretically the optocoupler LED should burn out during reverse polarity of 220VAC in the circuit given below: Here is the voltage graph which appears across the LED of the opto-coupler: It shows a peak reverse voltage of 50V only instead of the expected 220V (at least that's what I expected). However 50V alone should be able to destroy the LED. I have used this circuit in a project and it has been working perfectly for about 6-7 months. Why does this circuit work? Here are the Absolute max ratings from the datasheet: And these are characteristics for the device: <Q> You may have misunderstood reverse current, see http://www.renesas.eu/products/opto/technology/standard_p/index.jsp <S> The LED is a diode, so it is not intended to conduct in the reverse direction. <S> However, if you still force a high enough reverse voltage onto its pins, this very small reverse current does flow. <S> Scope (with a proper isolation transformer) the voltage on the LED. <S> An LED - just like any other diode - does have a reverse breakdown voltage. <S> This is the Vr in the datasheet. <S> In reverse breakdown you can imagine the LED as a Zener, so once more than 4V is applied in the reverse direction, current will flow. <S> Refer to this picture: http://reviseomatic.org/help/e-diodes/Led-graph.gif <S> You can read more at wiki: <S> https://en.wikipedia.org/wiki/LED_circuit <S> If you drive the LED in reverse, performance of the optocoupler degrades over time, see http://www.renesas.eu/products/opto/technology/standard_p/index.jsp Vr. <S> Moreover, as this is a zero-crossing circuit, you can consider using a rectifier bridge and then connecting the LED to the output of the rectifier. <S> This results in very clean zero-crossing spikes in both half-waves. <A> There is a parameter called integration time.
<S> With certain limitations, absolute maximum ratings given in technical data sheets may be exceeded for a short time. <S> The mean value of current or voltage is decisive over a specified time interval termed integration time. <S> These mean values over the time interval, Ti, should not exceed the absolute maximum ratings. <S> This might be a hint if we also include dynamic reverse resistance and junction capacitance. <A> What is happening is that when the LED is reverse biased, there is "enough" reverse current (about 1 mA) to drop most of the voltage (200 V) across the 200k resistor; this leaves about 20 V (50 V p-p) across the LED. <S> Also, even though the maximum reverse voltage allowed (without damaging the LED) is 6 V, in reality it has to be higher in order to guarantee the lower value. <S> Even though your particular LED is "working," you or your company will be open to lawsuits. <S> When the LED fails, it will fail in the mode that says "there is no mains voltage," but this is not true. <S> Then when you want to transfer your liability to the LED manufacturer, it will not work, because you operated it outside its maximum limits.
Therefore it is a good idea to add a standard diode either in series (so no reverse current can flow), or to the led of the optocoupler in the reverse direction (so it shunts the reverse voltage).
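As a back-of-envelope check of the mechanism described above (the ~25 V clamp voltage is an assumption read off the 50 V peak-to-peak scope trace; 311 V is the peak of 220 Vrms):

```python
def reverse_current_ma(v_peak, v_breakdown, r_series):
    """Once the LED avalanches, the series resistor takes the remaining
    voltage and sets the reverse current."""
    return (v_peak - v_breakdown) / r_series * 1000

# 311 V mains peak, LED clamping around 25 V, 200 kohm series resistor:
print(round(reverse_current_ma(311, 25, 200e3), 2))  # -> 1.43 (mA)
```

That milliamp-scale reverse current is why the LED survives for months even though it operates far beyond its 6 V absolute-maximum reverse rating; the series or anti-parallel diode removes the abuse entirely.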
What type of relay should I be using? So I'm currently looking to control a lightbulb for some inefficient lighting to produce heat for a mini-greenhouse I am building. I'm looking at plugging in a wall-wart 12V DC adapter and wiring that to the bulb. In terms of relays/solid state relays what should I look into buying? 12V - 5V DC relay? <Q> The relay selection isn't really mission critical. <S> You rarely if ever directly drive them from a microcontroller, like your Arduinos. <S> Most of the time, a simple NPN transistor is used, unless you have an n-channel MOSFET you want to use (and it has the right VGS value). <S> The choice of relay for these one-off projects is basically cost, and what voltage you have available. <S> Here, you have 12 Volts from the light supply, and 5 Volts from the regulated Arduino rail. <S> Most likely, you're powering the Arduino from the 12v supply. <S> A 12v relay would avoid taxing the limited regulator on the Arduino, and can easily be driven by a transistor on the Arduino output. <S> The relay choice does depend on the light current you expect to drive, and how often you intend to switch it on or off. <S> If you're not thinking of PWM to reduce the lighting, and are planning minutes or hours on, minutes or hours off, then any common relay would work. <S> Once you're switching it off and on in seconds or fractions of a second, then a mechanical relay is no longer an option. <A> So? <S> 12V - 5V DC relay? <S> You buy a relay with a coil voltage to suit your controller and a contact rating to suit the current of your lamp, which you haven't specified. <S> ... <S> a lightbulb for some efficient lighting to produce heat for a mini-greenhouse ... <S> Efficient lighting means "converts a high proportion of the input energy into light". <S> If you want heat you are looking for inefficient lighting to convert input energy into heat.
<S> Don't make the mistake of thinking you can get more energy out of a lamp in the form of light and heat than you put in (in the form of electrical energy) just by buying something labeled as "efficient". <A> You can easily use a common transistor switching circuit as thousands of people have used with their Arduinos, et.al. <S> simulate this circuit – <S> Schematic created using CircuitLab
The transistor would be determined by the current needed to drive the relay, but pretty much any relay with a coil current of under 1 Amp can be driven by a standard 2N2222. Heck, a typical small relay only needs under 200 mA coil current, so a smaller 2N3904 works too. Since isolation likely isn't needed here (standalone project, no mains switching, no computer connection), then a 12v relay is fine.
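The base-resistor sizing implied above can be sketched as follows (the 70 mA coil current, minimum hFE of 100, and 5x overdrive factor are illustrative assumptions, not values from the question):

```python
def base_resistor(v_pin, i_coil_ma, hfe_min=100, overdrive=5, v_be=0.7):
    """Pick a base resistor that overdrives the base ~5x so the
    transistor saturates hard while switching the relay coil."""
    i_base_ma = i_coil_ma / hfe_min * overdrive
    return (v_pin - v_be) / (i_base_ma / 1000)

r = base_resistor(5.0, 70)
print(f"use at most {r:.0f} ohms, e.g. the next standard value down (1.2 k)")
```

Any general-purpose NPN with enough collector-current headroom works here; the flyback diode across the coil, as in the schematic above, is still mandatory.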
Efficiency comparison between six step and FOC for a given motor For a given PMSM motor with sinusoidal back-EMF, it can be driven by either six-step (as in drone applications) or by FOC (as in servo control). However, for a given motor and given DC link voltage, how to compare the system efficiency difference between six-step control and FOC control? The reason to ask this question is that drone development is getting popular and majority of ESC solutions in the market are using sensorless six-step control. However, now people began to talk about sensorless FOC for drone ESC. So I would like to understand their difference in terms of efficiency because for drone, IMHO, flying longer time and longer distance is more important than acoustic noise. <Q> Field Oriented Control (also known as Vector control) of BLDC motors can improve low speed torque and reduce torque ripple at speed. <S> Normally it uses sine wave drive. <S> This works best when matched to motor with sine wave back-emf. <S> However most motors used in multirotor drones are designed for 6 step trapezoid drive. <S> The following traces show the different back-emfs of a 2 pole coreless ironless BLDC motor and a multipole iron cored motor. <S> The second example might actually perform worse with sine wave drive. <S> So to get the best out of FOC drive it needs to 'tuned' to the motor. <S> This might explain why the only two FOC ESCs I could find for sale <S> ( DJI 1240S <S> /X and EMAX WindTalker ) are both intended to be used only with a specific motor. <S> The vast majority of ESCs used in multirotors don't even use synchronous rectification, so their part-throttle efficiency is already ~5% worse than it could be. <S> Also many of them are not using the latest most powerful processors and high speed drivers, so their switching losses are higher at high frequency (this is more important when using FOC drive, because to produce an accurate waveform requires high frequency PWM). 
The only way to find out what real improvement FOC drive could make is to try it. For a fair comparison you would have to use the same ESC and change only the commutation technique. <A> From Hobbywing's website: "The FOC solution greatly improves the battery's sustainability, increases the flight time and effectively protects the motor and battery and prolongs their service lives via current control." <A> "For a given PMSM motor with sinusoidal back-EMF, it can be driven by either six-step (as in drone applications) or by FOC (as in servo control)" That is true, but as you point out, the question is which is more efficient. Taking the motor on its own, efficiency is power out over power in, where power out = \$T\omega\$ and power in = \$VA\cos\Phi\$. There will still be inefficiencies due to copper losses, iron losses, bearing losses, etc. A machine with a sinusoidal back-EMF profile needs sinusoidal phase currents aligned with the q-axis of the motor. Anything that disrupts this reduces efficiency: harmonics in the current waveform (a quasi-squarewave is rich in harmonics), and angular misalignment (an increase in \$\Phi\$ increases the amount of d-axis current). So, immediately, exciting the stator via six-step is not the most efficient method. However, you must view the entire system as a whole. Six-step is easy: zonal firing and a simple current regulator. To deploy sinusoidal excitation you need a means of measuring the current (either just the DC-link current for reconstruction, or the three phase currents for full visibility). If you want to go the route of FOC (and note that you do not have to with sinusoidal drives; it just makes the control laws simpler), you need a microcontroller. Likewise, once you have gone to the effort of implementing FOC, you might as well implement an SVM block to maximise the utilisation of your DC-link voltage (sinusoidal PWM reaches a line-to-line fundamental of only about 87% of the DC link; SVM reaches 100%).
All of this adds complexity, time and, more importantly, weight and volume, which come at a premium in consumer drones. In the field of larger UAVs the choice swings the other way, because the increase in complexity is outweighed by the gain in power efficiency.
If the motor is supplied with a current waveform that exactly matches the shape of its airgap flux and is in phase with it, maximum efficiency can be realised.
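The copper-loss penalty of six-step harmonics discussed above can be quantified. This sketch (my own illustration, not from the answers) compares the total mean-square of a 120°-conduction quasi-squarewave phase current with the part carried by its fundamental, which is the only component producing average torque in a machine with sinusoidal back-EMF:

```python
# Hedged numerical sketch: fraction of six-step copper loss that actually
# produces torque in a machine with sinusoidal back-EMF. Only the fundamental
# of the phase current does useful work; harmonic current just heats the
# windings. Iron and switching losses are not modelled.
import math

N = 100000
theta = [2 * math.pi * k / N for k in range(N)]

def six_step(t):
    """120-degree-conduction quasi-squarewave phase current, amplitude 1."""
    t %= 2 * math.pi
    if math.pi / 6 < t < 5 * math.pi / 6:
        return 1.0
    if 7 * math.pi / 6 < t < 11 * math.pi / 6:
        return -1.0
    return 0.0

wave = [six_step(t) for t in theta]
i_rms_sq = sum(x * x for x in wave) / N        # total copper loss ~ I_rms^2
b1 = (2 / N) * sum(x * math.sin(t) for x, t in zip(wave, theta))
i1_rms_sq = b1 * b1 / 2                        # fundamental (torque-producing) part

print(f"fundamental amplitude: {b1:.4f} (theory: {2 * math.sqrt(3) / math.pi:.4f})")
print(f"useful fraction of copper loss: {i1_rms_sq / i_rms_sq:.3f}")  # ~0.91
```

About 91% of the copper loss is "useful" in this idealised case, so roughly 9% goes into torque-free harmonics; that is before iron and switching losses, which shift the comparison further depending on the drive.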
Why is the capacitor placed in parallel for power factor correction? I understand that to correct the power factor you choose a capacitance such that the reactive power of the load is cancelled by the reactive power of the capacitor, using this method. Circuit with power factor corrected: Why does one place the capacitor in parallel (as opposed to in series)? Thanks in advance <Q> Without the capacitor, the source has to provide all the energy (St): the actual energy consumed by the load (Pt) and the energy stored in the inductive part of the load (Qt). The inductive part makes the source supply much more current than necessary, since a lot of that current goes into setting up a magnetic field that stores some of the energy generated by the source. With the capacitor, the apparent power S (and thus energy) drawn from the source is reduced and is much closer to the true power P actually being used by the load. <A> Current can only flow in a closed loop, so a series capacitor cannot keep reactive current from flowing through the distribution grid, which is exactly what power factor correction seeks to prevent, the goal being to avoid the resistive losses of that current travelling long distances through practical conductors. Basically, the only way a series compensating capacitor could affect the power factor would be to tune out the machine's ability to draw power at line frequency at all, which would make it non-operational. In contrast, a parallel-connected capacitor of appropriate size keeps the reactive current local, confined to short, low-loss wiring runs.
<A> Let \$R_C\$ and \$C\$ be the resistance and capacitance of our capacitor, respectively, and \$R_L\$ and \$L\$ be the resistance and inductance of our load, respectively. Then \$Z_C = R_C + \frac{1}{j\omega C}\$ is the impedance of our capacitor and \$Z_L = R_L + j\omega L\$ is the impedance of our load. As you know, the purpose of power factor correction is simply to decrease our apparent power usage, making it (ideally) equal to our real power usage. The problem with series capacitors: with a series capacitor, the voltage seen by our load would become $$V_L = V_S\left|\frac{Z_L}{Z_C + Z_L}\right|$$ But, to prevent overvoltage/undervoltage problems (among other things!), we must ensure that $$V_L = V_S \Longrightarrow \left|Z_C+Z_L\right| = \left|Z_L\right|$$ which thereby defeats the whole point of power factor correction! That is, since our total impedance stays the same as before, we still draw exactly the same apparent power as before. So we win absolutely nothing with this approach. The benefit of parallel capacitors: with a parallel capacitor, our load always sees the full voltage \$V_{S}\$ anyway. So, to correct the power factor, an ideal parallel capacitor will simply make $$\operatorname{Im}\left(\frac{Z_CZ_L}{Z_C+Z_L}\right) = 0 \Longrightarrow C=\frac{L}{\left|Z_L\right|^2}$$ for a new total impedance of $$\left|\frac{Z_CZ_L}{Z_C + Z_L}\right| = \frac{\left|Z_L\right|^2}{R_L} > \left|Z_L\right|$$ which means we draw less apparent power than before, satisfying the objective of power factor correction! But what about real capacitors? Even though all real capacitors have some \$R_C > 0\$, the above calculations should still be valid as long as \$R_C \ll \left|Z_L\right|\$. But even with high values of \$R_C\$, there is still value in doing power factor correction!
The only difference is that we are no longer seeking a power factor of 1 ("unity"), since we now also have to account for the real power dissipated in the "real" capacitor itself. So, for a capacitor with a large \$R_C>0\$, we would make $$C_{real} = \frac{\left|Z_L\right|\sqrt{\left|Z_L\right|^2+4R_C\left(R_C+R_L\right)}-\left|Z_L\right|^2-2R_CR_L}{2\omega^2R_C^2L}$$ The resulting total impedance is still greater than our original \$\left|Z_L\right|\$, so our apparent power usage still gets reduced! Furthermore, note that $$\lim\limits_{R_C\to 0} C_{real} = C_{ideal} = \frac{L}{\left|Z_L\right|^2}$$ which is the same value as we calculated for an ideal capacitor before.
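As a numerical check of the ideal-capacitor result \$C = L/\left|Z_L\right|^2\$, this sketch (component values are illustrative assumptions, not from the answer) verifies that the corrected impedance is purely resistive and larger in magnitude than \$\left|Z_L\right|\$:

```python
# Hedged numerical check of the ideal parallel-correction value C = L/|Z_L|^2
# for an inductive load R_L + jwL. Example component values are illustrative.
import math
import cmath

f = 50.0                       # line frequency, Hz
w = 2 * math.pi * f
R_L, L = 10.0, 0.05            # assumed load: 10 ohm in series with 50 mH
Z_L = complex(R_L, w * L)

C = L / abs(Z_L) ** 2          # ideal correction capacitance from the answer
Z_C = 1 / (1j * w * C)         # ideal capacitor (R_C = 0)
Z_total = Z_C * Z_L / (Z_C + Z_L)

print(f"C = {C * 1e6:.1f} uF")
print(f"load power factor:      {math.cos(cmath.phase(Z_L)):.3f}")
print(f"corrected power factor: {math.cos(cmath.phase(Z_total)):.3f}")
print(f"|Z_total| = {abs(Z_total):.2f} ohm vs |Z_L| = {abs(Z_L):.2f} ohm")
```

The corrected impedance comes out purely resistive, equal to \$\left|Z_L\right|^2/R_L\$, exactly as derived.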
With the capacitor in parallel, there is now an additional local source of energy that can take up some or all of the burden of supplying current to the inductive load (while the inductance resists changes in current as it sets up its field), after which the source takes over again and recharges the capacitor.
What is the use of the NOT CONNECTED pin on a 741 op-amp? When I attended a viva on IC applications, I was asked, "What is the use of the NC pin on a 741 op-amp?" I answered that it just indicates that no connection is needed to that pin; in return I was asked, "Then why is this pin placed there?" Unfortunately I couldn't answer. I researched it and found in this article that it is filler space. Is there anything else I missed here, and what is filler space? <Q> The reason for no-connect pins is that manufacturers use standard IC packages. The package for a given device must (clearly) have at least the number of pins required to bring out all functional pins; in the case of the 741 (and countless other devices), the number of pins required for functionality is less than the number of pins on the package. On complex devices with hundreds (or more) of pins, it is not uncommon to see numerous NC pins. Updated for a comment by Spehro: I should have noted this, especially as I have recently been using a device that has just such an arrangement: "The LT3752/LT3752-1 are available in a 38-lead plastic TSSOP package with missing pins for high voltage spacings." <A> I would like to add to the previous answers that there are actually two distinct cases. 1) An extra pin in the package is not physically connected to the die. It is OK to connect other things to this pin (I have done this on a super-tight board because it helped the routing). 2) The pin is used during the manufacturing process to test or trim something. This is much less common. In this case the pin must float, as directed by the datasheet, for proper operation. <A> There is another reason for NC pins. For example, see Altera's "MAX® V Device Family Pin Connection Guidelines" PDF. In that document it says of NC pins: "Do not connect these pins to any signal. These pins must be left unconnected." This would not matter if the pins were truly unwired.
<S> Usually the manufacturers, for fear of reverse-engineering, don't disclose the functionality of those pins.
Sometimes those pins are used by the manufacturer to put the IC in test mode during manufacturing validation tests. Those pins that are not required for functionality are still required for the standard package and are simply not connected to anything.
What type of screwdriver do I need for this type of small 5-sided star socket screw? I'm trying to open up an electronic device held by the following 5-sided star screw. I need a precision 5-sided socket screwdriver and I can't seem to find the correct one. How would one find the correct tool for these types of screws when we have never encountered them before? Is there a systematic way of naming these screws so we can find the correct tool easily (e.g. "pentagon star screw")? <Q> Most likely tamper-proof Torx Plus, or Pentalobe, which is mainly an Apple thing. Some places simply call them star screws/heads. Check for security bit sets; I've gotten some at a dollar store, Home Depot, Lowes, eBay, the usual places. If you don't care about saving the screw, any common screw extractor set will work. You could also try the melted Bic pen method. <A> You generally have two fairly binary choices for security screws, as seen in Passerby's excellent answer (+1). You can get a scuzzy set of dubious bits for cheap at a flea market, eBay, Ali (but I repeat myself), Harbor Freight (USA), Princess Auto (Canada), etc., or you can get an official driver for perhaps 10-100x as much that is made to spec and properly hardened to last for years of driving screws every day. Usually the cheap ones are OK enough for DIY projects. Sometimes you can get them in a set with plastic pry tools ("spudgers") that help in opening the case of the device in question (usually there are a few screws and a bunch of snaps around the outside holding the bezel or case together; you will probably need to pry the case carefully to get the snaps to let go without breaking them or marring the case excessively). See, for example, this web page: <A> That's a very blurry picture, but generally five-pointed screws are either Pentalobe (Apple's thing) or 5-point security Torx, which are tamper-proof screws, typically with a post in the middle.
Searching either term will pull up loads of results at your online retailer of choice. <A> You will need a security screwdriver; they are often sold in kits with 30-50 different types of bit. <A> My ham-fisted approach usually works quite well, even if it is a little unorthodox. Apply a bit of solder flux to the screw head, then heat with a soldering iron until solder will flow. Add a decent-size blob of solder so that you get a nice round surface, sort of like a ball of solder sitting on top of the screw head. Then, while the solder is still molten, stick a slotted screwdriver blade into the solder blob and hold it still until the solder hardens. Be sure not to let anything move while the solder is hardening, or the solder will crystallize and you will have to reheat it and try again. When the solder is thoroughly hardened, GENTLY turn the screw out. Redo the whole flux-and-heat process if necessary. This technique works quite well for screws in locations where you can get a soldering iron to the screw head. I have found it to be quite effective on screws into plastic, and somewhat effective on screws into metal.
The general term for this type of screw is security screw or tamper-proof screw .
Can I use this 950°F soldering iron on electronics? I purchased this soldering iron yesterday, and after doing some research I'm not convinced it is safe to use on electronics. It doesn't state the wattage anywhere and I can't find it online. It only says that it heats to 950°F, which I'm pretty sure would destroy a PCB. Should I return this and buy a 25-watt soldering iron? <Q> It's inappropriate for electronics. That said, if you have a deft touch it's possible to use such an iron without destroying the copper-laminate adhesive, but I would not recommend it. For example, if you tried to desolder something, the board would quickly be ruined: lifted pads. You could reduce the power to half by wiring a 1N4007 in series with the mains, but a better choice would be to buy a soldering iron or solder station that has closed-loop control of the tip temperature, preferably adjustable. The 25 W iron you mention would not have this feature. Try to pick something with replacement tips you can get locally (or at least readily) in various sizes and types. <A> I've had many experiences using poorly maintained or cheap irons, and it can be incredibly frustrating, especially when it destroys something you poured time and money into. I would consider investing a little more money in an iron with variable output. These are typically units with a base station (not the ones that plug directly into the wall). They can cost 2-3 times what you paid, but I think it is definitely worth it in the long run. I used a Weller iron when I was starting out, and it worked quite well. Currently I am using a Hakko FX-888D, which was a great investment, reducing time spent fixing mistakes. For the price, it's a great tool. That said, it is not necessary for beginners, and the price reflects this. To recap: it is not necessary to spend a ton of money on a soldering iron when starting out.
Instead, look for an iron that has positive customer reviews and performs well for the cost. I also recommend finding a tool with variable temperature control. Combined, this will give you satisfying results and make the learning process much more enjoyable all around. <A> It depends on what you're soldering. If it's wires and components with long leads, then go ahead, but don't dwell on any one spot for too long. Don't even think about soldering any ICs; you'll destroy them. Another problem you might run into is lead vaporization: lead melts at about 621°F (327°C) and begins to vaporize above 1100°F (593°C), so get a good air filtration system. Or just get a new iron; cheap, semi-decent Chinese ones are available from several places.
As stated previously, I would really avoid using that iron. Look for guides and tutorials online when you begin soldering, and use good technique.
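For the curious, the series-diode trick mentioned in one of the answers above is easy to verify numerically: half-wave rectifying the mains halves the mean-square voltage, and therefore the power in a resistive heating element. This is my own sketch; it treats the element as a fixed resistance, which is a simplification since a real iron's element resistance varies with temperature.

```python
# Hedged numerical check of the series-diode trick: half-wave rectifying a
# sine halves its mean-square value, so a purely resistive heating element
# dissipates half the power. (Real elements change resistance with
# temperature, so this is only approximate.)
import math

N = 100000
full = [math.sin(2 * math.pi * k / N) for k in range(N)]   # one full mains cycle
half = [v if v > 0 else 0.0 for v in full]                 # after the series diode

mean_sq = lambda xs: sum(v * v for v in xs) / len(xs)
ratio = mean_sq(half) / mean_sq(full)
print(f"power ratio (half-wave / full-wave): {ratio:.3f}")  # 0.500
```

Halving the power lowers the idle tip temperature, but as the answers note, it is no substitute for closed-loop tip-temperature control.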
Is it worth trying to make a Peltier-based refrigerator? I built a box from 5 cm thick Styrofoam with outer dimensions of 60x55x80 cm. I wanted to use it as a fermentation chamber for home-brewing purposes. The reason I did not choose a regular (or cheap used) refrigerator is that I need it to fit in a small chamber with floor dimensions of 80x130 cm, and the door of that chamber is only 55 cm wide, so getting a refrigerator inside would most likely be impossible. I decided to buy two Peltier modules (TEC-12715) and attached them to a hot-side heatsink of 23x17x4 cm and a cold-side heatsink of 10x17x5 cm. I power each separately from a dedicated ATX power supply, each of which can handle 18 A max at 12 V, so there is a reserve. The hot side is cooled by 5 fans removed from old cases and ATX power supplies. The smaller, cold heatsink is inserted into the top of the box and sealed, with one 12 cm fan on it. Unfortunately I can only get down to approximately 18 degrees Celsius inside (a 35 L barrel with approximately 20 L of beer, plus 4x 1.5 L bottles of water just as an accumulator of "cold"). The hot heatsink sits at 50 degrees Celsius most of the time (I have a temperature sensor stuck to its middle, near the Peltiers, which are also in the middle). The ambient temperature is now 29 degrees Celsius, so the difference is at most 11 degrees Celsius. Should I consider this "normal" given the inefficiency of Peltier modules, or should I be able to squeeze more cold inside? I read somewhere on the internet that a Peltier's efficiency is terrible once the hot side is over 40 degrees, but I am not sure if this is true. I planned to add temperature regulation as well, but since I wanted to get closer to 15 degrees Celsius, there is no point in that yet. Any ideas or suggestions? <Q> It would be worthwhile to try to reduce the temperature of the hot side. The biggest heat leak into your fridge is going to be through the TEC itself.
The hotter it is, the more heat it leaks, and the harder it has to work to get that heat back out. The classic mistake with TEC coolers is to make the hot-side heatsink too small the first time. (Welcome to the club :^) <A> You might try looking at what commercial units do. Here, for instance, is a commercially available TEC cooler chest. It's smaller than yours, but it only draws 3.5 A at 12 V. Note that it is specified for a 36°F (20°C) temperature differential. You don't say how long you've run your cooler, but be aware that 26 liters of water takes a lot of cooling. As per the Amazon link, you should wait at least 24 hours before making your measurements. <A> I am using a Peltier in an old mini fridge. It's working well; it keeps freezing the inner cold sink. I'm using a small 120 V fan from Walmart on the hot side and a little computer fan inside. It gets cold until it builds up too much ice, after which I have to turn it off for about 30 minutes. Remember that the more heat you dissipate, the colder it will get; less moisture also means greater cooling. <A> Will achieve an inside temperature of 27 degrees below ambient while full of beer.
I notice no one has mentioned using water cooling for the hot side. My best wee chilly bin uses a Peltier at each end, water-cooled via a pump, a small radiator, and a fan.
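For perspective on the fermentation-chamber question, here is a back-of-envelope sketch (my own, with an assumed typical EPS conductivity) of the conductive leak through the Styrofoam walls alone. The result suggests the walls are not the bottleneck, supporting the point above that the TEC and its hot side dominate:

```python
# Hedged back-of-envelope: conductive heat leak through the Styrofoam walls,
# using dimensions from the question and a typical expanded-polystyrene
# conductivity (assumed, not measured).
k = 0.035                      # W/(m*K), typical EPS (assumed)
t = 0.05                       # wall thickness, m
dims = (0.60, 0.55, 0.80)      # outer box dimensions from the question, m
A = 2 * (dims[0] * dims[1] + dims[0] * dims[2] + dims[1] * dims[2])
dT = 29 - 15                   # ambient minus target interior, K
Q_walls = k / t * A * dT
print(f"outer area: {A:.2f} m^2, wall leak: {Q_walls:.1f} W")  # roughly 25 W
```

Roughly 25 W leaks through the walls at the target differential; a TEC-12715 pair can pump far more than that at low delta-T, but pumping capacity collapses as the hot-side temperature and the temperature difference rise, which is why cooling the hot sink aggressively matters so much.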