| source | target |
|---|---|
Are Gerber files of inner layers interchangeable? Regarding the Gerber files of a 4-layer PCB with 2 inner layers: Top layer: Analog signals Inner layer 1: Digital signals (SPI and I2C) Inner layer 2: Power planes (+3.3V and +12V) Bottom layer: GND plane For better isolation between analog and digital signals and to prevent digital noise from coupling back into analog signals, I would like to swap the two inner layers, so that there is a power layer between analog and digital. Unfortunately, my EDA software (KiCad) does not permit reordering the stack. This is why I'm wondering if it would be possible to simply rename the two Gerber files before sending them to the manufacturer, so that the layers get swapped. My guess is that the layers are only connected through vias and PTHs for THT components. Am I missing something here that might break the PCB if I manually rename the files? <Q> The Gerber files do not specify the order of layers. <S> As long as you don't use blind or buried vias, the layers can be stacked in any order. <S> I always included a "readme" file with my PCB order specifying the desired stack-up order. <A> If you want to swap the layers directly in KiCad, the option you need is Edit -> Swap layers. <S> After swapping the layer contents, you can update the layer names in Design rules -> Layer setup. <A> To make sure that you didn't make a mistake, install an external Gerber file viewer (not connected to or produced by KiCad). <S> Then, after renaming, view the project in this viewer and imagine what others would see. <A> I go one further than Peter in the other answer and include both a "ReadMe.txt" file and a FAB drawing with each PCB order.
<S> The FAB drawing is very detailed and includes information related to stackup, board dimensions, and fabrication notes, including solder mask and silkscreen ink colors and any special requirements for: breakaways, bare-board testing requirements, non-plated hole locations and dimensions, and dielectric thickness specifications between layers. <S> Do note that you have to be careful which vendors you order your first boards from. <S> Some prototype PCB houses think they do not have to look at your FAB drawing and will try to build your PCBs to some standard recipe, and you will end up with something different than you expect. <S> I have had first-hand experience with this from at least one USA prototype PCB vendor that refused to look at the FAB and made boards that were totally useless. <S> I have had very good luck with using vendors in China that followed my FAB and made exactly what I wanted.
| The file names for the individual Gerber files may vary between different CAD systems, and may or may not imply the desired stack-up order. Assuming you corrected the vias accordingly, yes, you can rename the files the way you want in order to change the layer order.
|
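The rename-to-swap approach from the first record can be scripted so the two inner-layer files can never end up with the same name mid-way. This is only a sketch: the KiCad-style file names in the comment are hypothetical, and the function assumes both files exist in the current directory.

```python
import os

def swap_gerber_files(inner1: str, inner2: str) -> None:
    """Swap two Gerber files on disk by renaming through a temporary name,
    so neither file is ever overwritten."""
    tmp = inner1 + ".tmp"
    os.rename(inner1, tmp)      # inner1 -> temporary name
    os.rename(inner2, inner1)   # inner2 takes inner1's old name
    os.rename(tmp, inner2)      # original inner1 takes inner2's old name

# Hypothetical usage with KiCad-style layer file names:
# swap_gerber_files("board-In1_Cu.gbr", "board-In2_Cu.gbr")
```

As the answers note, verify the result in an independent Gerber viewer afterwards, and state the intended stack-up in a readme or FAB drawing.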
Find the Thévenin equivalent circuit with the output terminals A and B I'm not sure where to start with this question. I don't know what to do with the dependent source and I'm not sure what I can do with terminals A and B. <Q> To find \$R_{th}\$, connect a fictitious voltage source of 1V between A and B, so that \$V_x = 1\$ <S> Now find I using KVL: $$-2 + 6 - 1 - 2I = 0$$ which means \$I = 1.5A\$ <S> Therefore, the Thévenin resistance \$R_{th}\$ seen from AB will be just \$V_x/I = 1/1.5 = 0.667\$ ohms <S> To find \$V_{th}\$, look at the original circuit. <S> It is the open-circuit voltage across AB, \$V_x\$ <S> \$V_x\$ should be equal to \$-2V_x + 6\$ because the current through the 2 ohm resistor is 0. <S> $$V_x = -2V_x + 6 \implies V_x = 2V = V_{th}$$ <S> Now you can draw your Thévenin equivalent circuit using \$R_{th} = 0.667\Omega\$ and \$V_{th} = 2V\$ <A> simulate this circuit – Schematic created using CircuitLab <S> You should start as usual by finding the \$V_{TH}\$ voltage using any technique you know. <S> \$V_{TH} = V_X + V_1 = V_X + 6V\$ <S> So all we need is to find the \$V_X\$ voltage. <S> We can write a KVL equation: $$V_2 - I_1 R_2 + 2V_{AB} = 0$$ $$V_2 - I_1 R_2 + 2(V_X + V_1) = 0$$ $$V_2 - I_1 R_2 + 2((V_2 - I_1 R_2) + V_1) = 0$$ $$I_1 = \frac{2V_1 + 3V_2}{3R_2}$$ <S> And the Thévenin voltage is: $$V_{TH} = V_2 - I_1 R_2 + V_1 = V_2 - \frac{2V_1 + 3V_2}{3R_2} R_2 + V_1 = \frac{V_1}{3} = 2V$$ <S> Next we need to find the \$R_{TH}\$ resistance using this circuit diagram. <S> simulate this circuit As you can see, I short the A and B terminals.
<S> And I want to find this short-circuit current \$I_{SC}\$ <S> Note additionally that since we have a short across the A-B terminals, \$V_{TH} = 0V\$, hence \$V_X = 0V\$ <S> So, the \$I_{SC}\$ current is: $$I_{SC} = \frac{V_1}{R_1} = 3A$$ <S> And the \$R_{TH}\$ resistance is $$R_{TH} = \frac{V_{TH}}{I_{SC}} \approx 0.667\Omega$$ <S> And the equivalent circuit will look like this <S> simulate this circuit <A> There are several ways to find the Thevenin equivalent. <S> Here is one that makes it particularly easy to solve this problem: find the short-circuit current and find the open-circuit voltage. <S> Note: you can ignore completely the left subcircuit (4V voltage source and 6\$\Omega\$ resistor), as the voltage across its terminals is completely determined by the dependent voltage source, and the dependent voltage source does not depend on any quantities of that left subcircuit. <S> Open-circuit voltage (i.e. \$i_X=0\$): \$v_{oc} = -2v_X + 2\Omega \cdot i_X + 6V = -2v_{oc} + 6V\$ → \$v_{oc}=2V\$ <S> Short-circuit current (i.e. \$v_X=0\$): \$i_{sc} = \frac{6V - v_X}{2\Omega} = \frac{6V - 0V}{2\Omega} = 3A\$ <S> So you get \$v_{Th} = v_{oc} = 2V\$ and \$R_{Th}=\frac{v_{oc}}{i_{sc}}=\frac{2}{3}\Omega\$
| You should start as usual by finding the \$V_{TH}\$ voltage using any technique you know.
|
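The Thévenin numbers derived in this record can be checked numerically. This sketch uses the values quoted in the answers (a 6 V independent source, a 2 Ω series resistor, and a dependent source of -2·Vx); it just replays the open-circuit and short-circuit algebra.

```python
# Values taken from the answers above.
V1 = 6.0   # independent source, volts
R = 2.0    # series resistor, ohms

# Open circuit (i = 0): Vx = -2*Vx + V1  ->  3*Vx = V1
v_oc = V1 / 3.0                 # Thévenin voltage, expect 2 V

# Short circuit (Vx = 0): the full V1 appears across R
i_sc = (V1 - 0.0) / R           # short-circuit current, expect 3 A

R_th = v_oc / i_sc              # Thévenin resistance, expect 2/3 ohm
print(v_oc, i_sc, R_th)
```

This agrees with both derivations: \$V_{TH} = 2V\$ and \$R_{TH} \approx 0.667\Omega\$.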
How can I determine the circuitry in this module? Previously I asked about the purpose of this module. Now I'd like to figure out how the module accomplishes that by diagramming the circuitry. What is the best way? The backside of the PCB is obscured by a metal disc. Is it sufficient to check for continuity between all the exposed terminals using a basic multimeter? What cautions do I need to take to avoid damaging the module? For example, does the capacitor need to be discharged beforehand, and if so, how? Or, is the white layer removable in a non-harmful way with a solvent? And if so, with what solvent? And would this reveal enough information to draw the schematic? <Q> There aren't a lot of parts. <S> The brute-force method is to de-solder all the parts, then use a continuity tester to make a map of the connections. <S> A simpler way that will probably work is to use an ohmmeter in low-voltage mode. <S> This is intended not to turn on a silicon semiconductor junction. <S> In other words, a diode looks open regardless of polarity. <S> Not all ohmmeters have such a mode, but it's out there. <S> This won't help with resistors, but if all the resistors are large enough compared to a copper trace, then you can still distinguish a direct connection. <S> If you can't find an ohmmeter with a low-voltage mode, you can rig up something yourself. <S> You're not actually trying to measure resistance, but determine whether the resistance is below a threshold. <S> One way you can do this is to start with a fixed current source, like 1 mA. Clamp that with a diode so that the voltage doesn't exceed 700 mV or so. <S> Then look for the voltage dropping to maybe 100 mV or lower. <S> At that voltage, no semiconductor junction will conduct. <A> Simply "track" the tracks on the PCB with your own eyes and use the multimeter continuity tester. <S> And ZD1 is a Zener diode.
<S> See the example of how I did it <S> A 1 USD 11 W LED bulb circuit and parts analysis <S> But I suspect the circuit may look something like this <A> Or... <S> You can do it the easy way and just click on this link for the datasheet: <S> **** <S> EDIT: <S> Sorry. <S> I thought that link would take you right to the part. <S> Try this revised link. <S> It may work. <S> http://www.datasheetspdf.com/pdf/774462/ETC/AMC7136/1 <S> If not, use the following homepage link and then search for AMC7136. <S> Scroll down a little until you see your part number and then click the PDF symbol to the right of it. <S> Don't click on the big blue words at the top that say "Download Datasheet". <S> That will point you towards a part that they are advertising. <S> http://www.datasheetspdf.com/ <S> The second schematic on the datasheet resembles what you have. <S> I would start with that one. <S> The datasheet also explains the operation of the device and circuit. <S> Here's a tip for new techs: if you're ever working on something, have no schematic for it, and believe the problem involves an IC or the circuit around it, check the datasheet. <S> The example schematics are provided as a starting point for engineers. <S> Unless modifications need to be made, more often than not, the circuit in the datasheet will be very close, and sometimes identical, to the device you are working on. <S> A lot of engineers will just copy the circuit and use it in their product. <S> Why reinvent the wheel, right? <S> Good luck!
| You could reverse engineer the circuit with a multimeter and also by holding the circuit board at various angles to see the traces under the white masking... The effective resistance needs to be 100 mΩ or lower to trigger your continuity tester.
|
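The continuity-mapping step described in this record (probe every pair of exposed terminals, note which pairs beep) produces a list of connected pairs; merging those pairs into nets is a classic union-find job. The pad names in the usage comment are made up for illustration.

```python
def build_nets(pads, connected_pairs):
    """Group pads into nets, given the pad list and the pairs that
    showed continuity on the meter."""
    parent = {p: p for p in pads}

    def find(p):
        while parent[p] != p:
            parent[p] = parent[parent[p]]  # path compression
            p = parent[p]
        return p

    for a, b in connected_pairs:
        parent[find(a)] = find(b)          # union the two groups

    nets = {}
    for p in pads:
        nets.setdefault(find(p), []).append(p)
    return sorted(sorted(net) for net in nets.values())

# Hypothetical readings: C1 pin 1 beeps to R1 pin 2, which beeps to U1 pin 3.
# build_nets(["C1.1", "R1.2", "U1.3", "R1.1"],
#            [("C1.1", "R1.2"), ("U1.3", "R1.2")])
```

Each resulting net becomes one wire in the reverse-engineered schematic.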
Using sensors to get a ball's position Is it possible to use some sort of sensors (or if anything else is better, please tell) to obtain a football's position within the goal posts? What I am trying to achieve is the position of the ball as a goal is scored. Then using that position (x,y) to plot. I have little to no experience with this sort of thing (software engineer), so I'm just wondering if it's possible. If so, I would love elaboration or research links. <Q> There are various research papers related to this topic and many methods have been tried. <S> Assuming you cannot use 35 cameras as mentioned above or a Hawk-Eye camera-based system, you may want to consider a radio transmitter placed inside the ball. <S> This generates a signal that is in turn picked up by antennas attached to the goalposts (see this patent document for more details). <S> There are some distinct advantages to this technology: the signal transmitted by the ball shouldn't be significantly affected by weather, lighting or the presence of players. <S> As long as the antennas receive the radio waves, a computer system can theoretically calculate the ball's position. <S> A disadvantage is the necessity of placing a radio transmitter within the football. <S> Firstly, the exact point from which the signal is transmitted will determine the position of the ball. <S> This seems obvious, but if the transmitter moves within the ball, the system will deem the ball to be moving, even if it isn't. <S> This has significant implications when millimetre accuracy is demanded and the ball deforms significantly when struck. <S> Source: <S> Here <S> Something similar is implemented in the iBall recently approved by FIFA. <S> Inside the soccer ball are sensors that wrap around the ball in both directions. <S> The second that the entire ball has passed the goal line, the system sends a signal to a watch that the referee wears, allowing the referee to know the goal was made.
<S> Adidas are also developing Smart Ball technology. <S> The ball has a computer chip inside that will relay information to the referee. <S> This also makes use of cameras, however. <S> Cameras are placed at a certain angle so that it is clear when the ball crosses the goal line. <A> What I am trying to achieve is the position of the ball as a goal is scored. <S> I think this can be solved quite reliably with four cameras. <S> Remember that a goal is scored the instant the whole of the ball passes out of the field. <S> The edge of the field is defined as the outside of the lines, which is also supposed to be the back edge of the goal posts. <S> I haven't tried this, but my first reaction is to put two cameras on the back edge of each of the vertical goal posts. <S> One maybe two feet from the bottom, the other two feet from the top. <S> Remember that the goal opening is 8 feet tall. <S> These cameras would be looking right down the plane that is the edge of the field. <S> They are therefore in the best position to see if a goal was actually scored, and if so, exactly when. <S> Since the positions and the view angle for each pixel are known up front, it only takes two cameras to compute a position. <S> With four cameras, the goalkeeper can be blocking the view from one side, and you still get the minimum two views you need from the other side. <A> Therefore vision systems are the only reliable method. <S> Computations must be made seconds after an event is triggered to determine the 2D tangent surface of each camera at the ball position near the goal line. <S> When the intersection of each tangential surface, using the computed ball centroid and its circumference, has crossed the inside of the goalpost plane, a goal is detected, and then the relative parallelogram computations yield where the XY intersection occurs relative to the goal aperture, and thus the position of the ball crossing the goal as if it were viewed straight on.
<S> Although 4 (2D) cameras may be sufficient, just as in GPS location, six (6) cameras are better suited for situations with ambiguity, visual impairment, low SNR, etc., <S> to improve the error rate to acceptable levels. <S> As it happens, I found out such a system has already been FIFA qualified: "HAWKEYE" <S> http://www.fifa.com/about-fifa/news/y=2012/m=7/news=ifab-makes-three-unanimous-historic-decisions-1660541.html <S> 1 mm tolerance indicates pretty high resolution. <S> Accuracy is 1 mm crossing the goal line. <S> Ball position accuracy is the size of the ball.
| Embedded sensors in the ball were tried and ruled out because of the lack of range resolution with near-field sensors and the rotation of the sensor. The sensors inside the ball consist of a web of copper wire that uses induction, allowing communication with an antenna array that is mounted to the goal frame. A more recent photo, this time from the Hawk-Eye site, illustrates 7 overhead cameras with the ability to detect a goal with any 2 cameras, and position accuracy improves with the number of cameras and the quality of the image.
|
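The "two cameras compute a position" claim in this record comes down to intersecting two rays in the goal plane: each camera contributes only a bearing to the ball, and two known camera positions pin down the (x, y) point. A minimal 2D sketch, with camera positions and angles chosen purely for illustration:

```python
import math

def locate(cam1, ang1, cam2, ang2):
    """Intersect two rays (camera position + bearing in radians)
    to get the ball's 2D position in the goal plane."""
    x1, y1 = cam1
    x2, y2 = cam2
    d1 = (math.cos(ang1), math.sin(ang1))
    d2 = (math.cos(ang2), math.sin(ang2))
    den = d1[0] * d2[1] - d1[1] * d2[0]   # 2D cross product of directions
    if abs(den) < 1e-12:
        raise ValueError("rays are parallel; no unique intersection")
    # Solve p1 + t*d1 = p2 + s*d2 for t via the cross-product form.
    t = ((x2 - x1) * d2[1] - (y2 - y1) * d2[0]) / den
    return (x1 + t * d1[0], y1 + t * d1[1])
```

With four (or six) cameras you would intersect every usable pair and average, which is how redundancy against an occluding goalkeeper works.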
Is it cheating if I use an SRAM chip as a register file? I'm currently building my very own processor using discrete logic (74LSxx series and that) using my very own instruction set architecture. I am starting to rack up a bit of a bill because of all the chips I need. For any of you that have seen people build homebrew CPUs like this, no one that I have read about has used an SRAM chip as their register file. Would it be considered cheating if I used an SRAM chip as opposed to a TON of '374 flip-flop chips for the registers? This is supposed to be a processor made from discrete chips. <Q> No, it is not cheating around any rules. <S> This is because there aren't any rules in the first place. <S> This is your project. <S> You define it. <S> You can implement it in any way that satisfies you. <S> Nobody else cares. <A> As asked, this is purely a matter of opinion, hence not a valid question here. <S> However, do note that conventional processors require 3-port access to the register file - two ports to obtain operands, one to write back the result. <S> So you'll need substantially more complex datapath and control circuitry to prefetch and cache operands if you want to use a single-port RAM. <S> Or you can build something with a distinct and more privileged accumulator, and an instruction set only permitting a single non-accumulator register to appear as a source or destination (for that matter, there are CPUs with only an accumulator). <S> Using two memories written in parallel and read individually could simplify things a little, but that trick works best with dual-port memories. <S> Those are something you can buy in IC form, as well as being the common form of FPGA block RAMs. <A> Your project, your rules. <S> But as they say, "perfect is the enemy of done".
<S> If the project seems to be getting out of hand (be it with regard to complexity, time required, or money required), cutting yourself some slack on one corner of the design may be the difference between a nifty (if a bit insane) accomplishment, and a project that never got finished. <S> Besides, even if you do decide to "cheat", there's always the option of making version 2 later, with fewer shortcuts. <S> You could even try to take a possible future "upgrade" into account in the first version of the design, if you want to spend some time to make the "upgrade" easier. <A> Yes! <S> It is even cheating if you use any components you haven't made yourself solely from the natural materials you can find in your own garden. <A> I am also aware of a processor that uses single-ported SRAM for the general-purpose register file, a very fast/efficient processor in fact. <S> Where do you think the term register file comes from? <S> Registers in an SRAM. <S> With a pipelined architecture you could have a single-port SRAM and not necessarily take a performance hit. <S> Yes, it is perfectly fine to implement your "registers" in a register file built from an SRAM, so long as your design works. <S> I am not sure if you are trying to implement an existing processor/instruction set or make your own. <S> In either case, doing it with discrete 74xx parts, performance is not necessarily a goal compared to sanity and success. <S> If you have some SRAMs from that generation then absolutely. <S> You can tie LEDs to the address and data bus to make (more) blinky lights showing signs of life. <S> Or you can take the 6502 approach, and have 256 virtual or indirect registers (page zero) that are just memory or perhaps special memory. <S> And your real general-purpose register (or general-purpose registers) are few and fit in a few parts. <S> It worked quite well for MOS Technology.
<S> (The architecture, not necessarily the implementation, although they have shaved and scanned and reverse engineered the 6502, <S> so you can see how they actually did implement it.) <S> You probably also want to "cheat" and use a ROM for the look-up table that implements your microcode... <S> Even though that isn't cheating either; it is a known solution. <A> People who make homebrew CPUs (have you looked at any?) <S> tend to use SRAM for registers. <S> Nobody in their right mind would solder up a load of flip-flops. <S> Never mind affording it, the power needs, and the chances of getting it wired up properly. <S> You don't need dual-ported RAM if you only ever do one read or write at a time. <S> So to, e.g., INC a register, have your CPU read it on one cycle, into a buffer. <S> Increment the buffer in the next cycle, then write it back in a third. <S> Time-multiplexing! <S> Some sort of buffering will be needed if you're feeding 2 registers' contents into an ALU. <S> You could perhaps use just one buffer and get the second operand "live" from the SRAM. <S> But of course there's no "increment" pin on an SRAM chip! <S> You'll figure out where the buffering needs to be. <S> That said, there are 74-series registers. <S> Originally entire CPUs were made of 74-series parts, or at least of discrete logic chips before the 74 series was invented. <S> Searching "74 series register file" gave a few leads. <S> Though of course just because it was made once doesn't mean you'll find it now. <S> Have you looked into FPGAs, or even CPLDs and PALs? <S> PALs are too small to do a CPU with, but a few of them mixed in with the other logic might save you a few chips. <S> In an FPGA though you could implement entire CPUs. <S> FPGAs are basically thousands of logic gates on a chip. <S> You can choose what logic each gate does, and how they are connected. <S> You do this by writing code, like software. <S> Then shoot the results down a USB lead to a programmer.
<S> FPGAs are used a lot in consumer goods, and in many many other fields.
| A single-ported SRAM implementation would make sense for having a lot of registers, say 128, 256, or 512 general-purpose registers.
|
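The three-cycle, time-multiplexed INC described in the last answer (read into a buffer, increment the buffer, write back) can be sketched as a tiny simulation of a single-port register file. Register count and width are arbitrary choices for illustration.

```python
class RegisterFile:
    """Model of a single-port SRAM used as a register file:
    only one read or write can happen per cycle."""

    def __init__(self, n_regs=16, width=8):
        self.mem = [0] * n_regs   # the SRAM contents
        self.width = width

    def inc(self, reg):
        buf = self.mem[reg]                    # cycle 1: read into buffer
        buf = (buf + 1) % (1 << self.width)    # cycle 2: increment buffer
        self.mem[reg] = buf                    # cycle 3: write buffer back
```

The modulo models the natural wrap-around of an n-bit register, so an 8-bit register holding 255 increments to 0, just as real hardware would.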
How to recognize a Germanium Diode I am in a little trouble and seeking some help. I got a bunch of mixed-up diodes from an old collection. I know there are a few diodes which are germanium diodes. But they look so similar to 1N4148 and similar transparent-case diodes. The problem is, the diodes are old (but working) and it's very difficult to read the numbers printed on them. How can I identify and distinguish germanium diodes? Can I measure something with a multimeter, or create a simple circuit to identify the germanium diodes? I am looking to identify diodes like the 1N60 and 1N34A. I would highly appreciate your help! <Q> Use this schematic to test the diodes. <S> You can easily distinguish silicon and germanium diodes. <S> Silicon diodes should read approx 0.7V and germanium diodes should read 0.3V. <S> A little difficult to distinguish Schottky diodes though. <S> They should show approx 0.2V, which is close to 0.3V. <S> If you have a very stable power supply and a good meter you can distinguish this as well! <S> Good Luck! <S> simulate this circuit – <S> Schematic created using CircuitLab <A> Rig up something that puts a little current thru them, and measure the voltage. <S> For example, a 5 kΩ resistor in series with a 5 V supply should do quite well. <S> The current will be limited to 1 mA, and the reverse voltage to 5 V. <S> Neither should hurt any of the diodes you have. <S> Silicon diodes will have around 650 mV forward drop. <S> Germanium will have about half that. <S> Note that silicon Schottky diodes have about the same voltage drop as germanium diodes. <S> If you think there might be some Schottky diodes in the mix, then it gets more complicated. <A> http://en-us.fluke.com/training/training-library/test-tools/digital-multimeters/how-to-test-diodes-using-a-digital-multimeter.html <A> Using diode test mode on a DMM is the best way. <S> It will use some standard fixed current like 1mA to measure voltage up to maybe 3V.
<S> This is also useful for comparing LEDs. <S> If you don't have a DMM, get a good one. <S> Opinion: <S> There is no real need for old Ge, as Schottky performs better: the product of diode capacitance and forward series resistance (ESR), which is relatively constant, is better on Schottky, and ESR = k/Pd for power rating Pd. <S> In fact some manufacturers are making the 1N60 with Schottky silicon instead of the original germanium. <A> Real-world germanium diodes (even recent production) almost always come in a larger glass body (diameter about equal to or even thicker than a 1N4007, slightly longer. <S> Not unlike a small reed switch), either left clear or painted black. <S> With clear-case examples, the insides will appear mostly see-through hollow (instead of mostly filled with red/orange copper tampers, like you will see on a 1N4148 or similar), sometimes with a visible hair-thin wire going towards the actual semiconductor element. <S> This old case style has been used for silicon parts too, but is VERY uncommon for them. <S> Germanium semiconductors in molded plastic cases are an ABSOLUTE exception (the only one I am aware of is the AF279 HF transistor), since most germanium parts were made in processes that require the part to be kept in a clean, hermetically sealed case (which plastic molding does not reliably provide). <S> So, anything plastic molded will be silicon. <S> For power diodes, the same style of metal cases have been used for both Si and Ge devices. <S> If the labelling is partially readable: European parts whose designation starts with "A" are always germanium, "O" is so old that it is LIKELY germanium, "B" is silicon.
| Germanium diodes have a lower forward voltage drop than silicon diodes. The continuity test function on many multimeters has a "diode" setting that will tell you what the forward voltage is, from which you can infer the type of diode.
|
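The sorting procedure in the answers reduces to comparing the measured forward drop against a few thresholds. A rough classifier sketch, with cut-off values taken from the figures quoted above (~0.65 V silicon, ~0.3 V germanium, ~0.2 V Schottky); the boundaries between germanium and Schottky are genuinely ambiguous, as the answers warn.

```python
def classify_diode(vf_volts):
    """Guess the diode technology from the forward drop at ~1 mA.
    Thresholds are approximate; germanium and Schottky overlap."""
    if vf_volts >= 0.45:
        return "silicon"
    if vf_volts >= 0.25:
        return "germanium (or Schottky)"
    if vf_volts >= 0.10:
        return "Schottky (or germanium)"
    return "short / not a junction"
```

Used with the 5 V supply and 5 kΩ resistor from the second answer, the reading across the diode feeds straight into this function.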
Voltage drop across a resistor I have been trying to figure this out; kind of an amateur question, actually! Looking at the above circuit diagram, I am wondering why there is no voltage drop across the resistor R1. Part of the voltage should be dropped across R1, shouldn't it? But the simulation in Proteus shows that the voltage across both R1 and R2 remains the same. Can anyone help me out on this one, please? <Q> I'm going to try to combine the other two answers and bring it down in level just a notch. <S> All of this, as Innacio noted, is based on a simulation of ideal components. <S> In real life, the results are slightly different. <S> First, because the voltage is DC, the capacitor will fully charge and effectively become an open circuit. <S> Just pretend it isn't there. <S> The voltmeters are infinite impedance and so draw zero current. <S> Now look at Ohm's law: voltage = current x resistance. <S> Because the current draw is zero, the voltage drop over each resistor is also zero. <S> Thus, you see 9 volts at both nodes. <S> In real life, this will be different, though maybe not enough to see. <S> The cap will have leakage current, effectively becoming another resistor. <S> However, the effective resistance may be large enough to dwarf the actual resistors. <S> Also, the voltmeters will draw some current, but again, it may be so small that it has little effect. <A> The capacitor becomes fully charged. <S> The current thus is zero, and the voltage drop across both resistors is zero. <S> Did you not run a .DC sim? <A> This is Ohm's law, <S> i.e. V = IR. <S> More checks: <S> If a capacitor has 9 volts across it and it is fed from a 9 volt battery via a resistor (or two), <S> then the formula for current is I = C·dv/dt <S> (where dv/dt is the rate at which the capacitor voltage is changing with time). <S> Given that the capacitor voltage is constant at 9 volts, there can be no current through the capacitor.
<S> This is in agreement with the resistor analysis and you can only really conclude that there is zero current flowing. <A> There is a voltage drop, but it is insignificant compared to the voltage drop across the capacitor due to its leakage current. <S> And with ideal models such as those available in most simulators there will be no leakage current and no meter current and hence no drop across the resistor.
| If there is no voltage drop across a resistor, it means that either zero current is flowing or the resistance is zero ohms.
|
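The "capacitor fully charges, current goes to zero" argument in this record can be made concrete with the RC charging equations. The component values below (9 V supply, R1 = R2 = 1 kΩ, C = 100 µF) are assumptions for illustration, not taken from the question's schematic.

```python
import math

# Assumed values: 9 V source feeding C through R1 + R2 in series.
V, R1, R2, C = 9.0, 1e3, 1e3, 100e-6
tau = (R1 + R2) * C   # time constant of the charging path

def i(t):
    """Charging current, which decays exponentially toward zero."""
    return (V / (R1 + R2)) * math.exp(-t / tau)

def v_after_r1(t):
    """Node voltage between R1 and R2: supply minus the drop across R1."""
    return V - i(t) * R1
```

After a few time constants the current is negligible, so the drop across R1 vanishes and both meter nodes read the full 9 V, matching the Proteus DC result.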
What does changing the throttle to a BLDC ESC actually do? Background: I understand how the standard 50 Hz PWM protocol for hobby electronics works: varying the on-time from about 0.5 ms to 2.5 ms will throttle an actuator from roughly 0% to 100%, thanks to embedded controllers inside the servos or electronic speed controllers (ESCs) connected in series to a motor. In the context of a BLDC motor, I understand that the ESC applies a trapezoidal voltage to 2 of the 3 phases of the motor. The ESC looks for the zero-crossing point of the back-EMF on the third phase to energize the next pair of phases with a built-in lead (30 degrees IIRC). I don't understand how the electronic speed controller (ESC) acts upon these throttle commands. My questions: Does changing the throttle change the duration of the trapezoids? Using a standard multimeter, I measured the RMS voltage and current of an ESC/motor running at ~50% and ~75% throttle. I know the meter probably isn't rated for such high frequencies, but I trust the fact that RMS readings increased from 50% to 75% throttle. This suggests the ESC is modulating the duration of the input voltage trapezoid to the motor, since the peak voltage value is fixed by the battery (unless the ESC also somehow modulates that?). Note, I just realized I can test for this with an oscilloscope. I will do this tomorrow! How does the ESC maintain unity power factor? Does/how does it also control current? I'm assuming it aims for PF = 1 since this maximizes torque. Does changing the throttle setting below a certain point change the \$k_{t}\$ of a motor? The second plot linked below compares changing input voltage at 100% throttle to changing throttle at constant voltage. I understand that decreasing input voltage (input RMS voltage would be correct, yes?) shifts the torque-speed curve down and to the left, but why does the torque-speed response also start "drooping" at lower throttle settings?
Re-arranging the DC-equivalent model for torque as a function of speed, \$T = [Vk_{t} - {k_{t}}^2\omega]/R\$, the only way the slope can change is if \$k_{t}\$ or R changes. Slightly unrelated to hobby BLDC motors, but do the similar sinusoidally-driven BLACs (aka PMSM?) energize all 3 phases at once? If so, then one cannot sinusoidally drive a BLAC motor with sensorless control based on back-EMF measurements, correct? Plots from ARL technical report 6389. <Q> Typically, 1 ms corresponds to 0% or zero voltage, and 2 ms corresponds to 100% or full voltage. <S> The ESC continues to automatically commutate the motor as it turns, using zero-voltage sensing on the un-energised phase. <S> Obviously sensing this while PWM'ing the other two phases requires care, to avoid being upset by the switching transients. <S> As the mean voltage varies, so the speed varies. <S> The motor draws as much current as it needs from the ESC in order to maintain its speed. <A> Does changing the throttle change the duration of the trapezoids? <S> Not directly. <S> The duration of the 'trapezoids' is determined by commutation, which is synchronized to the rotor position. <S> The motor will rotate at the speed it wants to, determined by the applied voltage and torque load. <S> The controller must respond to this by commutating at the same speed and phase. <S> The throttle controls effective motor voltage by applying high-frequency 0-100% PWM, so you could say that it indirectly affects commutation timing because motor speed is proportional to applied voltage. <S> However motor speed is also affected by loading, which may vary independently of throttle level. <S> How does the ESC maintain unity power factor? <S> The ESC may adjust commutation timing to compensate for the lagging current caused by winding inductance. <S> Some ESCs do it automatically, others have fixed timing advance settings.
<S> With fixed timing unity power factor is rarely achieved, and the best setting is usually a compromise between power and efficiency. <S> Does changing the throttle setting below a certain point change the kt of a motor? <S> The controller relies on winding inductance to smooth out the current. <S> However in most ESCs the PWM frequency is barely high enough to maintain continuous current flow. <S> As the throttle is lowered (and PWM ratio reduced) current ripple increases until the current waveform becomes a discontinuous sawtooth. <S> Since torque is proportional to average current this lowers the effective torque constant. <S> At higher loading the current becomes smoother so the effective torque constant increases, causing the torque/rpm curve to become nonlinear. <S> do the similar sinusoidally-driven BLACs (aka PMSM?) <S> energize all 3 phases at once? <S> Yes, or no - depending on the controller. <S> There is no fundamental difference between BLDC and PMSM. <S> It is possible to power a BLDC motor with 3 phase AC, but with all 3 phases continuously powered you can't extract back-emf for zero-crossing detection. <S> However the PWM can be modulated to shape the trapezoid waveforms into a 'saddle' profile, which becomes a sine wave when the two driven phases are combined. <S> How to Sinusoidally Control <S> Three-Phase Brushless DC Motors <A> Separation of functions. <S> 1) <S> The ESC sees the incoming servo pulses, and decodes their width to a value describing the desired speed (usually 0 to 100%). <S> 2) <S> This is translated to a desired PWM duty cycle (0 to 100%) at the PWM rate, which is unrelated to the trapezoidal motor drive waveforms, and usually much faster. <S> 3) <S> The details will vary between sensored and sensorless and how soft starting works, but when running, the ESC monitors rotor position (either via sensors or back EMF) and generates the trapezoidal pulses following that position indication <S> NOT the PWM pulse width. 
<S> Thus, as you increase the throttle, the mean voltage delivered to the motor increases (by increasing PWM %) but the trapezoidal pulses don't change. <S> If that voltage allows the motor and its load to accelerate, only then will the position <S> sensing change the trapezoidal pulse timings. <S> So the answer is - yes, the trapezoidal pulses will change, but not directly or immediately following an input change.
| In a standard sensorless ESC, changing the throttle command to it changes its output PWM duty cycle, and so the mean voltage, delivered to the motor.
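As a numeric sketch of the DC-equivalent model quoted in the question, \$T = [Vk_{t} - {k_{t}}^2\omega]/R\$ — the \$k_{t}\$ and R values below are arbitrary placeholders, not measured motor constants:

```python
# DC-equivalent BLDC model from the question: T = (V*kt - kt^2 * omega) / R
def torque(V, omega, kt=0.01, R=0.05):
    """Shaft torque (N*m) at effective winding voltage V (V), speed omega (rad/s)."""
    return (V * kt - kt**2 * omega) / R

def no_load_speed(V, kt=0.01):
    """Speed at which torque crosses zero for a given effective voltage."""
    return V / kt

# Lowering the effective voltage (throttle/PWM duty) shifts the whole
# torque-speed line down without changing its slope, -kt^2/R.
```

Raising the throttle raises the effective V, sliding the whole line up; consistent with the question's observation, the slope only changes if kt or R change.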
|
Large button to power on/off items for disabled son My husband and I have a disabled son. We are going to build him a sensory panel wall. We would like to have large buttons that he can push to power on/off moving toys/lights/etc. However, we cannot find any that do this. We have tried this: https://www.amazon.com/gp/product/B012E8D7XW/ref=ox_sc_sfl_title_18?ie=UTF8&psc=1&smid=A2M1GBFESN1F7H But the button is too small and hard to push (he has spastic quad CP with dystonic and choreoathetosis movements.) The target has to be large and easy. My husband also tried making a wooden round spring piece over it. It is very difficult for our son to put enough pressure on it to turn it on. He needs something like this with an extension cord: https://www.amazon.com/gp/product/B00CYGTH9I/ref=ask_ql_qh_dp_hza However, this button is momentary. It only sends power while he is pushing it. We need a big button like this that will turn the power on once pushed. A second push would turn it off. OR if it automatically went off after a minute or two of turning it on, that would be wonderful. I searched the internet yesterday for hours and found nothing. Any ideas/help would be greatly appreciated!! <Q> Use a latching relay with the switch like Crouzet PJRS110A. Use the switch to activate the relay and the relay to switch the device. <A> If you're handy you may want to make your own. <S> The cover should have a hole in the back that press fits, or glues, onto the metal button. <S> If you need it really big, you may want to use multiple momentary switches per cover, and add a circuit to do the latching/unlatching function. <A> From my experience, I would use standard wall switches or "doorbell" buttons mounted onto a wooden/MDF panel. <S> What I mean is things like this, but not wall-mounted: <S> You should be able to do this reliably with standard DIY equipment. <S> For the wiring behind, I would recommend using redoable WAGO clamps so that things can be rewired as necessary.
<S> This would be a standard solution, the switches are large and basically impossible to break. <S> (The good ones can be stood on without any damage!)
| Buy some nice toggle buttons with metal shafts and use wood or plastic to create your own cover.
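The toggle-on/toggle-off behaviour with an optional auto-off after a minute or two is easy to sketch in software for whatever small controller ends up driving the relay; the class and its 120 s default below are illustrative, not a specific product:

```python
import time

class LatchingButton:
    """Press-to-toggle latch with an optional auto-off timeout, as could
    run on a small controller driving a relay from a momentary switch."""
    def __init__(self, auto_off_s=120.0):
        self.on = False
        self.auto_off_s = auto_off_s
        self._t_on = 0.0

    def press(self, now=None):
        """Call on each (debounced) press of the momentary button."""
        now = time.monotonic() if now is None else now
        self.on = not self.on
        if self.on:
            self._t_on = now

    def output(self, now=None):
        """Relay drive state; enforces the auto-off timeout."""
        now = time.monotonic() if now is None else now
        if self.on and now - self._t_on > self.auto_off_s:
            self.on = False  # auto-off after the timeout
        return self.on
```

The same two-state logic is what the suggested latching relay implements in hardware, with no software at all.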
|
Bit sequence from USB HID Joystick I wanted to connect a USB HID Controller to an Arduino and use it to control a Remote Controlled car. But I'm unable to find what bit sequence is sent when a button is pressed on the joystick. I would like to find what that sequence is. I'm running a dual-boot system with both Linux and Windows, so software that works on either one is fine. <Q> But I'm unable to find what bit sequence is sent when a button is pressed on the joystick. <S> I would like to find what that sequence is. <S> There isn't one. <S> USB devices operate on a polling basis. <S> They do not communicate over the bus outside of a transaction from the host. <S> Getting to the point of communicating with the device takes a significant amount of work to enumerate the device and configure it with an address. <S> Even once that's all done, there isn't a specific message used when a button is pressed. <S> The status report response from the device will have a field which includes a bitmap representing the buttons on the joystick -- one of the bits in that field will be set when the button is being held down, and clear when it is not. <S> Since the Arduino lacks a USB host peripheral, and runs at a relatively low speed (8-16 MHz) compared to the USB line rate (1.5 or 12 Mbit/sec), it will be excruciatingly difficult, if not impossible, to implement a USB host on an Arduino device. <A> USB really isn't that easy, but luckily somebody has already written a library and example code, you can find it here . <S> Also, code that works on Linux or Windows would be irrelevant to Arduino. <S> The above example uses the USB Host shield . <S> You need to have an Arduino that supports USB Host mode, not just USB Device mode. <S> Usually these Arduinos have USB-A ports or USB OTG ports. <S> By the way, if you have specific Arduino questions, there is an Arduino Stack Exchange that is better equipped to handle these types of questions.
<A> Based on your comments, what you want is the usbhid-dump package for linux. <S> This will print out in real time the hid descriptor for your usb hid joystick.
| Use an accessory which implements a USB host for you, like the USB Host Shield, or use a different microcontroller which supports USB host operation.
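As a sketch of the "bitmap" idea from the first answer: once a host has enumerated the device, it just reads fixed-size input reports and masks out button bits. The 3-byte layout below is hypothetical — a real joystick declares its actual layout in its HID report descriptor:

```python
def decode_report(report: bytes):
    """Decode a hypothetical 3-byte joystick input report: [buttons, x, y].
    Real devices define their own layout in the HID report descriptor."""
    buttons = report[0]
    pressed = [i for i in range(8) if buttons & (1 << i)]   # set bits = held buttons
    x = int.from_bytes(report[1:2], "little", signed=True)  # assumed signed 8-bit axes
    y = int.from_bytes(report[2:3], "little", signed=True)
    return pressed, x, y
```

There is no "button pressed" message on the wire; the host simply sees the corresponding bit set in each polled report while the button is held.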
|
What makes a LED a laser diode? High-level survey of this question is fine: After reading https://en.wikipedia.org/wiki/Laser_diode I still can't tell if the electronics that enable a diode to lase are different from those that enable it to emit light. So, in general, is a laser diode a LED plus some sort of optical resonator or cavity? Or are any laser diodes themselves electronically distinct from non-laser LEDs, meaning they don't look like a LED plus some extra physical structure to allow them to act as a laser? <Q> I still can't tell if the electronics that enable a diode to lase are different from those that enable it to emit light <S> It's not the electronics, it's the optical cavity. <S> If the optical signal is fed back through the gain medium (the PN junction) <S> such that the round trip loss is no more than the round trip gain, an "LED" will start to lase. <S> A laser diode's cavity can be formed by cleaved facets on the surface of the chip, Bragg reflectors patterned into the chip, or even external lenses and/or mirrors of some kind. <S> Generally, a device designed as a laser diode will also include a waveguide structure on the chip (and overlapping the junction) to facilitate low round trip loss, while a device designed to be an LED won't have any distinct waveguide structure, though there's also such a thing as a resonant cavity LED (RCLED). <A> LED: <S> The voltage on the diode lifts the free electrons across the bandgap to a higher level. <S> They emit light when they drop back to the lower level. <S> Due to the rules of quantum mechanics, when this happens spontaneously is random if no other measures are taken. <S> The degrees of freedom in a LED allow for variable wavelengths (frequencies) and points in time. <S> Thus the emitted photons are "incoherent". <S> LASER: <S> The degrees of freedom for the photons are removed. <S> The optical cavity allows only one (or very few) wavelengths (those for which the resonator length is an integer number of half-wavelengths).
<S> And the previously emitted photons "passing by" stimulate the emission of the new photon. <S> So most photons have the same phase and frequency. <S> They are "coherent". <S> Even though the LED already has a very small variation of wavelength, the LASER optics reduce that variation. <S> The counter-intuitive aspect of a LASER comes from quantum mechanics. <S> You might think that a photon is emitted spontaneously and then would resonate if it has the right wavelength that fits the geometry of the resonator. <S> But due to quantum mechanics the geometry of the LASER-(diode) makes it very unlikely for a photon to be emitted spontaneously or at another wavelength. <A> Diode lasers are kinda cool in that they "violate" a few laser rules: <S> The gain of semiconductors is so great that even though the radii of the facets creating the cavity are really large (i.e., essentially flat), it still lases. <S> (The laser equation predicts that infinite gain is necessary for a pair of flat surfaces to lase)! <S> There's a proof that at least three energy levels are needed for a pumped medium to lase, but semiconductor lasers only have two (because they are not optically pumped, but electrically pumped). <A> In order for a LED to be considered a "laser" LED, its design must be such that a certain amount of the light it produces must be reflected back onto itself, by optical (or electrical) means, so that newly created (via stimulation) photons are "in step" with the previous ones, thereby creating a coherent beam of photons. <S> Meeting the Stimulated Emission of Radiation requirement is what makes it a laser!
| A diode laser is an LED in an optical cavity.
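The "round trip gain at least equals round trip loss" condition can be written as a threshold-gain formula for a simple Fabry-Perot cavity; the numbers below (~300 µm cavity, ~32% cleaved-facet reflectivity typical of GaAs) are illustrative assumptions:

```python
import math

def threshold_gain(length_cm, R1, R2, alpha_i=10.0):
    """Threshold modal gain (1/cm) for a Fabry-Perot cavity: the gain at
    which round-trip gain equals round-trip (mirror + internal) loss."""
    mirror_loss = math.log(1.0 / (R1 * R2)) / (2.0 * length_cm)
    return alpha_i + mirror_loss

# A ~300 um chip with ~32% cleaved-facet reflectivity needs only a few
# tens of 1/cm of mirror loss to overcome - easily within reach of a
# semiconductor junction's enormous gain, which is why flat facets suffice.
g_th = threshold_gain(0.03, 0.32, 0.32, alpha_i=0.0)
```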
|
Alternative to a potentiometer to regulate 50 W I want to regulate the input current into an ultrasound transducer (40 kHz, 50 W). The signal is being generated by a PCB. I read that regular potentiometers are usually employed up to a power of 1 Watt. I have found potentiometers that could be used, but they seem a bit bulky and expensive (ca. 80 EUR). Is there an alternative to a standard potentiometer to regulate the current up to 50 W? <Q> Don't try to scale down the 50 W signal coming out of the driver. <S> Instead, control the driver circuit to produce a lower amplitude signal. <S> This is the same concept as a volume control for audio. <S> There isn't some big fat pot between the power amp output and each speaker. <S> Instead, the volume control adjusts the amplitude of the low power signal going into the amplifier. <S> Show the power driver, and how it is controlled. <S> From that we can probably recommend ways to cause it to make less power. <S> Without those details, there is little to say. <A> Depending on the impedance of the ultrasonic transducer, I have in the past used a signal generator and an audio power amplifier. <S> Some of them do go beyond 40 kHz. <A> Piezo devices are dielectric resonators and <S> as such the current is proportional to voltage and frequency. <S> If you want to regulate the power, the most convenient way is to have a voltage-controlled Vcc to the driver, which can be done with a low power pot to an adjustable SMPS or LDO in the feedback ratio.
| You could use a MOSFET as a Voltage Controlled Resistor, as long as you use a large heat sink and perhaps a fan.
|
Converting PWM to analog voltage I need to convert a PWM signal with varying duty cycle from the Arduino to an analog voltage from 0-5 V. I used an RC filter with a time constant of 3.9 ms. The PWM output has a period of 2 ms. I experienced a loading effect at the output and therefore I used an op-amp as a buffer. However, I am still experiencing the same problem. My output is ranging from 1.2-5 V. I need the output to vary across the entire range of 0-5 V. Can anyone suggest a solution for this? simulate this circuit – Schematic created using CircuitLab The op-amp is LM358N. <Q> I used an RC filter with a time constant of 3.9 ms. <S> The PWM output has a period of 2 ms. <S> You need a longer time constant for your filter, I would suggest. <S> I would also suggest that you simulate this to see what the output looks like and notice that some artefacts of the PWM signal are still present on the DC level: - The above uses your numbers and yes, adding a load will alter the filter characteristics and give different results (more ripple voltage). <S> If you are looking for a more powerful current delivery system you need a power amplifier or the addition of a transistor. <A> What supply voltage are you using? <S> If it is a single 5 V supply then don't expect the LM358 to be able to make 0 to 5 V at its output. <S> Especially when the output is loaded, it will be unable to take the output to the supply rails. <S> So either give the LM358 more "headroom": give it a -2 V (or less) and +8 V (or more) supply. <S> For example, use a -10 V / +10 V supply. <S> Also with this opamp you cannot load the output too much if you want it to be able to reach the supply rail voltages. <A> You can always make a capacitive voltage booster and then feed that into MOSFETs in follower mode. <S> Keep in mind that capacitive voltage boosters are not efficient, they are always less efficient than 50%. <S> It would look something like <S> this : <S> I've made arrows that show which nodes the graphs represent.
<S> The first graph, the one on the left, is the duty cycle of your PWM. <S> The second graph is the PWM the circuit actually sees. <S> The third graph is the filtered output of your PWM. <S> The fourth graph is the output of this circuit, which coincides with the voltage over the first LP filter and the duty cycle of the PWM. <S> If you are actually trying to drive some small motor or something else, then you might as well remove the 1 kΩ at the output. <S> The 1 kHz sawtooth and the 3 V going into the op-amp to generate the PWM is not an actual part of the circuit, that is a model for the PWM coming out from your MCU. <S> The output from the first LP enters a P-MOS that is in a follower configuration. <S> The voltage at its source will be offset by the threshold voltage of the MOSFET, so in order to remove this offset I add an N-MOS as the final stage, also in follower configuration. <S> If the node that says 7.67 V had been 5 V instead, then the output would only have been able to swing between 0 V and 5 V minus the threshold of the N-MOS. <S> In this example the threshold voltage is 1.5 V, so as long as we're feeding it with 6.5 V <S> then the output can swing between 0 V and 5 V. <S> The " Some oscillator " could be a pin on the MCU or some other oscillating circuit. <S> I also added another LP filter for extra filtering. <S> However, you could also just ignore the MOSFETs in follower configuration and just hook up the 7.67 V supply to your LM358 and be done with it. <S> I just wanted to share the MOSFET solution first to show that you don't have to throw op-amps at everything to solve every problem.
| But a much easier solution is to use an opamp with rail-to-rail input and output, for example the MCP6001 . If you have decent bipolar supplies you should be able to use an op-amp but remember, an op-amp can only usually supply loads of up to 20 mA.
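A rough back-of-envelope for why a 3.9 ms RC is marginal against 2 ms PWM: a common first-order estimate of the peak-to-peak ripple (valid when RC is much larger than the period) scales with period/RC, so stretching RC by 10x cuts ripple by 10x:

```python
def pwm_ripple(v_supply, duty, period, rc):
    """First-order peak-to-peak ripple estimate for an RC-filtered PWM
    signal; only valid when rc is much larger than the PWM period."""
    return v_supply * duty * (1.0 - duty) * period / rc

worst = pwm_ripple(5.0, 0.5, 2e-3, 3.9e-3)   # the question's numbers, ~0.64 V
better = pwm_ripple(5.0, 0.5, 2e-3, 39e-3)   # 10x larger RC, ~0.064 V
```

The trade-off for the larger RC is a slower settling time when the duty cycle changes.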
|
240 V AC to 5 V DC power supply recommended fuse Admittedly this is quite a novice question but I just wanted to make sure before I start a potential fire, as this project is destined for an enclosed space behind a gyproc wall... I picked up a Vigortronix 230 V AC to 5 V DC PSU to power a Pi Zero W and camera. I'd like to put an in-line fuse on the live wire and after looking at the data sheet I'm unsure what size of fuse I should be using. Which figures should one use to calculate this? Also, would it be good practice to place a capacitor on the 5 V output, or is that unnecessary - if so, what size of cap? Lastly, would there be any other recommendations for using this type of power supply? Thanks in advance. <Q> 5 V at 1 A gives an output power of 5 W. <S> Assume it is 50% efficient (conservative); this gives an input power of 10 W. <S> 10 W at 240 V is 10/240 A ≈ 1/24 A ≈ 42 mA. <S> Any fuse around 0.1 A would be enough, if you can find one that small. <S> You should not need another capacitor across the output of that supply. <S> In fact some supplies have a maximum allowed capacitance on the output. <A> The fuse is to protect against excessive current, so you need to look at the current rating of the device. <S> The datasheet says the module takes 70 mA continuous at 240 V AC input for 5 W output. <S> You could use a 1 A mains fuse. <S> It also says the inrush current could be up to 25 A, so you need a slow-blow (a.k.a. timed or time-delay) fuse. <S> If the inrush current tends to blow the 1 A fuse, use a 3 A fuse. <S> For maximum peace of mind, find a reputable fuse manufacturer and consult their datasheets with regard to the inrush current. <S> Note that there are fake fuses available from online auction-type websites, so you might want to track down a reputable seller. <S> With regard to a capacitor (or other filtering, e.g. an inductor), check if the output specifications meet the requirements for your circuit. <A> There is an inrush current of 15 to 25 A.
<S> So you want a slow-blow fuse. <S> The max steady current is 100 mA, so give it some margin and select one rated to safely break at your mains voltage. <S> You can easily find 250 V rated slow-blow fuses for various currents in a 5x20 mm cylindrical package. <S> That package size is often used for fuses on dimmer outputs and (old) low-voltage lighting transformers, and in UK plugs.
| Note that the wire used to connect it must be of at least the same current rating as the fuse so that in a fault condition the fuse will blow cleanly instead of the wire melting messily.
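The fuse-sizing arithmetic from the first answer, as a sketch (the 50% efficiency figure is the answer's deliberately conservative assumption):

```python
def mains_input_current(p_out_w, v_mains, efficiency=0.5):
    """Steady-state mains current for a small offline converter, using a
    deliberately conservative efficiency assumption."""
    return p_out_w / efficiency / v_mains

i_steady = mains_input_current(5.0, 240.0)   # 5 W out at 240 V -> ~42 mA
# The datasheet's 15-25 A inrush is what forces a slow-blow (T) type fuse;
# its continuous rating just needs comfortable margin over ~42 mA.
```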
|
GPS QA during manufacturing Testing and integration engineers on electronics.stackexchange How do you quality check whether a GPS module and its subsystems are up to the mark during manufacturing? My company recently had issues with a few of its devices. Some weren't getting a GPS lock (ever) and the issue was found to be the antenna (once replaced it worked fine). But we can't test every device (over 5000) for a GPS lock during manufacturing (or is that the only way?). Is there a standard way to check to make sure the hardware won't let us down? Edit: Can I assume the hardware is fine if I receive a valid date/time (one other than the default sent by the hardware)? <Q> There is no free lunch here. <S> You need a solid test plan. <S> This is an area well worth investing in, because once you develop a reputation for shipping shoddy products, it's pretty much impossible to shake it. <S> To a large extent, you need to be able to trust your component suppliers, and focus on testing only for the types of errors that occur during the assembly process, including wrong/missing components, poor soldering, etc. <S> But if you are using unreliable suppliers, then you must invest more resources in incoming inspection and test. <S> If you find that your failure rate is low enough, you might consider only spot-checking subsequent batches, or dropping the functional test altogether, but you need to balance that against the risk and cost of the resulting customer returns and dissatisfaction. <A> That is, no contact. <S> I've currently come across 2 cold-soldering and 3 no-contact issues. <S> Looks to me like you have got a hole in your final testing. <S> (I assume you have some sort of test jig.) <S> I can imagine that waiting for a GPS lock requires too much test time, but especially with the detected errors you should add a resistance/contact/SWR test on the antenna. <A> There are a few methods to accelerate GPS module assembly testing: <S> Use AGPS to acquire a faster fix.
<S> TTFF should take a few seconds, assuming you can load the AGPS data into the unit. <S> Read the GPS raw data, specifically the C/N0 (carrier-to-noise density). <S> Use a GPS repeater, or better yet, a GPS constellation simulator to get a known signal. <A> A faulty antenna connector will result in degraded signal strength. <S> This can vary anywhere from total loss to almost full signal strength. <S> In my experience you can get a GPS timestamp with much less signal strength than is needed for a position lock, as a single visible satellite is enough to get a (rough) timestamp. <S> So that kind of test would detect a 100% signal loss, but might not detect an 80% loss. <S> Even a full lock test would not detect a 20% loss, which could still mean you fail to meet the specifications you've promised to the customer. <A> If you see a valid GPZDA sentence (or any other sentence containing UTC), you can be sure that your module was able to decode some of the navigation message. <S> Different vendors may send the first ZDA at different points, some after reception of the first subframe 1 (containing the week-number field), some only after applying UTC offsets from the almanac. <S> Subframe 1 is sent every 30 seconds; subframe 4, page 18 (containing UTC offsets) is sent every 12.5 minutes. <S> Be aware: even an unhealthy module may be able to decode some of the message. <S> In general, testing with live space-segment signals will not give reproducible results. <S> Reasons are the constantly changing constellation and environmental influences (ionospheric conditions, humidity, etc). <S> If you want reproducible results, you should look into a single-channel GNSS simulator. <S> A simple test program would start with a strong signal for an initial lock, then fade the signal and record the RF level at loss-of-signal.
| Optimally you'd have a GPS simulator with known transmission strength, and check the received signal strength on each module. But there's no substitute for a full-up functional test at the end, at least on the initial batch. Visual inspection is one method that can be applied here, but there are also automated electrical tests that can be used to find a large majority of the potential problems.
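A minimal sketch of the "valid date/time" production check suggested in the question edit: verify the NMEA checksum and confirm the ZDA sentence actually carries a non-empty UTC field (field positions assume a standard GPZDA sentence):

```python
def nmea_checksum_ok(sentence: str) -> bool:
    """Verify the XOR checksum of an NMEA 0183 sentence like '$GPZDA,...*hh'."""
    if not sentence.startswith("$") or "*" not in sentence:
        return False
    body, _, given = sentence[1:].partition("*")
    calc = 0
    for ch in body:
        calc ^= ord(ch)  # checksum = XOR of all characters between '$' and '*'
    return calc == int(given[:2], 16)

def has_valid_utc(zda: str) -> bool:
    """True if a ZDA sentence passes its checksum and carries a non-empty
    UTC time field (field 1 in a standard GPZDA sentence)."""
    if not nmea_checksum_ok(zda):
        return False
    fields = zda.split("*")[0].split(",")
    return fields[0].endswith("ZDA") and len(fields) > 1 and fields[1] != ""
```

As the answers note, this only proves the module decoded some of the navigation message; it cannot catch a partially degraded antenna path, for which a C/N0 or simulator-based RF test is needed.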
|
Can I use two separate charge controllers (wind and solar) to charge my battery bank? I currently have a 400 watt solar panel array in series with an Epever 24 volt charge controller, connected to two twelve volt batteries in series (24 V) with a 24 volt split inverter. I'm planning to incorporate a 24 volt 400 watt wind turbine into this same battery bank and it comes with its own charge controller. Can I connect both charge controllers to the same battery bank? <Q> MPPT charge controllers (and lead acid battery chargers in general) basically act as current sources for the bulk of the charging process, so paralleling them shouldn't have ill effects. <S> Once the battery is nearly full, one might switch to constant voltage and later float charging before the other, but this shouldn't cause any issues. <S> Epever seems to agree with this. <S> Page 8 of the user manual: 2 Installation Instructions 2.1 General Installation Notes ... <S> Multiple same models of controllers can be installed in parallel on the same battery bank to achieve higher charging current. <S> Each controller must have its own solar module(s). <S> Another manufacturer even has a whole article about this: https://www.morningstarcorp.com/parallel-charging-using-multiple-controllers-separate-pv-arrays/ <S> While the wind turbine charge controller is clearly not of the same model, I see no reason why they wouldn't play nice together. <S> Epever probably added that distinction because they can't guarantee that their product works when in parallel with every device on the market. <A> There's no problem with the physics of adding two current sources together to charge a single battery. <S> The problem might come with the control. <S> Battery chargers tend to be programmed to a) charge the battery as quickly as possible b) not overcharge the battery. <S> It's easy for two parallel chargers to each limit the voltage to the battery. <S> They will have slightly different thresholds, but that really doesn't matter.
<S> The problem comes with current. <S> Perhaps the safest way to parallel two chargers is to allocate each charger 50% of your battery's maximum charge current (or 30/70, or some other fixed fraction). <S> Unfortunately, with a strong wind at night, or a calm sunny day, the maximum charge rate will be limited by your chargers at below the maximum possible for the battery. <S> Ideally you'd like either charger to charge up to the maximum rate of your battery, but limit the total current on a windy sunny day. <S> That means they have to know about each other, and play nice. <S> The feature needs to have been anticipated and included by the controllers' designers. <S> If they don't have digital control, you may be able to do something on the analogue side, but you'd need to know what you were doing. <A> Surely each charger will monitor the battery and see the other charger as a fully charged battery if their outputs are connected together. <S> If you used diode control to stop the one charger from monitoring the other, then they will not monitor the battery either, so surely they will both shut down in a fault condition?
| I don't see why not, as long as the batteries can handle the worst case charging conditions (both charge controllers delivering maximum current simultaneously). If they are only standalone chargers, but both have a digital port, and can report current and be controlled on the fly, then you can tack on an Arduino or PI, and control the limits yourself.
|
Boost operation in inverting buck-boost converter Regarding the inverting buck-boost shown in the figure, I understand how it works as a buck. Please explain how it can act as a boost. Because during the off time there is no connection with the input supply, I think it can't act as a boost. Then why is it named "buck-boost"? <Q> During the off state, the supply is disconnected entirely from the inductor. <S> The inductor's energy is delivered into the load, at a voltage determined entirely by the load. <S> Therefore the load voltage can be lower than the input (buck), or higher than the input (boost), it really doesn't matter. <S> The description buck or boost is not really relevant, as the same topology handles voltage increase or decrease. <S> Ultimately the load voltage is servoed by controlling the power into the load, which is the same (neglecting losses) as controlling the power into the inductor. <S> This is done by controlling the inductor pulse on time, pulse frequency, or both. <A> Because during the off time there is no connection with the input supply, I think it can't act as a boost. <S> I think you are concentrating too much on the traditional type of boost converter that has a residual connection to Vin and hence the output voltage is Vin plus the voltage delivered by the inductor energy into the load. <S> If you redefine what boost means, i.e. it can generate a higher output voltage than the input voltage, then you should have no problem. <S> After all, you can make a boost converter that generates +100 volts from a +5 volt supply, and the fact that 95 of the 100 volts come from the energy storage and conversion process tells you that you shouldn't get too worried about the input voltage propping up the output voltage by 5%. <A> The inverting voltage \$V = L\,dI/dt\$ in a repetitive mode is created by the pulsed ON time of the FET, or \$dI = V_c \Delta t_1 / L\$, so the current ramps up with pulse duration when ON.
<S> The switch turns OFF and the diode conducts with a forward voltage drop at that current, and the flyback of the inductor creates a negative voltage. <S> The capacitor voltage is charged from the current initially stored as inductor energy: \$½LI² = (V_c+V_f)·I·Δt_2\$. <S> Since this is a flyback-style converter, it does not have separate stages for buck or boost; rather this inverting voltage depends on an impedance ratio from charge to discharge. <S> To give you an idea of scale, a pulse into an inductor in my circuit shown, \$I = 5V·5µs/100µH = 250mA\$, can result in a discharge voltage pulse of -1 kV stored in the cap when there is no load. <S> For conversion from +5 V to roughly -5 V with ideal parts, a 56% on-time for the FET was used here with f = 100 kHz, L = 100 µH, C = 5 µF, R = 100 Ω. <S> Starting from -5 V, the additional boosting effects are created by: reducing L by a factor of 3 increases Vout to 1.5 × -5 V; increasing R by a factor of 2 increases Vout to 1.5 × -5 V; changing C does not affect Vout except for ripple; changing the diode affects Vout due to diode losses; reducing f by a factor of 2 increases Vout to 1.5 × -5 V; increasing the FET on-time by 50% doubles (Vout - Vin).
| During the on state, the power supply charges the inductor.
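The buck-vs-boost naming follows directly from the ideal continuous-conduction transfer function of this topology, Vout = -Vin·D/(1-D): below 50% duty the output magnitude is below Vin (buck), above 50% it exceeds Vin (boost):

```python
def inverting_buckboost_vout(v_in, duty):
    """Ideal continuous-conduction transfer function of the inverting
    buck-boost: Vout = -Vin * D / (1 - D)."""
    return -v_in * duty / (1.0 - duty)

v_buck = inverting_buckboost_vout(12.0, 0.25)   # -4 V:  |Vout| < Vin (buck)
v_boost = inverting_buckboost_vout(12.0, 0.75)  # -36 V: |Vout| > Vin (boost)
```

This is the idealised lossless case; real diode drops and discontinuous conduction shift the curve, as the last answer's simulations show.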
|
Operational amplifier bandwidth - why does the manufacturer say it's DC when it is not? I've been working with op amp datasheets lately and I noticed a strange thing with RF amplifier specifications. I will provide one example, but there are quite a few more of them from this manufacturer. HMC625BLP5E http://www.analog.com/media/en/technical-documentation/data-sheets/HMC625B.pdf So the description says it's "VARIABLE GAIN AMPLIFIER, DC - 5 GHz" But the application schematics show that a DC blocking capacitor is required at the input of the amplifier So my questions are: 1) Why would they say it's DC when it's effectively not? 2) What would happen if I did not put that capacitor on the input? <Q> This isn't an opamp (operational amplifier). <S> It isn't even a differential amplifier. <S> This is a radio frequency attenuator and amplifier on a chip. <S> Why would they say it's DC when it's effectively not? <S> Because the inputs of the amplifier and attenuator are DC coupled to the respective outputs. <S> It could theoretically be used for amplifying a signal with a DC component, although in most (probably all) applications you wouldn't want to. <S> The schematic you posted is just a hint, showing a typical application of the chip, it's not the only way to use it. <S> What would happen if I did not put that capacitor on the input? <S> It depends wholly on what you connect to the inputs and outputs. <S> To pass the DC component in the first place you'd have to DC couple AMPOUT too (as well as ATTIN and ATTOUT if you wish to use the attenuator at DC, probably those decoupling pins of the attenuator too). <S> Good luck doing that without messing up all the internal biasing of the amplifier, however. <S> I doubt that it could be done, nor that this is the right tool for the job. <S> It can't really do actual DC under practical terms, but even 1 MHz is functionally DC when compared to 6 GHz. <A> This is more or less a marketing thing.
<S> As you've noticed, the amplifier (and many, many others) needs DC blocks. <S> They are also biased over an inductor at the output. <S> Any DC signal would just be absorbed into the supply voltage. <S> The "DC-capable" feature only tells you that the amplifier itself is not the limiting factor for low frequencies. <S> Many of these RF amplifiers are basically just a 50 Ohm matched transistor in a common source/emitter circuit. <S> Therefore, the amp is DC capable. <S> Just its standard circuit is not. <S> Theoretically you could achieve amplification at a very low frequency. <S> You just have to use a very large inductor and huge capacitors for AC coupling. <S> Unfortunately, values become unrealistic at some point. <S> Imagine an inductor that blocks 2 Hz... <A> They say it is DC if the DC is within a narrow range. <S> For RF, a simple blocking capacitor solves this issue. <S> As an op-amp, though, with a wide-band response, some manufacturers use them for timer/counter front ends, usually with a 10:1 divider. <S> A servo loop keeps track of the DC offset by integration, so the input always has an average DC value that is correct. <S> Note that not all RF op-amps perform well at DC or even low-band AM radio. <S> But they give the spec anyway as a selling point to those who know how to get wide bandwidth and keep it stable. <S> Think about oscilloscopes with real-time GHz inputs that MUST use special RF op-amps that are DC stable - stable if you use an active servo loop.
| I agree that the manufacturers are misleading here.
|
Do PCBs have schematics? I have been studying electronics a bit at home in my spare time. I was wondering if manufacturers make schematics for PCBs? If I could get hold of something like that I feel that I could understand the flow of the board better. <Q> Yes, all PCBs have schematics. <S> The ones that are public include Open Source Hardware. <S> Very simple designs are often based around an example schematic provided in the data sheet of the primary IC. <S> You can usually obtain the data sheet from the IC manufacturer. <S> Bus Pirate AD603 Data sheet <S> However, as other answers point out, the majority of schematics for PCBs in commercial products will not be available to the public. <A> I'd say most of the PCBs do have a schematic somewhere, and it would greatly improve the understanding of the PCB and the functions of all the parts if it was available. <S> Especially giving all the intended sizes and ratings of the small parts (resistors, capacitors). <S> But it contains the essence of the circuit and quite often many years of experience designing circuits. <S> Most of the time, people don't want to disclose all that information to the public, especially in a commercial product. <S> The layout is another important part however - with multilayer PCBs it is also a bit of a hassle to reproduce that. <S> It was more common in the earlier days when a technician would actually repair the device instead of sending it back to the factory. <S> These service instructions sometimes contained a complete schematic, but sometimes only the parts which were supposed to be serviced. <S> Nowadays, some companies propagate Open Hardware, which is basically open-source hardware, but there a problem arises as not all the widely used formats are accessible to a wide variety of people. <S> What is the worth of a schematic published in Altium or Mentor file formats, which cost several thousand dollars (okay, some come with a viewer, but you cannot modify it)?
<S> And thinking about the projects I'm involved in - a schematic wouldn't help you one bit. <S> It basically consists of an MCU and an ASIC - both of which you will not know what they are doing. <S> The MCU contains some software (which you could get and try to reverse engineer) and the ASIC is doing something which you would have to analyze with a black-box approach, also quite difficult. <S> So even with the schematic you might not get much further in understanding a circuit. <A> There are a few exceptions to this, and some will release schematics to approved field repair facilities. <S> You can also try contacting them, but do not be surprised if they tell you no. <S> In many cases having the manufacturer's schematic will not help you much. <S> In today's heavily digital and integrated systems most of the functionality is buried in the programmed parts. <S> From many schematics you can't even tell what the thing does, other than perhaps a clue from the title block. <A> If you are interested in understanding the schematics of PCBs, reverse engineering is what you are looking for. <S> I bought a cheap webcam that has a built-in LED light, dismantled it, and ran a wire from the webcam to the PC. <S> The lens on what I have is adjustable, allowing me to reduce the focal length. <S> The resulting setup is a great "macroscope" to see where the traces are running. <S> EDIT: added one more picture as per pipe and Thomas Weller's advice
| Pretty much all PCBs are based on some form of schematic, even if it was just drawn on the back of a napkin. If you know the manufacturer, and they have support documents on-line, you can check their website for a schematic. Most manufacturers do not publish their schematics as they are proprietary designs.
|
How to determine which diode conducts in a diode-OR application Say I have two 12V power sources - 12V_A and 12V_B. What happens in the following scenarios: If 12V_A = 12V_B = 12V, do both diodes conduct? If 12V_A is slightly higher than 12V_B, does the top diode conduct? Or both? If 12V_A = 12V and 12V_B = 0, we know that the top diode conducts for sure, but is there also leakage through the bottom diode? <Q> You have to understand that diodes are resistors that change value depending on the voltage across them. <S> You should be familiar with the voltage-current relationship of a diode. <S> But if you take that graph and plot the slope versus the voltage, you get V/I, which is the resistance-vs-voltage graph; you are likely less familiar with it, and it looks something like this. <S> So, having that in mind, let's look at your cases. <S> In case 1, if they are matched diodes, they will both have the same voltage across them and will both conduct equally. <S> In reality they will differ, and whichever diode has the smaller threshold voltage will conduct a lot more. <S> What the bottom one does depends on your definition of slightly. <S> If the right side of the diode is at a lower voltage than the left it will still conduct, but at a higher resistance. <S> If the voltage is reversed it will leak a bit. <S> In case 3, obviously the top diode is conducting; the bottom diode will either present a large resistance and leak some current back to the left, or the voltage will exceed the reverse breakdown voltage and it will conduct back to the left and pull down the output.
<A> We can represent your two input voltage sources as: $$V_A = 12$$ $$V_B = 12 + V_{\text{offset}}$$ <S> We can then sweep over the value of the offset voltage: <S> simulate this circuit – Schematic created using CircuitLab <S> I think exploring this visually using a quick simulation is a good way to get an overall feel for what's going on, with the offset voltage on the x-axis and the currents through each diode on the y-axis: <S> There is a region where both diodes are conducting, and there are regions beyond that where one or the other is basically off. <S> In reality, the size of these regions will depend on the diodes and the current level. <S> Even two same-part-number diodes may not show the current intersection at exactly zero offset voltage due to manufacturing variances. <S> Your third question could be answered by setting $$V_{\text{offset}}=-12$$ but I think you can largely extrapolate that behavior from the graph above. <A> If 12V_A = 12V_B = 12.00 V, do both diodes conduct? <S> Partially yes. <S> They would share the current drawn if both diodes had equal Vdrop values, but that is rare. <S> Most likely one diode will dominate the current flow. <S> If 12V_A is slightly higher than 12V_B, does the top diode conduct? Or both? <S> Yes; even if only slightly higher, that diode will dominate the current flow. <S> If 12V_A = 12V and 12V_B = 0, we know that the top diode conducts for sure, but is there also leakage through the bottom diode? <S> Under the condition where the input (anode) is at ground potential, Schottky diodes have a higher leakage current than most. <S> If driven by a PNP collector then there is no leakage current. <S> Note that my answers assume a current drain of 1 mA or more. <S> At low uA levels the diodes may not 'turn ON' properly. <A> To first order, diodes conduct if the voltage across them exceeds the threshold voltage (somewhere around 0.4V for the Schottky diodes shown).
<S> That approximation breaks for some of your questions, <S> because the "threshold" isn't an on/off switch - it's a fast exponential rise, and it varies slightly from diode to diode. <S> So: 1) A = 12V, B = 12V, Output = (12 - 0.4 = 11.6V). Here, both diodes will conduct. <S> The voltages across them are equal but, because of part-to-part variation, one may still conduct significantly more than the other (or they may be close to equal). <S> 2) A > 12V, B = 12V. <S> Depending on what you mean by "slightly" higher, either A or B could conduct more if the "slightly higher" voltage is still within the part-to-part variation of the diodes. <S> If by "slightly higher" you mean, say, 0.1V higher, A will conduct much more than B. <S> If A is only 1mV higher than B, you're within the part-to-part variation and either diode could be conducting more. <S> 3) A = 12V, B = 0V, Out = 11.6V. <S> Yes, there is reverse leakage through B. <S> If your diode is, say, this one, you will get about 0.05uA of reverse leakage. <S> There will definitely not be any current in the forward direction through B. <A> RE: your 2nd & 3rd questions only. <S> (The 1st should be obvious: yes, but rarely perfectly equal due to mfg tolerances if mixed batches.) <S> Here is an example of 2 Schottky power diodes rated for 0.305V @ 1A, so they have much more leakage than small-signal diodes and even more than silicon diodes. <S> I made an artificial slightly smaller voltage of 0.1V between the diodes. <S> The Falstad-simulated scope traces here display the max/min values of each signal. <S> The power sources have 0 ohms. <S> What differences in current do you see for forward and reverse? <S> Does it make sense? <S> Have you researched the Early Effect yet? Or seen the ΔV/ΔI = Rs (or ESR) slope of diodes in datasheets? <S> Any questions?
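A quick numeric sketch of cases 1 and 2 above, solving the ideal Shockley equation for the common output node. The saturation current, thermal voltage, and 100 Ω load are made-up illustrative values, not a real datasheet part:

```python
import math

def diode_current(v, i_s=1e-9, vt=0.02585):
    # Shockley equation; exponent is clamped to avoid overflow while solving
    x = min(v / vt, 60.0)
    return i_s * (math.exp(x) - 1.0)

def diode_or_output(va, vb, r_load=100.0):
    # Bisection on the common cathode voltage: at equilibrium the sum of
    # the two diode currents equals the load current.
    lo, hi = 0.0, max(va, vb)
    for _ in range(100):
        vout = 0.5 * (lo + hi)
        i_in = diode_current(va - vout) + diode_current(vb - vout)
        if i_in > vout / r_load:
            lo = vout
        else:
            hi = vout
    return vout

v_eq = diode_or_output(12.0, 12.0)    # case 1: identical sources share equally
v_off = diode_or_output(12.1, 12.0)   # case 2: 100 mV offset
i_top = diode_current(12.1 - v_off)
i_bot = diode_current(12.0 - v_off)
```

With only 100 mV of offset the top diode already carries roughly fifty times the bottom diode's current, which is the "dominates the current flow" behavior the answers describe.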
| In case 2, the top diode will have the higher voltage across it and will conduct more than the bottom one.
|
Why is it necessary to know the state of a system? In a system, why is this necessary? Why can't we use a simple transfer function in the S or Z domain to get an input-output relation, since we already know our input and measure our outputs through sensors? <Q> For example: a system with a low-pass filter of a resistor and a capacitor will respond differently if the capacitor is fully charged or if the capacitor starts with zero voltage. <S> The system will respond differently depending on the initial conditions or state. <S> It isn't enough simply to know how a system will respond; you also need to know what state it is in now to determine its future behavior. <S> Another example of this is useful for analyzing the circuit. <S> If you zero out the state and only consider the input, this is called "zero-state" analysis. <S> Conversely, if you zero out the input and supply a state, it's called "zero-input" analysis. <S> Source: Dummies Find the response <A> Sometimes we have systems where a transfer function representation, called an external description, is insufficient to completely characterize it. <S> As an example consider the circuit below, taken for the most part from Lathi's Principles of Linear Systems and Signals. <S> simulate this circuit – Schematic created using CircuitLab <S> Our input is x(t) on the left, and let's take our output across the two resistors on the right. <S> Let's also put some initial charge across the capacitor, which results in currents due to it. <S> As the system is linear we can find the total response as the sum of the zero-input and zero-state responses. <S> To find the zero-input response we would set x(t) to zero and compute currents due to the capacitor; if you do this you will find it has no effect on the output y(t). <S> The zero-state response can be found by setting the initial capacitor charge to zero and doing the circuit analysis. <S> You will find that the problem reduces to a simple voltage divider.
<S> I have skipped the actual analysis, as the important point here is that no external measurement of the output and input can possibly tell you what the current through the capacitor is. <S> This is for the most part a toy problem, but it illustrates that the state of a system is not fully specified by its transfer function. <S> In some cases this could be very important, as something dangerous or unstable may be happening inside the system. <S> Additionally, transfer functions are limited to systems with a single input and a single output. <S> If you are interested, look up controllability and observability of systems. <S> This will give you some more detail about when it is important to have a state, or internal, description of a system. <A> 1) When you consider a state-feedback controller. 2) When linear system theory cannot be applied (nonlinear systems). <S> 3) When you don't have a sensor for the state you are interested in (estimators). 4) When you have to optimize a state-dependent objective function. <S> 5) Design of tracking applications. 6) Need to consider initial conditions. 7) Complex problems are easier to handle in the time domain... <A> Disclaimer: I'm not sure I understand your question, but I'll try conceptually explaining my interpretation of it: why do we measure the state of systems in a controls problem? <S> Controls is fundamentally about changing states, so how can you change something's state or behavior if you don't know how it currently behaves? <S> For example, you act as a controller when you watch TV. <S> You sit down to watch TV, but you see (eyes = sensor) that the TV (plant) is "OFF" (state). <S> You press (actuation or control input) the power button on your remote to switch the TV "ON" (change in state). <S> How could you have activated the TV without knowing its current state with your eyes?
<S> You can have the most complex transfer function that relates the depth of the remote control's buttons to the strength of the remote's IR beam to ... to the pixel intensity of each pixel, but all of that is useless if you don't know the current state of the buttons or TV! <S> Was the TV already ON or OFF? <S> Was the power button previously pressed? <S> If the TV is ON and you just pressed the power button, then you just turned OFF the TV! <S> Even the most advanced robust and optimal digital controller with Kalman filtering needs to sample (observe) a system every now and then with some sensors to get an idea of what's going on. <S> We assume many simplifications and ignore many complex processes when developing control algorithms, so we can rarely make an open-loop controller that can handle 100% of the requested tasks. <S> For example, let's say you had the exact transfer function for controlling the position of an RC car from a joystick's movements, even including the mechanical transients of the wheels. <S> You close your eyes (no sensors, no knowledge of the state), and you start giving different inputs. <S> Suddenly, a little kid comes in, picks up the car, and moves it somewhere else. <S> Is the car where your transfer function says it is if an external actor disturbed your perfect math? <S> You need some sort of feedback on the actual state.
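The RC example from the first answer can be made concrete with a small simulation: the same input and the same transfer function produce different outputs depending on the initial state. Component values (1 kΩ, 1 µF, output observed one time constant after a 5 V step) are chosen purely for illustration:

```python
def rc_step_response(v_in, v_c0, r=1e3, c=1e-6, dt=1e-6, t_end=1e-3):
    # Forward-Euler integration of dVc/dt = (Vin - Vc) / (RC);
    # v_c0 is the state: the initial capacitor voltage.
    vc = v_c0
    for _ in range(int(t_end / dt)):
        vc += dt * (v_in - vc) / (r * c)
    return vc

# Same 5 V input, different initial state:
out_discharged = rc_step_response(5.0, 0.0)  # cap starts empty
out_precharged = rc_step_response(5.0, 5.0)  # cap starts fully charged
```

The precharged case sits at 5 V immediately, while the discharged case has only reached about 63% of it after one time constant: the transfer function alone could not have told you which of the two you would observe.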
| If the system has memory, you need to know what state the memory is in.
|
How to calculate how long a supercap can provide power What's the formula to calculate how many seconds a supercapacitor can provide power when employing a buck/boost converter? Also, how different would that calculation be when using a pair of supercaps in series (e.g. 2x 2.7V @ 1F)? Example data: Supercap: 5.5V, 1F; Panasonic EEC-S5R5V105 http://www.mouser.com/ds/2/315/ABC0000C22-947554.pdf Buck/Boost (5V out): XL6009, 94% efficiency; are there other relevant specs? https://www.pollin.de/productdownloads/D351434D.PDF Load: 5V, 250mA (Raspberry Pi) (intended application: to provide a few seconds to save settings at power loss for an embedded RasPi) <Q> Hold-up time is T = \$\frac{C(V_s - V_f)}{I}\$, where I is the current, C is the capacitance, Vs is the initial voltage on the capacitor, and Vf is the final voltage on the capacitor (perhaps the minimum voltage at which the system will work). <S> That's for an ideal capacitor. <S> If the capacitor has significant internal resistance the voltage will drop an additional amount I*R, so the hold-up time will be reduced. <S> For a non-ideal capacitor, also adjust I to add the internal leakage current. <S> If you're trying to hold up an RPi long enough for an orderly shutdown, I think you're going to require a very large supercapacitor with low internal resistance, or a battery. <A> If your buck mode is most efficient you should go from higher voltage to lower. <S> If boost is most efficient, just the opposite. <S> Check the voltage inside the Pi. <S> I think internally it only needs 3V3. <S> At least 90% runs off 3V3 if I remember correctly. <S> Thus you have 5V capacitors which discharge. <S> You boost to 5V, which then inside the Pi gets converted back to 3V3: not ideal! <S> For a Linux shutdown you can ignore everything which is 5V anyway, as only the CPU and DRAM need to keep working, and that is all 3V3. <S> Oh! And the SD card, also 3V3. <S> Post edit. <S> Assuming the CPU etc.
<S> need 3.3V, add the internal Pi regulator voltage drop of ~200mV. <S> Use your 2 seconds of run time: then your external voltage can drop from 5V to 3.5V in two seconds. <S> Using @Spehro Pefhany's formula gives you ~0.33F without the need for a buck/boost converter. <S> I would take one a bit bigger, as we used a number of estimated values. <S> Be aware that when you switch the 5V supply on, those capacitors will need to charge and look almost like a short circuit for a while. <S> Your 5V supply might not like that. <S> You can work around that by adding an R plus parallel diode in series with the cap, but that gives an additional voltage drop which you have to compensate for. <A> Well, 2 coin caps in parallel should provide a) better output current and b) double the capacity for the Pi Zero. <S> I have used two 5.5V 4F caps in parallel, but that can be a bit of overkill, while it still remains quite compact. <S> Anyway, if you put a converter in there, space-wise you might be able to cram two (even smaller) caps together in a small space.
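With a boost converter in between, the load draws roughly constant power rather than constant current, so an energy-based estimate is more natural than the constant-current formula above. A sketch, assuming the converter keeps working down to a 2.5 V input (check the actual XL6009 minimum; this number is an assumption):

```python
def holdup_time_s(c_f, v_start, v_end, p_load_w, efficiency=0.94):
    # Usable stored energy between the two voltages, derated by the
    # converter efficiency, divided by the constant load power.
    # Ignores cap ESR and leakage (both shorten the real hold-up time).
    energy_j = 0.5 * c_f * (v_start ** 2 - v_end ** 2)
    return energy_j * efficiency / p_load_w

# 1 F discharging from 5.5 V to an assumed 2.5 V converter dropout,
# feeding the 5 V * 250 mA = 1.25 W Pi load at 94% efficiency:
t = holdup_time_s(1.0, 5.5, 2.5, 5.0 * 0.25)
```

This lands around nine seconds for the example part, which is why a single 1 F coin cap can plausibly cover a settings-save window of a few seconds, with margin for the effects the formula ignores.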
| I found an site calculating super capacitor discharge time that might be helpful: http://www.circuits.dk/calculator_capacitor_discharge.htm
|
What's the impact of a crystal's frequency stability, ESR and load capacitance on microcontrollers? I'm choosing a crystal for the MK20DX256VLH7, but I can't seem to find enough information in the MK20's datasheet for the oscillator. So, what I'm interested in knowing is: What is the impact of a crystal's frequency stability, ESR, and load capacitance on the performance of the MCU? Would it be acceptable to use 20 ppm, 150 Ohm, and 8 pF values? The other question on my mind is: by what margin can I deviate from these specs without affecting the performance of the MCU? Can I choose 18 or 15 ppm? Can I reduce the ESR down to 40 Ohms? Can I select a load capacitance of 4 pF? <Q> It would probably be better for you to choose a crystal oscillator rather than a crystal. <S> You just buy it and it works, and it is guaranteed to start and conform to stated specifications. <S> In the datasheet they recommend you refer to the crystal manufacturer for information. <S> The crystal manufacturer will likely be similarly unhelpful and point you back to the IC maker. <S> You should get the load capacitance correct. <S> Refer to the formulas for calculating it, which I won't repeat here; it's not simply the load capacitance stated for the crystal, it's double that minus whatever capacitance is built into the chip or exists parasitically. <S> If you exceed the maximum drive power you can cause early aging or even failure of the crystal. <S> This is more of a potential issue with smaller SMT crystals, which have a maximum drive power of 100uW or less, compared to mW for HC-49 style crystals. <S> Lower ESR is better. <S> Some crystal manufacturers recommend you measure typical drive power, which requires special FET probes. <S> You can determine an upper limit to the drive power from the ESR, but that may be too constraining.
<S> Low-frequency tuning-fork crystals (like the typical 32.768kHz crystal which your chip can use for one of the oscillators) usually require a series resistor to limit the drive power. <S> As to whether frequency drift and initial accuracy is a potential problem for you, only you know to what use you are putting the chip, so this is entirely for you to evaluate from a system point of view. <A> The crystal in combination with the circuit on the microcontroller forms a crystal oscillator. <S> The function of this circuit is to provide a clock for the microcontroller. <S> You could also use Spehro's suggestion and use an external crystal oscillator. <S> That combines a crystal and a circuit containing everything that's needed to make that clock signal. <S> It might be slightly cheaper to use a crystal instead of the crystal oscillator. <S> However, you should follow the recommendations of the microcontroller's datasheet regarding that crystal; there, mainly the frequency is important. <S> You should also follow the recommendations of the crystal manufacturer's datasheet; there, the load capacitance is important. <S> It is not difficult to get this "right", but get it wrong and it just won't work, and that will be a pain. <S> Also, a parameter like the 20ppm accuracy is often irrelevant, as crystals are by themselves already very accurate. <S> Also, the microcontroller itself doesn't care about accuracy; it would still work even if the clock were extremely inaccurate and varying over temperature and whatnot. <A> If you are concerned about time and frequency sensitivity and you don't know enough about XTAL OSC design, then you definitely want a crystal clock chip. <S> They are cheap, reliable and have a variety of choices on stability and tolerance. <S> I would choose a $1~2 (1k) TCXO chip. Choose from the desired ppm over temperature: 10, 5, 3, 2.5, 2, or 1 ppm.
<S> example <S> There may be volumes of books on how to make a 1 ppm TCXO by now, but circa the mid 90's we made one for about $1, using special jigs to instantly sort xtals and varicaps for tempco and C(V1/V2), using 25-cent crystals and a 3rd-order algorithm I created for an HC11-controlled 928 MHz Tx synth. <A> Many oscillators for MCUs are designed in a similar way. <S> So you might find more information on another datasheet, preferably from the same manufacturer. <S> You can also look for application reports concerning CPU clocks. <S> I found that even if the capacitors are omitted the MCU works fine. <S> Perhaps because there are enough stray capacitances around - these are usually in the pF range. <S> But my circuits are for teaching and I can simplify a lot of things with no penalty, to save time and components. <S> But if I design for the market then I try to follow the advice, just to be sure.
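The "double that minus whatever capacitance is built into the chip" rule from the first answer can be written out as a sketch. The 3 pF stray figure below is an assumption; estimate or measure it for your own board:

```python
def external_load_caps_pf(crystal_cl_pf, stray_pf):
    # The two equal caps C1 = C2 appear in series across the crystal,
    # loading it with C1/2 plus strays, so each cap is 2 * (CL - Cstray).
    c_each = 2.0 * (crystal_cl_pf - stray_pf)
    if c_each <= 0:
        raise ValueError("stated CL is below the board's stray capacitance")
    return c_each

# An 8 pF crystal with an assumed ~3 pF of pin/trace stray capacitance:
c = external_load_caps_pf(8.0, 3.0)
```

This also shows why a 4 pF crystal can be awkward: with 3 pF of strays you would only have room for 2 pF external caps, leaving almost no adjustment margin.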
| If you are using a crystal then you need to consider drive power, load capacitance, accuracy as well as other things.
|
Can power be distributed to USB devices from a central supply? I own some inexpensive Yi home cameras. I don't like having a cable run to a wall wart for each camera. Is it possible to run low-voltage wires in the walls to a central 5V power supply with sufficient wattage, and splice to the VCC and GND wires in USB cables? Or does power over USB require communication? <Q> Many require smart controllers for battery chargers, but not cams or mobo/mobile parts. <S> I prefer a dumb 6-port USB hub with 2.4A per port. <S> (60W max) cheap. <S> So some peripherals need smart data for the charger part, but not to operate from external power. <S> There is a wide range of options, or you can route suitable phone wires from your 5V ATX supply with a polyfuse to each plug. <A> According to the USB standard, more than a certain amount of power over USB does require communication, but in practice it may not be necessary. <S> Battery-powered devices like phones will often draw a maximum of 500mA from a USB port by default, but will require some kind of "communication" to draw more than that. <S> Often a very simple form of communication will suffice, such as Apple's voltage-divider scheme. <S> Devices that don't charge an internal battery can't easily vary the amount of current they draw. <S> These devices will often not use any form of communication, especially if there is no host present, and will simply draw as much current as they need. <S> I have a Yi camera myself and once took it apart, and would guess this is the case for these cameras. <S> So what you are planning will likely work. <S> Beware that very long cables will dissipate power in the cable and cause a voltage drop at the device end. <S> Depending on your particular device, this could cause problems, including random resets. <A> Power over VBUS doesn't require any communication with the USB protocol.
<S> However, optimal power for your cameras might require proper "charger signature" on D+/D- wires to inform your device about source power capability (which is a sort of communication too). <S> You should do some research on the charger/adapter that is supplied with your YI cameras, and provide the same D+/D- hook-up on your split cable harness. <S> Your camera needs 5 V at 1 A DC power, so the signature is likely simple, a DC-type. <S> I would guess that Chinese charger signature (D+ shorted with D-) is most likely, so you should connect green-white pairs in your cables, preferably individually.
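To gauge the cable-drop warning above, here is a rough sketch. The wire resistance is an assumption (0.084 Ω/m is roughly AWG24 telephone wire); substitute the gauge of your actual in-wall wiring:

```python
def cable_drop_v(length_m, current_a, ohm_per_m=0.084):
    # Round trip through both conductors: out on VCC, back on GND.
    return 2.0 * length_m * ohm_per_m * current_a

# A 10 m run feeding one camera drawing 1 A:
drop = cable_drop_v(10.0, 1.0)
remaining = 5.0 - drop
```

On this example run the camera only sees about 3.3 V, which is exactly the "random resets" regime; thicker wire, shorter runs, or a slightly higher central supply voltage are the usual fixes.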
| If no resistors are connected or D+ is not shorted with D-, then the split cable is good to go as it is, VBUS and Ground.
|
PWM Dimming 24V with 3.3V input I am trying to dim a 24V LED strip with a 3.3V PWM signal coming out of an ESP32. Currently I have this circuit: But there is one problem. It works for dimming up to ~40%. But when I turn the PWM off, the LED strip stays on at ~40% brightness. When removing the connection between U1 and Q1 the LED strip also stays on (~40%). U1 was necessary because otherwise the PWM signal at 1.0 duty cycle wasn't enough to fully turn the LED strip on. It barely turned on, actually. (The LED strip activates at around 18V.) I'm not the best engineer when it comes to circuits, so excuse me if I am making a dumb mistake. I would like to hear your thoughts and suggestions. <Q> The 74HCT595 has an enable input on pin 19. <S> It looks like you have left this pin floating (disconnected), meaning that as a TTL input it will float high and thus your outputs will be disabled. <S> You can either connect the pin directly to GND, or use a pull-down resistor (10k would be a suitable value). <A> Your schematic does not show any decoupling capacitors on the ESP32 or the buffer IC. <S> If these are not present, I suggest adding them. <S> As a minimum, a 100n capacitor for each device as close to their respective power pins as possible, plus a 10u-100u capacitor close to both for bulk decoupling. <S> The behavior you describe could be a result of spikes/noise on the power rails affecting the operation of either device. <A> I feel so ashamed at the moment. <S> It turned out that my FET was damaged and didn't function correctly. <S> Swapping it with a new one solved my problem... <S> Anyway, thank you so much for your ideas and suggestions, guys!
| You need to connect this pin to GND in order to enable the outputs.
|
Shifting a 2.7V digital signal to Arduino logic levels? I'm pretty much a beginner in electronics and I'm currently working on a personal project where I have a device that puts out a digital signal (square wave) going from 0V to 2.7V, and I need to read that signal with my Arduino. That, unfortunately, isn't enough for the Arduino Mega 2560, since the minimum voltage to turn the digital pin high is at least 3V. I've been doing some "research" and came across the MC14504B hex level shifter, which seemed like the perfect solution for my problem. However... I'm having some trouble interpreting the datasheet... TL;DR: I need to level-shift my 2.7V signal to at least 3V or more. This is the logic diagram of the level shifter: And these are the characteristics: I'm not exactly sure how to interpret these numbers. I plan to use the TTL-CMOS mode. From what I can tell, as long as the input is considered 1 (high), my voltage at the output will be ~5V if Vdd is 5V, which is perfect. Would a 3.3V Vdd be okay, since the Arduino needs at least 3V to turn a pin high? Now to my real question... I don't get the Vcc and Vin (Vol, Voh) part. From the table, we can see that if Vcc is 5V and Vdd is 10V, Vin will be a logical 0 if the voltage applied to the input is <= 0.8V; the same goes for Vcc of 5V and Vdd of 15V. Now, from what I can tell, the input will be considered high if at least 2V or more is applied to the input when Vcc = 5V and Vdd = 10V/15V, but both Voh and Vol change depending on Vdd? What does this mean for my use case? What if I use 5V for Vcc and Vdd both? What if I use 3.3V for Vcc and Vdd both? What if I use 3.3V for Vcc and 5V for Vdd, and vice versa? What happens in these scenarios? Could someone explain this in a very simple way please? I seem to be missing something here, as my interpretation doesn't make sense to me. Thank you! <Q> Arduino inputs must meet the specified logic levels for margin.
<S> VIL <= 0.3Vcc max, VIH >= 0.7Vcc min. <S> Thus the input square wave must be >= 0.4Vcc, and for 5V that is 0.4 * 5 = 2.0 Vpp, and you have 2.7V with 0.7V margin. <S> Alternatively you may AC-couple the input with an R bias to Vcc/2. <S> There are lots of level-shifter solutions for 2.7V to 5V. <S> Rev B. simulate this circuit – Schematic created using CircuitLab, assuming a noise-free supply and signal. <A> simulate this circuit – Schematic created using CircuitLab. Sensor output voltages above about 0.8 V switch Q1 on. <S> R2 pulls the Arduino input up to its supply rail when Q1 is off. <S> If you use an I/O pin input with a programmable pull-up resistor, you can omit R2. <S> Most small-signal NPN BJTs will do for Q1. <A> TL;DR: Use Vcc = 5V, Vdd = 5V, TTL-CMOS mode, and you should be fine. <S> "From what I can tell, as long as input is considered 1 (high) my voltage at output will be ~5V if Vdd is 5V, which is perfect. Would a 3.3V Vdd be okay since Arduino needs at least 3V to turn a pin high?" <S> Correct, you will get ~5V output if you use Vdd = 5V. <S> However, in TTL-CMOS mode, Vdd and Vcc must both be at least 5V (Figure 4 of the datasheet). <S> Since the input logic switchpoint is 1.5V for Vcc = Vdd = 5V, that will work totally fine with your 2.7V logic input. <S> "Now to my real question... I don't get the Vcc and Vin (Vol, Voh) part." <S> This datasheet lists its data in a pretty odd way, and it's not actually totally clear what it means. <S> My interpretation is that "VOL = 1.0VDC" means that when operating in this condition, the output voltage is guaranteed to be less than 1VDC. <S> Fortunately, I don't think it's really an issue for your application. <S> "input will be considered high if at least 2V or more is applied to the input when Vcc = 5V and Vdd = 10V/15V, but both the Voh and Vol change depending on the Vdd? <S> What does this mean for my use case?
<S> " <S> Yes, you are interpreting this correctly. <S> For your use case, ignore the "Voh and Vol" numbers in the "Input Voltage" section and instead pay more attention to the top-most section labelled "Output voltage", which just says that if you use Vdd = 5V you'll get ~5V output. <S> "What if I use 5V for Vcc and Vdd both? <S> What if I use 3.3V for Vcc and Vdd both? <S> What if I use 3.3V for Vcc and 5V for Vdd and vice-versa." <S> Again, see Figure 4. <S> In TTL-CMOS mode, you need to use 5V for Vcc and Vdd. <S> I would say using 5V for both is the correct solution for your application. <A> TTL input voltage levels are >= <S> 0.8V low and > <S> =2.0V high. <S> The MC14504B accepts these logic levels when in TTL mode with Vcc <S> = +5V. <S> Your signal levels are 0V and 2.7V, so it's all good. <S> The MC14504B has CMOS outputs which go from 0V to Vdd. <S> The Arduino works at 5V <S> so you should also set Vdd to +5V.
| So long as you don't mind a logic inversion, you can use something simple like a transistor and two resistors.
|
DC-DC boost converter voltage spike at power on, popping sound I'm using a 3.3V DC-DC boost converter ( https://www.torexsemi.com/file/xcl101/XCL101.pdf ) as the power source for my audio circuit. There is a popping sound (headphone output) that happens when the circuit is turned on. There is no audible noise after power up. I viewed the boost converter's output on my scope at power up, and there's an AC-coupled voltage spike of about 2.5V that quickly decays back to zero: I'm using 10 uF capacitors at the input and output of the boost converter, as shown in the datasheet: How can I eliminate this AC-coupled voltage transient that's creating the popping sound? Edit: I don't have any popping noise issue when using a bench power supply. It's only a problem using the boost converter. This is my audio circuit: I put the 1k resistor at the output in an attempt to eliminate the popping sound, as a discharge resistor. There is a large decoupling capacitance at the output because I needed a low cut off frequency. The op amps are power op amps that have the required current capability. I tried using a different 3.3V booster circuit, and did not hear the pop sound. The booster IC is different, and the circuit includes a diode and 68 uF output cap: Datasheet of 3.3V booster: https://cdn.sparkfun.com/datasheets/BreakoutBoards/NCP1402.pdf <Q> The voltage spike you're seeing is because you're switching the power on. <S> When you go from 0 volts to 3.3v in a couple of ms, that will appear as AC when your scope is set to AC. <S> The issue is not your power supply, the issue is your audio driver that doesn't compensate for startup pops. <S> Edit: <S> One way might be to follow this suggestion like so simulate this circuit – <S> Schematic created using CircuitLab R2 charges C2 up slowly, gradually turning on the mosfet over a couple of milliseconds, reducing the volume of the pop. 
<A> Looking at your power supply, Cl is only 10 uF, which allows a fast rise time of the output voltage. <S> To dampen that 'pulse' you can add a series resistor of 5 to 10 ohms at 3 to 5 watts, and another capacitor of 470 uF to ground, so you have a heavy-duty power filter. <S> It may not be wise to replace Cl with a high-value capacitor, as the converter may become unstable, which is why I isolated them with a resistor. <A> In this case you will not get away with just more capacitance here and there. <S> You need to specifically enable things after the voltage is stable. <S> The most straightforward option is to disconnect the speaker until everything stabilizes and then connect it. <S> You can do it with a MOSFET or an SSR. <S> Maybe using an amplifier with an "enable" input could also help. <S> I recall that audio amplifiers usually suppress those pops by themselves. <A> @C_Elegans' answer is correct; this turn-on pop is caused by the coupling capacitors at the output charging up. <S> Here is another way to eliminate turn-on pop. <S> simulate this circuit – Schematic created using CircuitLab <S> When the power is turned on, the voltage across C1 is 0 (it is discharged by R3 when the power is off), so current is flowing through D1 and the opamp will output 0. <S> As C1 charges through R5 and R6, the opamp will assume its steady-state output of +V/2. <S> Choose C1R5 to prevent a 'pop' from being heard (i.e. much more than 8ms).
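Sizing the ramp in any of these slow-start fixes comes down to an RC time constant. A sketch with assumed values (the 100k/1 uF pair is not taken from the schematics above, just an illustration):

```python
import math

def time_to_fraction_s(r_ohm, c_f, fraction=0.95):
    # Time for an RC-charged node to reach a given fraction of its
    # final voltage: t = -RC * ln(1 - fraction).
    return -r_ohm * c_f * math.log(1.0 - fraction)

# 100k charging 1 uF gives a tau of 100 ms, so the node reaches 95%
# in roughly 300 ms - far longer than the ~8 ms audibility figure.
t = time_to_fraction_s(100e3, 1e-6)
```

For the MOSFET-gate ramp or the C1/R5 network above, pick R and C so this time lands well past the few-millisecond range where a step would be heard as a click.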
| You'll need to select an op amp that can handle the amount of power that you're putting into your speaker.
|
Vector table relocation in bootloader application I've written a bootloader application for an NXP Kinetis microcontroller. These are the things I did: 1. Created a bootloader application in C, at Flash addresses 0x0000 to 0x8000. 2. Created my main application code at addresses 0x8000 to 0x1FFFF. This code is working fine. Now my doubt is: I have ISRs placed in both the bootloader and the main application code, and didn't use any ISR vector relocation. Is it necessary to relocate the vector tables in the main application? PS: I may not be facing the issue just because the ISRs in both apps are the same. <Q> First you modify the linker script to "move the flash" a little bit higher (e.g. 0x8000 <S> in your case); this also means that the new vector table starts at 0x8000, unless you remove it altogether and place it in RAM - but that is a slightly different topic (remember the 256-byte alignment requirement for the NVIC in that case). <S> I usually use a piece of code like this to jump to the application: #define APPLICATION_BASE 0x8000 <S> #define avuint32(avar) (*((volatile uint32_t *) (avar))) <S> typedef void (*function_void_t)(void); <S> __attribute__((noreturn)) static void start_application(void){ SCB->VTOR = APPLICATION_BASE; /* move interrupt vector base */ function_void_t application = (function_void_t)(avuint32(APPLICATION_BASE + 4)); __DSB(); application(); __builtin_unreachable(); } The "jump" is offset by 4 because the first word in the blob is the initial stack pointer and the second word is the reset vector of the application. <S> This code will make the application use the stack of the bootloader, so you may have to tweak it a little (some bytes of the stack will be lost because they were used by the bootloader).
<A> You do not need to relocate the vector table--the toolchain should correctly locate your vector table in your binary--but you do need to tell the MCU where the correct vector table is located, as well as where the stack pointer should start, when you jump from bootloader to application. <S> By default, on ARM cores, the vector table is found at the very beginning of your application binary. <S> The first entry will be the initial stack pointer, and the second entry will be the reset vector (the application entry point). <S> The rest of the entries are defined by the specific ARM architecture as well as the specific implementation. <S> At startup or a hardware reset, the hardware will initialize the Vector Table Offset Register to 0x00000000, set the stack pointer to the first value in the vector table, and then jump to the location given in the second entry in the table. <S> But when you jump from a bootloader to the application, you have to do each of those things yourself. <A> The last step your bootloader must do before passing control to your application's start address is to initialize the LR register and then point the MCU's vector table to the application's vector table.
| Yes - you have to relocate your vector table, otherwise the interrupts will fire handlers of the bootloader, which will almost certainly lead to a crash.
|
Using capacitors for radio communication I am using an NRF24L01+ 2.4GHz radio transmitter to talk between Arduinos. I was having issues with them frequently cutting out for short periods of time. I noticed that when I added 100uF capacitors to the power pins of the NRF, I had almost no packet drops. I added 100uF tantalum capacitors to my circuit board design, but when I plugged in the NRFs, I was still getting the same issue as before (albeit slightly better). When I soldered the capacitor onto the leads manually, I was using an aluminum electrolytic capacitor. The NRF is connected to its own dedicated 3.3V LDO regulator that can supply 500mA of current, so power is not an issue. I heard that these NRFs are incredibly sensitive to voltage noise, so adding capacitors is good for them. But my suspicion right now is that for this case, aluminum electrolytic capacitors are better for this purpose than tantalum (and cheaper). Am I right in this assumption? Also, if I am trying to get the smoothest voltage to the NRF, what is the best capacitor setup? Should I do one big 100 uF aluminum electrolytic, or should I do a 100uF aluminum and a 0.1 uF ceramic capacitor in parallel? <Q> Do you realize a general purpose 100uF can have an ESR of 2 Ohms? <S> while a LOW ESR 100uF will be < 0.1 Ohm. <S> It is always important to know (by testing) <S> the sensitivity to supply ripple for any RF radio when you consider the sensitivity threshold is in the uV range. <S> Even if the Rx current is only 10mA and the LDO output impedance is only 0.1 Ohm at some high ripple frequency, you need to understand that load regulation is frequency sensitive, as loop gain in the LDO drops with rising f. <S> The parallel cap ESR must be very low for this RC attenuation, or better, use LC decoupling to get a 2nd order effect. <S> What I would do is inject noise with a sine wave FM sweep and find the threshold for loss in Rx sensitivity at the minimum RF level.
<S> This can be done by using a voltage FM sweep gen with a resistor divider and measuring the energy with a spectrum analyzer AC coupled into 50 Ohms. <S> Then you can measure the Rx load current spectrum during Tx data using a 1 Ohm ground shunt resistor and AC couple into the 50 Ohm spectrum analyzer. <S> Once you know the ESR of the present system, the ripple current and the Rx noise sensitivity, then with a choice of low ESR caps and optional low ESR series L you can design the transfer function of your power LPF to get no change in the x uV <S> Rx sensitivity threshold, and thus no dropout and no loss in range or rise in BER. <S> This is a routine operation for any RF designer, unless they know from experience how to design the right filter and get it right the first time. <S> RF beads will help with induced RF noise, as will paying attention to all other sources of interference or BER degradation from crosstalk. <S> Once you know this ripple sensitivity threshold vs f, <S> you can design your LDO LPF filter to limit load ripple and verify it. <S> Getting the right tools and understanding 1/Zo load regulation error with the ESR and SRF of filter parts vs f helps save time in debugging radio issues, so you can look at other causes like group delay error, antenna mismatch, PLL performance, xtal error etc. <A> For valid comparisons, the two types of capacitors must connect to the exact same point in the circuit. <S> Also, the ringing frequency depends on inductance; 4" of wire in air, not over a plane of any type, is 100mm long and thus approximately 100 nanohenries of inductance.
<S> That much inductance, with a 100uF capacitor, resonates at 1/sqrt(L * C) in radians/second: 1/sqrt(100nH * 100uF) = 1/sqrt(1e-7 * 1e-4) = 1/sqrt(1e-11) = 316,000 rad/sec ~~ 50,000 Hertz ringing. <S> The optimum damping resistor (Q ~ 1) is sqrt(L / C) = sqrt(1e-7 / 1e-4) = sqrt(0.001) = 31.6 milliOhms, for a wire length of 4". <A> Your problem with the NRF24L01 modules is not "smooth" power, and it isn't noise. <S> Your problem is that you are most likely using a plugin connector to attach the module to your own board. <S> Your description is pretty clear: A capacitor on your board helps, but not enough. <S> There's a couple of things going on: <S> The plugin connectors have a little resistance. <S> When the module starts transmitting, the voltage on the module drops (though the voltage on your board stays stable.) <S> The voltage on the module drops because of the resistance of the connector. <S> The connector acts like an inductor, and prevents current from flowing into the module "fast enough" when the transmitter kicks in. <S> As you've noticed, putting a capacitor right on the NRF24L01 module helps. <S> The problem occurs on the NRF24L01, after the connector. <S> Fixing it on your board won't work. <S> You have a couple of ways to fix it: <S> Remove the connector. <S> Solder the NRF24L01 to your board using a pin header of the proper length and spacing. <S> Use a low equivalent series resistance (ESR) capacitor on your board (right close to the module power pins), <S> and that should fix it. <S> Tantalum is (relatively) low ESR, but there are aluminum electrolytics that are lower still. <S> Install a big capacitor with low ESR directly on the NRF24L01 module as you have been doing. <S> You probably ought to stick with aluminum electrolytics. <S> Appropriate aluminum models are better for your use than tantalum capacitors.
<S> In any case, it is a good idea to include a 100nF ceramic in parallel with the larger electrolytic. <S> Electrolytic capacitors can have a fairly large inductance, which can slow them down on sudden load changes. <S> Ceramic capacitors have very low resistance and very low inductance, so they can deliver that sudden burst when the transmitter kicks in. <S> To get that fast reaction, you need a small ceramic - it can deliver a fast, sharp burst. <S> A small ceramic has a small capacitance, though, so it can't provide current for very long. <S> So, use a small ceramic for the fast stuff together with a larger (low ESR) electrolytic.
| A capacitor directly on the power pins of the NRF24L01 fixes the problems almost perfectly. Tantalums also have a failure mode ("vent with flames") that makes some people reluctant to use them.
|
What drives the advancement towards ever faster cellular network speeds? I've always accepted that technology advances. Being born in the 90s, everything just becomes faster, smaller, cheaper and generally better if you wait a few years. This was most obvious with consumer electronics such as TVs, PCs and cellphones. However, it occurs to me now that I know what drives most of this changes, except for one. Computers and cellphones get better and faster mainly because we are able to build smaller and more efficient transistors (I hear about twice the transistor count per unit of silicon area every two years). The Internet got faster first with DSL which pushed the bandwidth of landline copper twisted pair to its maximum. When we ran out of usable spectrum inside the copper wire we turned to optic fiber, and it was a whole new game. TL;DR: But, what is it that makes it possible for cellular networks to keep getting faster? I've had 2G, 3G and now LTE cellphones and the speed differences are astronomical, akin to the differences observed in household internet in the last decade. Yet, LTE channels don't necessarily have a bigger bandwidth (in fact, I believe LTE uses less: 3G uses 5 MHz channels , whereas LTE can have smaller channels, from 1.4 to 20 MHz ). Moreover, I've heard many times that LTE is more efficient in terms of bps per channel Hz (I would add 'citation needed' here, I'll be the first to admit that it at least sounds dubious). So what is it? Just more spectrum? Better and smaller electronics? Or are we getting better at this in other ways? How so? <Q> what is it that makes it possible for cellular networks to keep getting faster <S> Basically, good old Moore's law. <S> The handset is only half the equation. <S> More modern and powerful silicon does help in getting better channel quality, less noise, etc. <S> However this can't go above the channel bandwidth as per Mr. Shannon. 
<S> A simple way to boost the bandwidth available to each user is therefore to slice the landscape into smaller cells. <S> Directional antennas on top of towers slice the "round" cell into quarters, like an orange. <S> Installing lots of micro/picocells everywhere in densely populated areas means each base station only handles a smaller number of users. <S> Fewer users per cell means more bandwidth per user. <S> This is enabled by reducing the price of base station hardware (i.e., cheap silicon, Moore's Law, and MMICs <S> which integrate the RF bits on-chip). <S> A smarter system also helps. <S> For example, in GSM, even when you don't talk, your bandwidth time slot is reserved for you, which is wasteful. <S> An important thing is also the availability of these at a reasonable price: big FPGAs with truly insane computation power, fast ADCs/DACs, and microwave ICs. <S> These enable digital radio, and this is where the juicy bits are, like MIMO and adaptive antenna arrays with real-time beamforming and channel equalization, advanced (and adaptive) modulations, plus strong error-correction codes which require lots of computing power, etc. <A> I think the following are some of the key technologies/techniques driving up cellular data rates. <S> Move to higher carrier frequencies where wider bandwidths are available. <S> Soon we will have millimeter wave technology being used in cellular. <S> Multiple Input Multiple Output (MIMO) antenna systems allowing parallel transmission of data streams. <S> Advanced modulation schemes such as OFDM and QAM. <S> Shrinking cell sizes. <S> Now we have the same frequency divided among a smaller number of users. <A> Assuming the same bandwidth, the only way to boost data rates is better coding: <S> QAM versus GSM's MSK, 16QAM versus QAM, 256QAM versus 16QAM. <S> And in all this, multipathing and fading must be handled. <S> With more bits per Hertz, the signal-to-noise ratio (SNR) needs to improve, though coding provides a one-time 5 or 10 dB assist here.
<S> To improve SNR, the link needs more ERP (focused TX antennas), higher-gain receiver antennas (more elements, phased arrays, etc., giving more area to gather more energy) and shorter paths to reduce path loss. <A> Or are we getting better at this in other ways? <S> How so? <S> There will possibly come a day when our handsets (or the system) will be able to store the mathematical nuances of our individual voices and manipulate them to form other words algorithmically. <S> Then all that needs to be transmitted in a voice call is "text" and the receiving phone can reconstruct our voices and sound like the actual person. <S> So to say "have a nice day" would take 15 ASCII characters or 120 bits for two seconds of speech. <A> Another critical advancement that hasn't been mentioned is improved utilization of optical fibre networks. <S> An optical fiber can carry an entire spectrum of wavelengths. <S> They haven't always done so, however. <S> Optical filters of increasing precision now allow dozens (or more) "channels" to be crammed into single fibers where previously they would have only been using two. <S> This lets existing infrastructure (fiber in the ground) carry increasing amounts of data with only the need to upgrade the endpoint equipment. <S> This is similar, in some ways, to how the POTS copper went from 2400 bps to 50 Mbps in the span of a few decades. <A> Not only are designers still coming up with better algorithms to do dynamic audio compression, dynamic channel coding (i.e. getting closer to Shannon's limit), and dynamic adaptation to multipath, clutter, and interferers; but as transistors get smaller, we can use more elaborate algorithms for the same amount of battery energy.
| Cellular networks basically sit on top of fiber backbones, so better and faster fiber is a critical part of broader, faster cellular. Stronger forward error correction codes not requiring re-transmissions and bringing us ever closer to Shannon Capacity.
|
Does the continuous power-to-weight ratio of electric motors decrease with size? Background: According to the DEP overview section of this NASA paper, aviation companies are interested in distributed electric propulsion (DEP) because the scale-agnostic power-to-weight ratio of electric motors enables aerodynamic advantages from distributed propulsion. However, I'm struggling to make sense of that claim in the context of commercially available electric motors. Small air-cooled hobby motors supposedly have continuous power/weight ratios of >7 kW/kg while Siemens' aerospace-optimized AC motor has a "record-breaking" 5 kW/kg ratio with liquid cooling. What gives?! Question: Does the power-to-weight ratio of electric motors not change with size, or does power/weight vs size just not change as quickly compared to combustion engines? EDITS/Understanding thus far: According to Neil_UK, Brian Drummond, and Charles Cowie, a motor's ability to dissipate heat is proportional to surface area (\$Q \sim DL\$) while its peak power is proportional to volume (\$P_{max} = T_{max}\omega \sim (D^2L)\omega\$). Assuming weight is linearly proportional to volume (\$W \sim D^2L\$), then the continuous power/weight ratio of electric motors actually decreases with size because \$P_{cont}/W \sim DL/(D^2L) \sim 1/D\$. Correct? <Q> Small hobby propeller motors have a high power to weight ratio because small motors and small propellers can easily operate at high RPMs, in the 10,000 RPM order of magnitude. <S> The cited paper is about motors in the area of 2000 to 3000 RPM. <S> Motor weight and volume are somewhat proportional to torque and not so much related to power. <S> The higher the speed, the higher the power for the same size motor. <S> The same thing is generally true for heat engines also. <S> See if the cited paper makes sense with that in mind. <S> I still don't understand the connection of these 2 arguments WRT the NASA paper. <S> The paper is lengthy and complex.
<S> It cites several references to other papers. <S> This forum is intended to deal only with relatively narrow questions. <S> I only scanned the paper briefly. <S> If heat engines also follow this trend, then why are people pursuing DEP all of a sudden? <S> Serious large electric aircraft design is a new field. <S> We should expect to see a lot of approaches explored. <S> There is significant history in determining the optimum number of engines for aircraft, as illustrated below. <S> ... <S> the continuous power/weight ratio of electric motors actually decreases with size... <S> Since power = torque X rotational speed, a power/weight ratio is only meaningful if either torque or speed is constant for a given comparison. <S> In all aspects of this question, the balance of system (BOS) is an important factor. <S> The BOS includes the control and monitoring system, fuel storage and delivery system, lubrication system, cooling system, structural support and enclosure system, and perhaps others. <S> Some parts of these may be integral to the motor. <A> Continuous power output is usually limited by cooling, the ability to get rid of the waste heat mainly produced by \$I^2R\$ losses in the coils. <S> In a conventional un-ventilated motor, the peak power output and the weight vary as the volume, a dimension cubed; however, the ability of the motor to get rid of heat varies as the surface area, a dimension squared. <S> If you allowed this trend to continue as motors were scaled up in size, then the continuous weight to power ratio (note the way up that ratio is) would vary as a dimension. <S> This is why large motors are invariably ventilated, or even water-cooled, to reduce the dependence on surface area for cooling. <S> Adding water cooling to a motor adds weight that isn't itself generating power. <S> There is another more subtle effect which exacerbates the scaling problem. <S> Thin sections of material are not as stiff as thick sections.
<S> Stiffness in a motor is needed not simply for mechanical strength, but also to push mechanical resonances up in frequency to well above the motor operating speed. <S> This means wires and supporting members will tend to be thicker in larger motors for mechanical reasons than they would be for thermal reasons. <S> This means the continuous power output scales at less than dimension cubed, even with active cooling. <A> Hobby motors are normally run from battery packs that last a few minutes, so they aren't often given honest continuous power ratings. <S> Note that the "continuous" rating under "specifications" in your linked example is qualified by (180s). <S> That is, they admit it can generate that rated power for 3 minutes max. <S> It's only starting to warm up at that point; a real continuous power rating will be lower, to avoid burning out the motor. <S> Both points in the other answers are good too, though. <S> Neil is absolutely correct that smaller motors are more easily air cooled thanks to the increased surface-area-to-volume ratio. <S> Charles is correct to point out that increased speed increases power for "free" in an electric motor (from the point of view of efficiency and wasted heat; up to the practical limits of bearing speeds and material strength) while increased torque costs efficiency, through increased current and thus heat loss in the winding resistance. <S> So running smaller motors fast increases efficiency. <S> (However, large slow propellers produce thrust more efficiently, so expect to see fast motors geared down).
| A motor of a given size will provide the same torque over wide range of speeds.
|
Hardware debouncing of key matrix with minimum passive components I have recently bought this cheap 4x4 keypad matrix. It only has the push-buttons and nothing else, therefore I want to add proper debouncing to it. Debouncing in software is something I want to avoid because it takes some processing power out of my application. I know debouncing can be done using a resistor and capacitor (am I correct?) or a Schmitt trigger (which requires another IC I guess and is out of the question for me), so the question is, will I need to add an R/C for each key, or can I get away with only one R/C pair per row or per column? Any suggestions? <Q> Let's be clear about this. <S> If you have a keypad matrix you are already using processing power to apply sequential logic voltages to the rows or columns, then reading the columns or rows back in order to determine the button pressed. <S> So, each time you get a "result", i.e. you detect that a button has been pressed, you mark that event as "pending" and some time later (10 to 20 ms) you check again to see if the button press you marked as "pending" can be judged to be "actual". <S> How much more processing time this needs is very little in the bigger scheme, in my opinion, and if you are so close to the limit at which your CPU can operate then get a bigger/faster CPU or increase the clock speed. <S> Using Rs and Cs can work but, in all cases, it will produce a "slow" output that would need to be Schmitt-triggered to clean up the slow edge to a fast edge that is suitable for the logic that follows. <S> You might get away with it of course, but then you have a fixed solution with no flexibility. <S> Having said all the above, you might also need capacitors from each matrix line to ground to avoid ESD/EMC issues. <A> To get the best debounce performance, it would likely be best to have one R/C for each button. <S> However, you should still get decent results with one per row/column. <S> Just depends how critical it is really.
<S> If you want to do it with the minimum amount of components, why don't you try doing one per row/column first, then taking some measurements and seeing if the result is good enough for your application? <S> If the results aren't what you wanted, then go ahead and add some on each button, then try again. <A> As a long-time embedded software engineer, I have to say that your assumption that debouncing will take some processing power out of my application is simply incorrect. <S> This will never be true for any competently-written firmware. <S> Naturally, debouncing will require some processing. <S> However the processing is trivial, and for user input will be happening at such a low update rate as to be utterly negligible. <S> If you needed to debounce inputs with update rates in tens of kHz, perhaps the processing for debouncing would be significant, but a human pressing buttons does not need anything like that kind of resolution. <S> In your case, 100Hz sampling would be easily fast enough, and you could almost certainly drop it as low as 10Hz sampling without seriously affecting your user interaction. <S> If you're trying to do input processing in a main control loop running at tens of kHz, of course it'll suck processing power. <S> The correct solution is to write firmware which does not do it that way though, not to use a hardware solution to fix a software anti-pattern. <S> Appropriate use of timers and interrupt priorities will give you what you need. <S> You can optimise the processing by making sure the read-back is all on one <S> I/O port. <S> Assuming that you're setting levels on columns and reading back the rows, then you bit-AND, bit-shift and bit-OR to build up a 16-bit value for the 16 pins. <S> XOR this with the previous 16-bit value, and if this is non-zero then something changed. <S> You do need to check only one button is pressed, of course. 
<S> If you've got an ARM processor, the ARM has an instruction to report how many bits are set, which is ideal for this. <S> Just mentioning for a further optimisation. <A> This will prevent you needing CPU power, but costs some extra wiring/PCB space. <A> True debouncing involves the addition of hysteresis. <S> You can't do that with passive components, e.g. resistors and capacitors. <S> That's where either a software or active component (latch) solution comes in.
| A simple debounce algorithm is just to set a counter to a value if the pins change state, pick a state if the pins kept their state and the counter is zero, and decrement if it's not zero. Afaik, there are ICs that can debounce 'automatically', e.g. MC14490, called a Switch Debouncer.
|
Can you fully charge a super capacitor at a lower voltage (than its max rating) Is it possible to fully charge a super capacitor at a voltage lower than its listed max rating? Example: could I charge a 5.5V or 6V super capacitor using only 5V. In case it matters, two example super capacitors that have a higher than 5V maximum rating: VEC6R0 255QG; 6V, 2.5F http://www.farnell.com/datasheets/2149089.pdf DDL105S05F1JRR; 5.5V, 1F http://www.farnell.com/datasheets/2368882.pdf My gut feeling is that it's not, but I have not been able to verify this as of yet. It's probably too obvious and thus not stated. <Q> @Curd is right. <S> "Fully charged" bears no meaning. <S> Often, it just means that the capacitor voltage reached (or came very close to) the supply voltage. <S> So, in this case, yes, you can fully charge it, at whatever voltage you want. <S> Now, if you want it to mean "store as much energy as it possibly could without exploding", then, no, and you'd better not even attempt to fully charge it. <S> In fact, you'd better ensure there is a small margin between the supply voltage and the capacitor rating. <S> Using 6V/5.5V rated capacitors with a 5V supply seems reasonable (unless the 5V supply has a 20% tolerance, in which case you should use even higher ratings). <A> It depends on what you mean by "full charge". <S> If you mean charge it up to its rating, then no; if you use a source voltage that's less than the rating, by definition you won't get a "full charge". <S> You could use a DC-DC converter to increase the available source voltage to the rated voltage of the capacitor in order to get a full charge in the first sense. <A> I have seen folks ask this about capacitors before and had arguments with people about it. <S> Some people seem to think that if the manufacturer rates its devices at 6V it must be safe to charge them to that level. <S> In practice you may get away with that most of the time.
<S> The manufacturer's procedures may provide some always positive tolerance on the part that will let it withstand a little more than the specified voltage. <S> But there are zero guarantees of that. <S> Moreover, you already have tolerances in whatever you are charging the thing with. <S> Your 6V may actually be 6.3V if your driver tolerance is 5%. <S> As such you are venturing into unknown territory. <S> Further, the values specified by the manufacturer are given at a specific temperature, usually 25C. That value will vary with how cold or hot the device is, and you need to leave sufficient head-room in your design to allow for that. <S> Maximum ratings should always be treated as just that. <S> It is a point where the supplier will no longer guarantee it will survive, but it is not a hard edge, it could fail anywhere around that point depending on tolerances and temperature. <S> If you have a bridge, and it is calculated it will not fail up to 10,000 Tonnes, you would be a fool to load it with 10,000 tonnes, especially on a really cold and windy day. <S> An elevator has a maximum occupancy of say 10 people, but it will, or should be, designed with a factor of safety just in case those 10 folks are portly. <S> The same goes for electronic parts, components should always be de-rated in your design calculations. <S> How much you de-rate depends on how accurately you can guarantee you can realistically drive them. <S> Even then, subtract a bit more... just in case.
| If you mean charge it up to the source voltage, then yes, you can get arbitrarily close to "full charge".
|
IC voltages and low power design, what to do? I’m working on a low-energy device consisting of a couple of ICs. What is confusing to me is the following: all ICs have a Vcc supply range of 1.7 to 3.6V, and most of the time the current reading is given at a 3.3V supply. Chasing the most efficient design, what is actually better: running at the bare minimum supply (assuming the internal LDOs have less power to dissipate) or running at 3.3V? I have tried to read whether there is reduced performance at lower voltage, but it is not apparent. The other question that then comes off the back of the previous one: if all ICs have a supply range of 1.7 to 3.6V and all IO pins are rated to 3.3V, can I run some ICs at 1.8V and others at 3.3V without affecting, let’s say, SPI or I2C communication? These are the ICs: MCU infocenter.nordicsemi.com/pdf/nRF52832_PS_v1.4.pdf Radio semtech.com/uploads/documents/sx1272.pdf GPS u-blox.com/en/product/sam-m8q-module. I found a new MPPT DC reg from ST, this one: st.com/content/st_com/en/products/power-management/… The idea is to run the MCU from the 1.7V LDO and only when battery charge is sufficient enable the 3.3V LDO that feeds the rest of the system. My concern would be the UART and SPI comms between the GPS and Radio IC... and the MCU. Any advice is appreciated. <Q> Ideally, it would be best to keep the majority of your circuit at the same voltage. <S> The reason a lot of datasheets will have the readings at 3.3V (for a 1.6 - 3.6V device) is because that is a commonly used voltage. <S> You will find a lot of datasheets will have multiple graphs showing the operation of devices at different voltages and currents. <S> Mostly you will find the voltages to be 3.3V, 5V, 9V, 12V and so on, as they are very common. <S> And if you want different supply voltages to different ICs, you will probably need to level-shift signals, so I wouldn't bother. <S> Stick with 3.3V, or if you are really concerned, you'll just have to perform your own tests at different supply voltages.
<A> If we knew what specific chip you were talking about, it could help. <S> If the ICs are rated for 1.7-3.6V, you can certainly run them at 1.7V with no issue, although be aware of possible reduced operating ranges as glen_geek points out. <S> An example of this would be page 346 of the Atmega88PB datasheet. <S> You can also see from table 34-4 that the scale-up of current vs speed/voltage is not linear. <S> 1MHz at 2V is max 0.5mA (1 mW/MHz), whereas 8MHz at 5V is max 9mA (5.6 mW/MHz). <S> This implies that, as long as you don't need a high speed for some other reason, it's better to use a low speed and voltage. <S> As MCG notes, level conversion is often necessary for bidirectional communication at different supply voltages, although this need not be power hungry, as there are devices available with quiescent currents in the microamps. <S> Edited to answer comment: The more complicated MCUs like the NRF52832 are a bit of a special case, because they have internal regulators, and internally only run at one voltage (in this case 1.3V). <S> That chip in particular has a DC/DC switching converter, which can convert at a high efficiency, so the actual input voltage is less important if this is selected (although lower is still better because the efficiency is not 100%). <S> If using the internal LDO, then it doesn't really matter if you convert externally or internally, EXCEPT that I'm not 100% sure what happens with the GPIOs in terms of power. <S> The GPS also includes an LDO, so you don't get any gains from LDOing beforehand. <S> This changes, however, if you use a switching converter. <S> This sort of shows a general principle, which is that advanced RF systems (i.e. not just an OOK SAW modulator) will only run at a specific voltage. <S> They will therefore tend to have an integrated regulator, which is either an LDO or a DC/DC converter. <S> If an internal LDO, you can get significant savings from externally using a DC/DC regulator.
<S> If it's an internal DC/DC, you can get some limited savings from using a higher efficiency DC/DC converter, but really not that much probably. <A> Generally the dynamic power consumption of a CMOS MCU will be roughly proportional to the square of the supply voltage, and proportional to the clock frequency, so you can potentially get big savings by reducing the power supply voltage (staying within specs to guarantee operation over temperature, of course). <S> If you have to lower the clock speed to stay within guaranteed operation parameters, the required calculations (clock cycles) will take longer (in microseconds). <S> This does not mean more power directly, because the current will further drop proportional to the clock frequency, but the cost of keeping the other peripherals alive rather than sleeping between bursts of calculations may make a significant difference. <S> There is typically also leakage, which can be significant, especially at higher temperatures, and often peripherals add significantly to the current (e.g. brown-out reset, analog peripherals and so on). <S> You may see linear savings in consumption with such parts of the micro.
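The numbers in these answers can be sanity-checked in a few lines. The two Atmega operating points are the ones quoted above; the 100 pF effective switched capacitance is an assumed illustration value, not a datasheet figure.

```python
# Sketch of the two rules of thumb above. The Atmega operating points are
# quoted from the answer; the 100 pF effective capacitance is an assumed
# illustration value, not from any datasheet.

def mw_per_mhz(v_supply, i_ma, f_mhz):
    """Energy efficiency of a clocked CMOS part, in mW per MHz."""
    return (v_supply * i_ma) / f_mhz

low = mw_per_mhz(2.0, 0.5, 1)   # 1 MHz at 2 V: 1.0 mW/MHz
high = mw_per_mhz(5.0, 9.0, 8)  # 8 MHz at 5 V: ~5.6 mW/MHz

def dynamic_power_w(c_eff_f, v_supply, f_hz):
    """CMOS dynamic-power rule of thumb: P ~ C_eff * V^2 * f."""
    return c_eff_f * v_supply**2 * f_hz

# Same clock, lower supply: (1.8/3.3)^2 ~ 0.30, i.e. ~70% less dynamic power
p_33 = dynamic_power_w(100e-12, 3.3, 8e6)
p_18 = dynamic_power_w(100e-12, 1.8, 8e6)
```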
| I would just stick with the 3.3 if I were you, as you can get a known performance, and some devices may not behave as well in the lower supply regions (depending on what load you are driving etc) From looking at IC datasheets from Atmel and TI, it generally looks like you get lower current consumption at lower voltages, at least for digital/switching circuits.
|
Actual electric potential at terminals of battery I understand that a 9V battery will produce an electrical potential difference/voltage of 9v across the positive and negative terminal, with the positive terminal having the higher potential by definition. I am aware that the actual electric potential at the positive terminal may not be 9V or 0V for the negative terminal but the difference in the electric potential is 9v. Wikipedia tells me that the electric potential of a point is the amount of energy needed to move a unit positive charge from a reference point, usually infinity, to that point. I do not understand how this relates to circuits. My question is really what is meant by the actual electric potential at the terminals of each battery and is it possible to measure? thanks <Q> Potential is always measured relative to some reference point. <S> This can be the earth, the moon, the car chassis, the negative or positive terminal of the power source or even to an AC signal. <S> For most practical applications we don't use infinity as a reference but rather something much more local. <S> simulate this circuit – <S> Schematic created using CircuitLab Figures 1 to 8. <S> Various measurement and reference schemes for a 9 V battery and voltmeter. <S> The voltmeter measures the battery voltage as there is a complete circuit. <S> In this case there is a circuit through local earth. <S> The voltmeter shows 9 V. <S> This represents a chassis connection. <S> Again 9 V. <S> In this case some arbitrary point in the circuit has been taken as ground. <S> In most battery powered equipment this will be the DC negative supply. <S> One recent question on this site showed an old transistor radio circuit with the battery positive as the "ground". <S> We have the option of connecting the circuit ground to earth. <S> This prevents floating of the device power supply and might be used for safety, to avoid audio hum, etc., depending on the application. 
<S> Without a reference for the voltmeter it will read 0 V. Note that a very sensitive digital meter will show random readings due to stray electric fields. <S> Putting a medium resistance (say 100k) across the meter terminals will cause the reading to collapse to zero. <S> Inverting the battery we now have a positive ground. <S> The voltmeter will read -9 V. Inverting the meter so that its positive lead is grounded will also result in a -9 V reading. <S> For most practical electronics you just need to work out the potential between points. <S> When debugging it is very often most convenient to attach the negative probe to the circuit ground and take all readings with reference to that point. <A> Wikipedia tells me that the electric potential of a point is the amount of energy needed to move a unit positive charge from a reference point, usually infinity, to that point. <S> I do not understand how this relates to circuits. <S> It doesn't relate to circuits. <S> It's a definition of potential, but not a practical one. <S> While each terminal of a 9 V battery does have a potential with respect to a reference point at infinity, it's not a stable or useful potential, as it takes so little charge to change it. <S> For instance, a 9 V battery may have a capacitance to infinity of a few pF. <S> Adding just a few nC of charge to it, as you might easily do by walking across a carpet with it, would change the potential by thousands of volts. <S> Connecting it to another body using a resistance of 100 Mohm (about the highest value of resistor whose accuracy won't be trashed by surface contamination if you touch it) would equalise the potentials with a time constant of milliseconds. <S> You touching it (body resistance 100k) would get it to the same potential as you with a time constant of microseconds. <S> It's not very useful though. <A> The unit of potential is the volt (a joule per coulomb); moving 1 coulomb of charge across 1 volt of electric potential requires 1 joule of work.
<S> From Q = CV, for a battery with a given mAh capacity, you can compute the equivalent capacitance C of any battery if it starts at Vi and is depleted at Vf, using E [joules] = 1/2·C·(Vi² - Vf²). <S> This should be approximately the same as the energy = power × time accumulated over each small increment of time as voltage and current change, with instantaneous power p(t) = V(t)·I(t). <S> In practice we use mAh × Vavg = mWh, then multiply by 3.6 to get joules (1 mWh = 3.6 J).
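The orders of magnitude in these answers are easy to reproduce. The 5 pF self-capacitance and the 1000 mAh / 4.2 V → 3.0 V cell below are assumed illustration values, not measurements.

```python
# Orders of magnitude behind the two answers above. The 5 pF
# self-capacitance and the 1000 mAh / 4.2 V -> 3.0 V cell are assumed
# illustration values, not measurements.

# Potential-to-infinity argument: a few nC on a few pF is kilovolts.
C_self = 5e-12               # ~"a few pF" capacitance to infinity
dV = 5e-9 / C_self           # 5 nC of tribocharge -> 1000 V shift
tau_res = 100e6 * C_self     # equalising through 100 Mohm: ~0.5 ms
tau_body = 100e3 * C_self    # through a ~100 kohm body: ~0.5 us

# Equivalent-capacitance estimate from Q = CV and E = 1/2*C*(Vi^2 - Vf^2):
def battery_energy_joules(mah, v_avg):
    """mAh * Vavg = mWh; 1 mWh = 3.6 J."""
    return mah * v_avg * 3.6

def equivalent_capacitance(mah, vi, vf):
    """Solve E = 1/2 * C * (Vi^2 - Vf^2) for C."""
    e = battery_energy_joules(mah, (vi + vf) / 2)
    return 2 * e / (vi**2 - vf**2)

C_equiv = equivalent_capacitance(1000, 4.2, 3.0)   # ~3000 F equivalent
```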
| It is possible to measure the potential to ground of an isolated battery with care, and a very, very high input impedance meter (often called an electrometer).
|
Why use JK flip flops when D flip flops are simpler? To the best of my knowledge, we can do everything that a JK flip flop can with a D flip flop. So what is the need for the JK flip flop, which has a more complex excitation table and more inputs? I'm sure there must be some applications where D flip flops won't do, otherwise the JK flip flop wouldn't have been invented. I would be grateful if you could let me know. <Q> The simplest answer is that D flip-flops are MORE complicated than JKs. <S> Logically, a D FF is a JK FF with an extra inverter between the J and K inputs, like so simulate this circuit – <S> Schematic created using CircuitLab while converting a D to a JK requires much more logic (which I am too lazy to draw out. <S> Trust me.) <S> so conceptually a JK is simpler by one gate. <S> Of course, this may not be reflected in actual transistor count, but I'm not an expert on IC design. <S> And while it's true that the two can be made equivalent with some extra gates, some logic functions require fewer external gates depending on the desired function. <S> It all depends. <S> If your underlying question is, "Why do they make us study JK flip-flops when D FFs are simpler?" <S> And finally, you have the historical sequence wrong. <S> JK flip flops were produced before D types due to their more general nature. <S> For instance, in the classic 7400 TTL series the first flip flop is the 7470, a gated JK. <S> The first D type is the 7474, although eventually that eclipsed other types. <A> The JK flip flop evolved from the SR latch to give a general-purpose synchronous element. <S> Understanding the evolution SR latch, JK flip flop, JK master-slave is valuable in teaching principles. <S> Its rich input choices were valuable back in the days when putting logic on the input involved wiring individual pins of gates in a TTL package together to make a logic function. <S> In some cases this cost could be avoided or simplified by making use of the JK truth table.
<S> Today creating logic functions has little or no cost using devices from EPLDs to ASICs. <S> So one simple logic element, the D flip flop, is all that is needed. <A> A very simple example: compare the logic gates required for a multi-bit synchronous counter made with ordinary D flip flops vs. J-K flip flops. <S> Ex-or gates are more complex than AND gates. <S> The FPGA chip I'm working with at the moment has D flip flops with a clock enable input, which are as good as J-K for this purpose.
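The equivalence argued in these answers can be checked with a small behavioural model. This is a sketch that ignores clocking and setup/hold, modelling only the next-state logic.

```python
# Truth-table sketch of the point above: a D flip-flop behaves like a JK
# with an inverter between J and K; the JK is more general (hold + toggle).
def jk_next(q, j, k):
    """Next state of a JK flip-flop on the clock edge."""
    if j and k:          # J=K=1: toggle
        return not q
    if j:                # J=1, K=0: set
        return True
    if k:                # J=0, K=1: reset
        return False
    return q             # J=K=0: hold

def d_next(q, d):
    """A D flip-flop modelled as a JK with K = not J."""
    return jk_next(q, d, not d)

# With the inverter in place, D simply passes through on every edge:
assert all(d_next(q, d) == bool(d) for q in (False, True) for d in (0, 1))
```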
| the answer is that JK FFs are more versatile, specifically because of the more complex logic table.
|
Finding number of pole pairs in a BLDC motor I have a 3-phase CD-ROM BLDC motor for which I don't have any datasheet. How can I find out the number of pole pairs in the motor? Hint: with a magnet, I could check that there are some N and S poles in the rotor inner layer, but how can I know the exact number of pole pairs? Thanks, Charles Cowie. But I have another BLDC motor which has 3 windings at 120 degrees apart. But its datasheet says it is a 4 pole pair motor. Can you explain how that is possible? <Q> I have completely revised my answer considering the information in the question and the comment: <S> I think it has more than 10 pole pairs but not sure exact number. <S> I know it because it takes more than 10 electrical commutation cycles to complete one mechanical revolution. <S> The photo seems to indicate this is a wye-connected motor with the neutral point brought out for external connection. <S> There appear to be three individual stator-winding conductors attached to three solder points at the bottom of the picture. <S> Just to the left of the bottom, the ends of the three conductors appear to be twisted together and soldered to a fourth solder point. <S> It is also obvious from the photo that this motor has salient-pole windings in the stator. <S> The rotor magnets could also be considered to be salient poles. <S> The apparent construction is then a doubly-salient permanent-magnet motor (DSPM motor). <S> A DSPM motor can have different numbers of poles on the stator and rotor. <S> The stator could have 12 poles with the phases distributed alternately among the poles. <S> It could also have 6 or 4 poles with 2 or 3 phases making up each phase. <S> If the stator can be disconnected from the driver, a small DC voltage could be applied between each phase and neutral to determine which coils are magnetized by each phase and which are north and south. <S> A diagram of the results can probably be used to determine the number of poles.
<S> With a DSPM motor, the number of poles in the stator does not have to match the number of poles in the rotor. <S> In that respect, a DSPM motor is similar to a stepping motor. <S> There may also be a similarity to some switched-reluctance motor designs. <S> Take note of Bruce Abbott's advice about the possibility of demagnetizing with a strong magnet. <S> You could just use a piece of steel, but you would have difficulty finding repulsive regions. <S> Perhaps a very small magnet that is not Neodymium would be ok. <A> With a magnet, I could check that there are some N and S poles in the rotor inner layer <S> Best not to use a magnet (a Neodymium magnet is strong enough to demagnetize a Ferrite magnet). <S> Just use a screwdriver or other object made from ferrous metal. <S> It will be attracted to each magnet pole in the rotor, so mark the first attraction point and move it around the circumference of the rotor while counting poles until you arrive back at the start. <S> I have another BLDC motor which has 3 windings at 120 degrees apart. <S> But its datasheet says it is a 4 pole pair motor. <S> Depending on magnet configuration and winding pattern, the number of stator arms or slots may be higher or lower than the number of magnet poles. <S> The chart below shows some example combinations (blue boxes are known good combinations, orange may work but were not tested). <S> You can see that a 3-slot motor may have 2 or 4 magnet poles. <A> First column is the rotor pole number (permanent magnet), second column is the phase assignment for a 3-pole stator, third column is for 6-pole, then 9-pole, 12-pole, and 15-pole.
<S> Rotor poles | 3-pole | 6-pole | 9-pole | 12-pole | 15-pole stator
2 | ABC | AcBaCb | AccBaaCbb | AccBBaaCCbbA | AAccBBBaaCCCbbA
4 | ACB | ABCABC | AcaCbcBab | AcBaCbAcBaCb | AcBBaCbAAcBaCCb
6 | - | - | ABCABCABC | - | -
8 | ABC | ACBACB | AabBbcCca | ABCABCABCABC | ABabABCbcBCAcaC
10 | ACB | AbCaBc | AacCcbBba | AabBCcaABbcC | ABCABCABCABCABC
12 | - | - | ACBACBACB | - | -
14 | ABC | AcBaCb | AbaBcbCac | ACcbBAacCBba | AaABbBbBCcCcCAa
16 | ACB | ABCABC | AbbCaaBcc | ACBACBACBACB | AaACcCcCBbBbBAa
20 | ABC | ACBACB | AccBaaCbb | AbCaBcAbCaBc | ACBACBACBACBACB
30 | - | - | ACBACBACB | - | -
<S> Again, in the table below the first column is the rotor pole number (permanent magnet), the second column is the phase assignment for an 18-pole stator, and the third column is for 36-pole.
Rotor poles | 18-pole stator | 36-pole stator
2 | AAcccBBBaaaCCCbbbA | AAAccccccBBBBBBaaaaaaCCCCCCbbbbbbAAA
4 | AccBaaCbbAccBaaCbb | AAcccBBBaaaCCCbbbAAAcccBBBaaaCCCbbbA
6 | AcBaCbAcBaCbAcBaCb | AccBBaaCCbbAAccBBaaCCbbAAccBBaaCCbbA
8 | AcaCbcBabAcaCbcBab | AccBaaCbbAccBaaCbbAccBaaCbbAccBaaCbb
10 | ABabcBCAcabABCbcaC | AcBaaCbAcBBaCbAccBaCbAAcBaCbbAcBaCCb
12 | ABCABCABCABCABCABC | AcBaCbAcBaCbAcBaCbAcBaCbAcBaCbAcBaCb
14 | ABbcaABCcabBCAabcC | AcBCbAcBabAcBaCAcBaCbcBaCbABaCbAcaCb
16 | AabBbcCcaAabBbcCca | AcaCbcBabAcaCbcBabAcaCbcBabAcaCbcBab
20 | AacCcbBbaAacCcbBba | ABabcBCAcabABCbcaCABabcBCAcabABCbcaC
30 | AbCaBcAbCaBcAbCaBc | AabBCcaABbcCAabBCcaABbcCAabBCcaABbcC
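The questioner's own observation (commutation cycles per mechanical revolution) gives the pole-pair count directly. A sketch of the bookkeeping, assuming a standard six-step three-phase drive:

```python
# A BLDC advances by one pole pair per electrical commutation cycle, so
# counting cycles per mechanical revolution gives the pole-pair count.
def pole_pairs_from_cycles(cycles_per_mech_rev):
    """Electrical cycles per mechanical revolution = pole pairs."""
    return cycles_per_mech_rev

def commutation_steps_per_rev(pole_pairs, steps_per_cycle=6):
    """Six-step drive: 6 commutation states per electrical cycle."""
    return pole_pairs * steps_per_cycle

# The 4-pole-pair motor from the question needs 24 commutation steps
# per mechanical revolution:
steps = commutation_steps_per_rev(4)
```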
| To determine the number of poles in the rotor, count the number of poles by carefully moving a magnet around the inside circumference and noting attractions and repulsions.
|
Why phase noise is critical in communication systems I see expensive OCXO and TCXO oscillators on the market that are used in Milcom and Satcom transceivers. I'm wondering in what applications phase noise better than -150 dBc/Hz at 10 kHz offset is critical, and how such high performance can help the communication system vs cheaper oscillators with -120 dBc/Hz at 10 kHz. Note: I'm not asking for the effect of phase noise on communication systems. I'm asking in what applications phase noise is so important that a TCXO or OCXO oscillator with such high performance should be used. <Q> Phase noise can be detected by a phase detector. <S> This means that in an FM receiver there will be more audible noise when the local osc has phase noise, because phase noise is demodulated in the FM detector. <S> Digital systems that rely on phase information will experience higher error rates. <S> AM systems that use envelope detection are relatively insensitive to phase noise. <S> If a transmitter has phase noise its spectrum can extend out more than intended, spilling into adjacent channels. <S> Phase noise can be thought of as short-term drift. <S> Old-school LC oscillators with valves had terrible long-term drift but good phase noise performance. <S> Synthesizers have basically non-existent long-term drift if the reference osc is good. <S> However phase noise can be very bad. <S> If a VCO gets noise on its control voltage pin there will be lots of phase noise. <A> For ADCs/DACs, it is quite visual. <S> Let's sample a signal (image from wikipedia): <S> The point at t=1 is on a high slew-rate part of the waveform. <S> Phase noise on your clock is a frequency-domain concept, which corresponds to jitter in the time domain. <S> Jitter adds time noise to the sampling instant. <S> Thus, here, our signal at t=1 has a voltage v and a slew rate dv/dt.
<S> With n the amount of time-domain noise (jitter), the sampling instant is now t = 1 + n, and the value acquired is now v + n·dv/dt. <S> In other words, sampling jitter introduces noise that is proportional to the product of jitter and slew rate. <S> For fast ADCs with enough bits, the manufacturer will usually explain in the datasheet that the specs will only be met if the clock has less than a specific jitter. <S> divB posted this graph in the comments, it's quite explicit: <S> This is compounded by the fact that you can only get low phase noise crystal oscillators at "low" (by today's standards) frequencies. <S> If you need 1 GHz some PLL multiplication will be required, and as Tony Stewart mentions, this degrades phase noise. <S> An intuitive explanation of this is that the PLL can't remove time-domain jitter in the original clock outside of its filter bandwidth, so this jitter is also present in the output, but it is larger relative to the shorter period of the higher frequency output signal. <S> Expressed in phase noise terms, this gives the equation quoted by Tony. <S> Another one: here's your carrier. <S> Ignore the legend, this is just an image from the web as an illustration. <S> Say you receive a signal, and multiply it with the carrier in order to demodulate it. <S> The resulting spectrum is the convolution of the carrier spectrum and the received signal spectrum. <S> This means the two phase noise peaks at +/-100 kHz from the carrier will grab the noise at these frequencies and fold it back on top of the signal you actually want. <S> This degrades SNR, especially in modulations with multiple close carriers. <A> Phase noise in dB increases by 20·log10(N) when the frequency is multiplied by N from the crystal to the PLL output. <S> For example, deriving a 1 GHz signal from 10 MHz will increase the phase noise by 40 dB.
<S> Even if the 10 MHz oscillator has a very low phase noise floor of -175 dBc/Hz, for example, the lowest possible floor at 1 GHz is -135 dBc/Hz, even before the noise added by the multiplier or PLL is taken into account. <S> A cheaper 10 MHz XO at -125 dBc/Hz @ 10 kHz offset, multiplied with a 40 dB rise to 1 GHz, would be -85 dBc/Hz @ 10 kHz offset in theory. <S> Generally TCXOs have the same phase noise as XOs using AT-cut crystals, except they are temperature-compensated parts, from 20 ppm down to 1 ppm, or 50 ppm down to 2 or 3 ppm, over a wide temperature range. <S> But OCXOs use SC-cut crystals, which have a Q of 100k~1M compared to AT-cut crystals with Q = 10k+, so frequency stability also improves well beyond the 20 ppm of a plain XO. <A> Satellite links for LEO/MEO orbits will create a Doppler shift because of the relative velocity of the receiver and transmitter. <S> Keeping an accurate reference oscillator in terms of ppm frequency offset can help with a frequency error budget. <A> Security radios, used by fire and police, need to cooperate at the scene. <S> This cooperation requires transmitter phase noise to not de-sense another user's receiver; hence the -150 dBc/Hz requirement at 10 kHz offset. <S> If you mean the integrated phase noise in a 10 kHz bandwidth to be -150 dBc, this likely is required due to frequency multiplication from 10 MHz to 20,000 MHz (20 GHz carrier to/from the satellites) with the requirement (as in the first paragraph) to not de-sense the users in adjacent channels.
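Two of the numeric relationships above, as a sketch. The 10 MHz → 1 GHz numbers are the ones quoted in the answers; the 100 MHz / 1 ps jitter example is a made-up illustration.

```python
import math

# The 20*log10(N) multiplication rule from the answers above.
def phase_noise_after_mult(l_in_dbchz, f_in_hz, f_out_hz):
    """Ideal multiplier: L_out = L_in + 20*log10(f_out/f_in)."""
    return l_in_dbchz + 20 * math.log10(f_out_hz / f_in_hz)

floor_1ghz = phase_noise_after_mult(-175, 10e6, 1e9)  # -135 dBc/Hz
cheap_1ghz = phase_noise_after_mult(-125, 10e6, 1e9)  # -85 dBc/Hz

# Jitter-limited SNR for sampling a full-scale sine, from the slew-rate
# argument above: SNR = -20*log10(2*pi*f*tj).
def jitter_snr_db(f_signal_hz, t_jitter_rms_s):
    return -20 * math.log10(2 * math.pi * f_signal_hz * t_jitter_rms_s)

snr = jitter_snr_db(100e6, 1e-12)  # ~64 dB for 100 MHz with 1 ps RMS jitter
```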
| While you're interested in the specifics of phase noise, the other requirement that a OCXO or TXCO could meet is an absolute frequency error requirement.
|
Should I use a solid connection or thermals? I am using a component with an LGA package. Should I use a solid connection or use thermals between the pads and the GND plane? <Q> Rule of thumb: if it'll be soldered, use thermals. <S> Period. <S> Otherwise the heat will be wicked away as I am trying to apply solder (or as it is being reflowed) and you'll get uneven heating, leading to poor solder flow. <S> I do not use thermals on vias, however, because they generally are not soldered. <S> If I plan to use a via as a test point, only then would I give it a thermal connection to the pour, because I may need to solder a jumper to it. <A> If it is an SMT pad, and is meant to go through a commercial reflow oven, don't use thermal relief on the pads. <S> The reflow oven will provide the even heat needed to melt the solder consistently. <S> This will admittedly make manual rework more difficult, but manufacturers I've worked with are confident enough that reflow will solder everything effectively that they don't insist on thermal relief on SMT. <S> Given that, I'd prefer to have the solid connection to ground. <A> Thermal pads for copper, to prevent misalignment-bridging DFM issues. <S> Segmented or crosshatch pattern for the solder stencil. <S> Thermal barrel plating at least 25 µm - will need more plating thickness (ask supplier). <S> More is better. <S> Example: optimum final diameter 350 µm; ideal distance from hole to hole (pitch) is 800 µm.
| In my designs I always give my component pads thermals when connecting to a copper pour.
|
Is it feasible to use DC rather than AC in transmission lines? I have read that in Mexico there is a project to build a network of distribution lines (over 1000 km), generators and substations using direct current rather than alternating current. These are rated to provide 3000 MW. As far as I understand this would require too many substations to compensate for losses (due to the Joule effect) and the physical size of the lines would be too large. Isn't this highly inefficient and too expensive? Requirements of the project, in Spanish <Q> While I cannot speak for the specific project you are talking about (my Spanish isn't so great), DC transmission lines are most definitely feasible. <S> The List of HVDC projects should be enough to show you that it is definitely feasible under certain conditions, as there are quite a lot of HVDC transmission lines in use. <S> Additional advantages include lower losses in underwater/underground transmission lines (with AC underwater lines, the capacitive losses can be quite high), and even on regular air lines, because HVDC lines do not transmit reactive power. <S> Also, if an AC transmission line is built for a given AC voltage, the insulation must withstand the peak voltage, while the RMS voltage, and therefore power, is ~70% of that. <S> An HVDC system utilizing the same transmission line can operate at the peak rated voltage, achieving ~40% higher power throughput. <S> "The physical size of the lines would be too large" - this seems like you are referring to the issue with low-voltage transmission, which, irrespective of AC or DC, will incur I²R losses. <S> But this is not something inherent to DC transmission (high voltage is used instead); rather, this is inherent to low-voltage transmission.
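The ~40% figure follows directly from the peak-vs-RMS relationship:

```python
import math

# An AC line insulated for peak voltage Vpk only delivers Vpk/sqrt(2) RMS;
# re-using the same insulation at DC = Vpk raises the working voltage
# (and power, at the same current) by a factor of sqrt(2).
dc_gain = math.sqrt(2)                  # ~1.414
extra_power_pct = (dc_gain - 1) * 100   # ~41%, the "~40%" quoted above
```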
<A> Skin effect can cause problems with AC transmission: for an aluminium conductor at 50 Hz the skin depth is about 10 mm, so if the power-carrying conductors are large (as would be expected with 3000 MW) the centre of the conductor will be passing very little current, and naturally the resistance of the cable would be much higher compared to a DC-carrying conductor. <S> To mitigate this, AC cables are designed as multiple conductors with spacings like so: <S> But you also get proximity effect with two conductors spaced a little apart to minimize skin effect problems: <S> Just like skin effect, proximity effect will also cause a reduction in the amount of copper (or aluminium) being used in the electron transportation, so it's a balancing act to design a decent conductor for AC power transmission. <S> DC power transmission doesn't suffer from skin or proximity effect and can use significantly smaller conductors for transmission of current. <S> The downside to DC voltage transmission is you can't use regular AC transformers. <S> Having said that, a 2000 MW cross-channel DC link was used to bridge between the UK and France in 1986, so these problems are no doubt mainly overcome. <S> The down-conversion from 270 kV DC was done by: "The system was built with solid-state semiconductor thyristor valves from the outset. Initially these were air-cooled and used analogue control systems but in 2011 and 2012 respectively, the thyristor valves of Bipole 1 and Bipole 2 were replaced by more modern water-cooled thyristor valves and digital control systems supplied by Alstom." <A> The domino effect with AC grid systems, and dynamic loading with regions running lower and higher in frequency, makes HVDC transmission the solution, not the problem.
<S> HVDC with a flexible HVAC distribution is far more favorable to increase transmission efficiency (albeit with greater insulation and converter cost), better utilization of networks, and balancing capacity with 4 new HVDC feeders in Mexico. <S> The cost of construction, maintenance and ownership has proven that this mix results in the lowest cost of ownership, from what I recall. <S> I recall working in a new HVDC station almost 50 years ago. <S> I was a summer student with a portapac climbing scaffolding 6 stories high to clean the ceilings just after construction of the Dorsey HVDC station in Winnipeg. <S> It was designed by English Electric and is a major source of cheap hydropower. <S> On another personal note, my former colleague spent $85 in fuel driving his Mitsubishi car there. <S> But with the growth of e-cars and overnight charging, capacity must increase dramatically if it is ever going to replace carbon-based fuels.
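The "about 10 mm" skin-depth figure from the earlier answer can be checked with the standard formula δ = √(2ρ/(ωμ)):

```python
import math

# Skin-depth estimate behind the "about 10 mm for aluminium at 50 Hz"
# figure above. Aluminium resistivity ~2.65e-8 ohm*m, mu_r ~ 1.
def skin_depth_m(rho_ohm_m, f_hz, mu_r=1.0):
    """delta = sqrt(2*rho / (omega * mu_r * mu0))."""
    mu0 = 4e-7 * math.pi
    omega = 2 * math.pi * f_hz
    return math.sqrt(2 * rho_ohm_m / (omega * mu_r * mu0))

delta_al_50hz = skin_depth_m(2.65e-8, 50)   # ~11.6 mm, i.e. "about 10 mm"
# At DC the frequency goes to zero, the skin depth goes to infinity,
# and the whole conductor cross-section carries current.
```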
| High-voltage DC (HVDC) has several advantages over AC transmission, one of the more significant ones being that it allows power to flow between AC grids with different frequencies or phase angles.
|
Why does a Macbook (with an aluminum shell) use a 2-prong plug instead of a 3-prong plug? This says that devices with metal shells usually use three-prong plugs: "Three-prong plugs are for appliances that require the ground connection for safety. Most appliances that use a metal chassis require a separate ground connection." I am confused about this. My Macbook Pro uses an aluminum case, so why can it still use a two-prong plug? Will it increase the risk of electric shock? Does anyone have ideas about this? Thanks! <Q> The Macbook has an external power supply that converts AC to DC. <A> The Macbook is a class III appliance, meaning that it is powered by a safety extra-low voltage supply. <S> The low voltage guarantees that coming into contact with energized parts poses no risk in normal circumstances. <S> The actual appliance from a safety perspective isn't the Macbook, but the power brick. <S> The line input must be either double insulated (class II appliance) from all user-accessible parts of the device, including the low voltage output. <S> The idea is to guarantee that any fault cannot make the chassis live, and this is accomplished with sufficient isolation. <S> No safety earth lead is needed. <S> Or single insulated, but with the chassis (all user-accessible conductive parts) earthed (class I appliance). <S> The idea is that any fault is shorted to ground, keeping the chassis of the appliance at a safe potential. <S> A three-prong plug is obviously required, and such a power supply should only be used with grounded outlets. <S> This approach also makes suppressing electromagnetic interference simpler. <A> Power adapters (with AC high-voltage inputs) that are rated Class 2 are safe without grounding, and (of course) a MacBook Pro is battery operated and must operate without a ground connection, or it wouldn't work at all.
<S> A laptop computer has low-voltage (battery) power, and the small amount of HV supplied to its backlights is 'isolated', so a ground connection wouldn't complete a circuit (thus, wouldn't protect). That's OK, because a grounded human also wouldn't complete a circuit, and wouldn't get a shock.
| The power supply provides the necessary safety and isolation, so you don't need earthing of the Macbook case.
|
How to solder connections to this board Below is a picture of a PCB that has a bunch of copper(?) points for connections. What are these connections called, and how would someone solder wires to these points? When trying to solder 30 AWG wires it would seem to work as a one-off, but when there were multiple next to each other it seemed too tight. Also, what does TB stand for (each point seems to be named TBxx)? <Q> Those are test points, in your example gold-plated ones, and are not intended to be used to solder wires onto. <S> Normally they are used with a bed-of-nails fixture using spring-loaded pogo pins to make contact with the circuit using testing equipment. <S> Pogo pins come with a variety of tips for contacting various features on the board. <S> Note the use of the large pointed pins in the above image that go through the mounting holes in the board and act to align the board with the pins. <S> The big fat ones are keyed to the edge of the board to get it aligned close to the smaller ones that go through the holes. <S> The alignment pins are longer than the pogo pins so the board is in the correct position before they make contact. <S> Sometimes double-sided, "suitcase", fixtures are used... <A> While it's true that these would be contacted by pogo pins rather than soldered in production, for a personally owned unit (or indeed, for the original firmware developers') soldering would be quite reasonable and likely. <S> Fine-gauge silicone-insulated wire is best for this, since you avoid the whole complication of potentially melting the insulation while soldering with only a couple of millimeters of stripped length, and you can with a little care put a crimp connector on the other end to connect to your programmer, USB/serial adapter, logic analyzer, or whatever. <S> But you can also use wire-wrap wire with a bit more care. <S> Others prefer magnet-type wire insulated with a solder-through insulation.
<S> While the pads are close together, compared to what they could be they are really not that tight at all - be glad you aren't trying to pick up the signals from the 0402 resistors or the pads of that little QFN chip. <S> Pre-tin the wire (I tend to start overly long and trim to about a millimeter exposed after tinning), possibly pre-tin the pad, and you just need to hold the wire in place while you touch it for a second with the iron. <A> Those are gold-plated test points; you would not normally solder wires to those, but if you really, really, really have to: apply solder to the pad, then apply solder to the end of the wire, then join the two. <A> I had a design, built by someone else, that used extremely small test pads like this. <S> I needed to have the pad constantly hooked up to an oscilloscope, so I needed a permanent wire attached to the pad. <S> I found that it was VERY EASY to rip the pad off and kill a board when soldering a wire to it. <S> What I found is that using 30 gauge magnet wire (yes, you have to tin both sides) you can solder on to the pad, and it isn't so heavy that it rips away at the pad. <S> To prevent ripping off the pad as your wire bends around, if you have it connected to a scope or something else, put a dab of hot glue on top of the wire. <S> The hot glue isolates the wire from bending and flexing, so you aren't pulling at the pad. <S> I learned this the hard way, killing many boards while testing this design. <S> IMO: if you think you need a test point, consider a thru-hole first. <A> SMD pin headers may or may not work - the geometry of a test point is different from that of an SMD pin header solder pad. <S> A (regular) through-hole pin header might work in a pinch, but will have a less mechanically stable solder connection to the PCB. <S> In any case, test points are meant for temporary connections, not for permanent connections with useful life expectancy.
<S> It might help to additionally glue the wires to the board for added stability. <S> Note that test points might not only be designed to be used for "live tests" of the operating circuit, but also be meant for "off-line" tests of traces or single components. <S> Adding a wire to such "off-line" test points may cause the circuit to malfunction (because of the added capacitance).
| For long rows of evenly spaced test points, you might try to solder pin headers to them - as long as the test points are laid out in one of the standard grids (1/10", 1/5", 3/20", IIRC).
|
How to drive a cd4051 analog mux with buttons as channel selector? I'd like to control an analog multiplexer cd4051 as an 8-1 switch, with buttons that will act as channel selectors (1-8). I know how to do it with Arduino, but is there any other way to read the state of 8 buttons and provide a 3-bit word as separate digital signals? <Q> Use a priority encoder like a 74xx148 <S> Or if you prefer CMOS, a CD4532 <S> Only one switch should be active at a time, so this worked best with ganged switches, either push buttons like the one below or a multi-position rotary one. <S> But then again, one has to wonder, <S> if you have all those switches, why do you need the CD4051? <A> This will let you manually select the channel, but you will need to do the selection using a combination of the 3 switches rather than as one button per channel. <S> Harder solution <S> - Add an 8 to 3 priority encoder such as the SN74HC148 . <S> If you use 8 SPDT switches as the inputs, as long as you only turn on one at a time, it should do what you want. <S> However, it's a little more cumbersome than just pushing a button, since you have to turn on the switch you want and turn off the one you don't. <S> Hardest solution <S> - Add a flip flop to each input. <S> This memory cell lets you use an instantaneous button press to "set" one of the 8 flip flops. <S> You could use a 74HC574 . <S> If the 8 inputs are normally low, and pressing the button brings the input high, all you need is a "clock" signal which latches the 8 button states in. <S> So, say you use an 8-way OR gate across the 8 buttons to generate the clock. <S> Then, you get a rising clock edge every time any button is pressed, which latches in the new button state. <S> If your OR gate propagation delay is longer than the setup time for the flip flop, the timing should work out OK. <S> You may run into switch bounce issues and need an analog debounce circuit on each button, but I think it will kinda inherently debounce the signal. 
<A> If you want to do it with momentary switches you could do it with a couple of IC packages (flip-flops or cross-coupled gates) to have a 3-bit S-R latch, and then drive the latches on and off with diodes or gates such that pressing any switch will force the 3-bit output to the desired state. <S> You would need 6 resistors total and 3 diodes per switch, for a total of 24 diodes, or you could use 6 8-input NAND gates cross-coupled and 8 resistors (and no diodes). <S> Eg. <S> (about 1/3 of circuit partially shown) simulate this circuit – Schematic created using CircuitLab <A> I don't know how many i/o lines you have to work with from your micro. <S> But I've done this before using a remote control keyboard. <S> The switches are wired in a matrix. <S> If you can't program pull-ups like you can with the MSP430, you would have to add those. <S> The beauty of this is twofold. <S> You don't need an external device. <S> And you don't have to scan if a key is not pressed. <S> Keep all your outputs low and set interrupts on your inputs. <S> Let a low trigger on the inputs start a scan. <S> First switch detected closed, wins. <S> Then you can stop scanning and let a high-going triggered interrupt let you know when the switch is released.
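The priority-encoder idea above can be sanity-checked in software. This is a behavioural sketch only: it models "highest asserted input wins" with active-high inputs, whereas a real 74HC148/CD4532 has active-low pins and extra GS/EO outputs.

```python
def priority_encode(inputs):
    """Return the 3-bit code of the highest asserted input, or None if idle.

    inputs: sequence of 8 booleans, index 0..7 (index 7 has highest priority).
    """
    for i in range(7, -1, -1):
        if inputs[i]:
            return i  # 0..7 fits in 3 bits (the CD4051 A, B, C select lines)
    return None

# Pressing only button 5 should drive the select lines to binary 101.
buttons = [False] * 8
buttons[5] = True
code = priority_encode(buttons)
print(code, format(code, '03b'))  # 5 101
```

With two buttons held at once, the higher-numbered one wins, which mirrors the hardware priority behaviour.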
| Easiest solution - use a SPDT switch on each of the select inputs (A,B,C).
|
Why current appears at reverse bias? I am simulating time-domain graphs of a Schottky diode model. I am expecting ~zero current in the negative half cycle, but I am seeing negative current. It's only about 3mA, but based on the I-V curve it should be about 0. Does anyone have an idea what might be going wrong? Diode model (from datasheet): Is=30e-15 Rs=8 N=1.05 Tg=1e-9 Cjo=0.1e-12 Vj=1 M=0.5 Fc=0.95 BV=5 IBV=1e-5 Eg=0.69 <Q> At that frequency you have this... <S> simulate this circuit – <S> Schematic created using CircuitLab <A> You will have some reverse leakage current for all diodes, for instance a 1N4001 is specced at \$5\mu A\$ at \$25^\circ\$C. <S> But that's not what's happening here. <S> The more likely culprit is the diode's turn-off time. <S> Your diode appears to be a silicon diode from the 0.7V knee, and silicon diodes have a non-negligible switching time, from some \$ns\$ to several \$\mu s\$ or more ( more info ). <S> That's exactly what your circuit looks like, as it seems the diode isn't turning off at all when you push it negative, and at 14 or so GHz (where you're at), this becomes VERY significant. <S> For simulation, you'll want to alter your diode model, and if you actually intend to build this, you'll need to pay very careful attention to the type of diode you select. <A> The diode negative constant-current effect is shown here with a mainly negative-biased sawtooth with autoscale: <S> 171.44 nA leakage, <S> but always reported as worst-case leakage at max PIV
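To see why the observed -3 mA cannot be DC leakage, plug the model's saturation current into the Shockley equation. A sketch only: it reads the question's Is as 30 fA (the "30e15" presumably has a dropped minus sign) and assumes room-temperature thermal voltage.

```python
import math

IS = 30e-15   # saturation current read from the model (30 fA, assumed)
N = 1.05      # emission coefficient from the model
VT = 0.02585  # thermal voltage at ~300 K

def diode_current(v):
    # Shockley equation: I = Is * (exp(V / (n*Vt)) - 1)
    return IS * (math.exp(v / (N * VT)) - 1)

# Deep in reverse bias the DC current saturates at about -Is:
i_rev = diode_current(-1.0)
print(i_rev)  # ≈ -3e-14 A, about eleven orders of magnitude below 3 mA
```

So the simulated reverse current must come from dynamic effects (junction capacitance, reverse recovery), exactly as the answers argue.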
| The phase shift is obviously diode junction capacitance just as any RC filter.
|
What exactly is prepreg and core in a PCB? I am trying to wrap my head around the structure of a multilayer PCB, and while I can understand many things, I am not able to grasp the concept of "prepreg" and "core". What do they exactly do? I have attached a reference stackup below. The only thing I understand about them is that they are used to glue the layers together. But why both, why not only "prepreg" or "core"? How do they differ from each other? Could you please demystify these things for me? Any good reference for understanding this, and how the layer stackup is determined, is also appreciated. <Q> The important difference is this. <S> Core is a layer of FR4 with copper either side, that's made in a core factory. <S> The layer of FR4 is formed between two smooth foils of copper, to a specified thickness. <S> This means the thickness of the prepreg varies with the height of the etched boards either side of it. <S> For applications where the dielectric's physical properties are important (as in high-frequency transmission lines and antennae) you get much better repeatability with signal and ground either side of a core, than if the fields go across pre-preg. <S> Choosing which layers are made in which way can affect the processing steps and so costs if you are building a board with buried vias. <S> It's easy to drill holes through cores to get buried vias, but this restricts which layers can connect to which. <A> A core is a thick, more rigid layer of glass fiber while a prepreg is a thin layer of glass fiber/copper laminated onto a core. <S> In the past, there only ever was one thick core, so the distinction made a lot more sense than today, when they are roughly the same thickness. <S> There's still a difference in how vias are handled, <S> but you had better refer to the complete stackup in question instead of making assumptions about how ''core'' and ''prepreg'' vias differ. 
<A> From this link : Prepreg, which is an abbreviation for pre impregnated, is a fibre weave impregnated with a resin bonding agent. <S> It is used to stick the core layers together. <S> The core layers being FR4 with copper traces. <S> The layer stack is pressed together at temperature to the required board finish thickness. <S> Prepreg comes in different thicknesses.
| Pre-preg is a layer of uncured FR4, that's used by PCB manufacturers to glue together etched cores, or a copper foil to an etched core.
|
Is 52C too hot for this LED? I got three UFO / saucer LEDs from China. According to the seller on ebay they are produced by Ranpo, but the LEDs are as unbranded as it gets. They are also, according to the seller, 30W - but the LED itself is silent on that as well. Anyway, the LEDs get somewhat hot. I measured 52C on the top "golden" area. Is that too hot, or is it fine for this wattage? <Q> Reliability is ~doubled for every 10'C reduction above room temp, <S> unless there is a process/design flaw. <S> It's watts on the junction and junction temp that count. <S> Tj = Ta + Pd x θja ['C/W], summed over the thermal resistance interfaces between the internal chip junction and the ambient hot surface. <S> Lumen ratings of chips are often done at 85'C junction, <S> but it is up to the luminaire design to add the cost of making a low thermal resistance to the ambient surface. <S> The junction could be a few 'C above the outside surface temp unless attention is paid to details with copper, extreme coplanarity and grease under pressure. <A> Most modern COB LEDs can operate up to 80C but anything cooler will improve the life expectancy of your LED. <S> In your case the LED module is probably mounted in a way that it can dissipate its heat to the metal casing, so if the metal casing is 52C <S> the LED itself will be slightly warmer but most probably still within its operating range. <S> But cooling those with for example a fan might cost more (buying the fan and running it) compared to replacing the LED a little earlier. <S> Chances are that the fan will break down before the LED. <S> PS: found that these use 5730 SMD LEDs which also have an operating temperature up to 80C <A> Usually an LED's temperature is related to its power rating. <S> At 100% of full power a bank of LEDs can get very hot, so 52C is not unusual. <S> However you can expect them to fail after 3 to 5 years of continuous use. <S> Drop the power to 80%, and the temperature drops and the life expectancy goes up a lot. 
<S> At 50% of full power an LED should run for 50 years continuously. <S> At 10% of full power they can run for hundreds of years. <S> Do your best to keep them cool if you want a long life span. <S> Consider using a buck transformer to drop the voltage 10% to 20%. <S> They will still be very bright but last much longer, assuming ambient temperature is around 25C.
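The "reliability roughly doubles per 10 'C reduction" rule of thumb from the first answer can be put into numbers. A sketch only: the factor of 2 per 10 'C and the 80 C reference are rule-of-thumb assumptions, not measured data for this LED.

```python
def life_multiplier(t_hot_c, t_cool_c):
    # Rule of thumb: expected life doubles for every 10 deg C reduction.
    return 2 ** ((t_hot_c - t_cool_c) / 10.0)

# Running the case at 52 C instead of the 80 C operating limit:
print(life_multiplier(80, 52))  # ≈ 6.96x longer expected life
```

By this estimate the 52 C reading is comfortably on the right side of the 80 C limit the answers quote.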
| If those LEDs were very expensive it might be worth considering cooling a little for a longer life.
|
How do I make my electrical enclosure waterproof, and what type of connectors do I use? We are designing this meter which will monitor household consumption. The problem that I have been facing for days and weeks now is how to make the meter waterproof. I have tried everything but could not find an exact solution. I looked for ready-made enclosure boxes that we could use, but they are a bit expensive as we need a bigger enclosure (>240mm), and I don't know how the whole thing can be air tight when we pass electrical wire through it. We are also exploring the molding option but I don't know exactly what I need. Do I place conducting strips in the mould already, which will be my connection points from inside and outside? Or do I drill holes in it and later cover them with some special connectors? Any help is appreciated. I came across these types of enclosures (highlighting the joint); should I be using something similar? How do they provide waterproofing? What is the recommended technique to create water and dust proof isolation for a meter? Included is my meter design where I need help regarding waterproofing. <Q> Nothing is waterproof, not even submarines (sometimes), unless you go for a fully hermetically sealed design, and this doesn't sound like an option to me, <S> so you have to design to meet a certain IP rating (where IP stands for "ingress protection"). <S> This is used in the EU but NEMA (see below) is used in North America: <S> - Then you have to choose appropriate cable glands for incoming cables. <S> Water will get in no matter how hard you try to stop it, and <S> humidity/temperature changes are quite challenging. <S> In the end a lot of designs use a drain plug to allow water that has entered by osmosis or capillary action to drain away harmlessly. <S> If you are in the US this comparison chart may be useful: - <A> You probably need an enclosure rated IP65, which is the normal outdoor standard. 
<S> This is sufficient except where there is persistent driving rain, such as on an exposed hilltop. <S> Cable entry is through waterproof glands, such as the one shown. <A> You mentioned connectors in your question. <S> If literal connectors, whatever you choose must have the same or higher IP rating and be installed per connector manufacturer's instructions. <S> There are many from which to choose; Turck has a series of connectors that work for my applications. <S> Whether you use cable glands or connectors, try to locate all entries on the bottom surface of the enclosure to prevent moisture ingress. <S> Never place entries on the top surface for wet installations.
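The IP code mentioned above decodes into two independent digits, one for solids and one for liquids. A minimal lookup sketch, with descriptions abridged and paraphrased from IEC 60529 (only a few codes included here):

```python
# First digit: protection against solids; second digit: against liquids.
SOLIDS = {5: "dust protected", 6: "dust tight"}
LIQUIDS = {4: "splashing water", 5: "water jets",
           6: "powerful water jets", 7: "temporary immersion"}

def decode_ip(code):
    """Decode a two-character IP code like '65' into its two protections."""
    solid, liquid = int(code[0]), int(code[1])
    return SOLIDS.get(solid, "?"), LIQUIDS.get(liquid, "?")

print(decode_ip("65"))  # the usual outdoor rating: dust tight + water jets
```

The point of the table is that the two digits are independent: IP65 says nothing about immersion, which is why the answers still recommend drain plugs and bottom-entry glands.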
| If they have to be on the side, then a drip loop in the cable will help prevent water from following the cable into the gland/connector. They have to have the appropriate ratings too. You also have to consider the temperature range that the box may be subject to and analyse how much humidity can be present.
|
Is it electronegativity or a difference in charge which produces current? In a battery, are electrons generated at the negative electrode due to chemical reactions, and do they flow to the positive electrode due to a difference in electronegativity of the electrodes? Is an electric field generated as a result? I always thought it was due to a difference in charge between terminals which created an electric field from positive to negative; is this wrong? Thanks <Q> The more apt chemistry term to take into account is standard reduction potential, <S> not electronegativity. <S> It's a measure of the ability of an atom to get reduced, i.e., gain an electron. <S> The electrode with the higher reduction potential is taken as the cathode, where reduction takes place. <S> And the electrode with the lower reduction potential is taken as the anode, where oxidation takes place. <S> In a battery, both reduction and oxidation reactions take place simultaneously to produce current through the external circuit (known as half reactions, and together known as a redox reaction). <S> So at the anode electrons are generated, and at the cathode these electrons are gained. <S> The difference between the reduction potentials of cathode and anode makes up the cell potential or voltage of the battery. <S> This is responsible for the electric field from cathode/positive to anode/negative of the battery. <A> If you change "electronegativity" into "standard potential", both of those explanations are true - the first one is from a chemical standpoint, the second from an electrical one. <S> Due to this chemical process, there are more electrons in the negative electrode than in the positive one. <S> This of course creates an electric field between the two battery terminals, resulting in a voltage between them, which can be used by an attached load. <S> Due to the electric field, current flows through the load, transporting electrons from the negative electrode to the positive one. 
<S> The electrochemical processes in the battery will then replace the missing electrons in the negative electrode and pull out the extra ones from the positive electrode, thereby keeping the flow of electrons through the load going. <A> Do they flow to the positive electrode due to a difference in electronegativity of the electrodes? <S> Electronegativity from here : - Electronegativity is a measure of an atom's ability to attract the shared electrons of a covalent bond to itself. <S> If atoms bonded together have the same electronegativity, the shared electrons will be equally shared. <S> If the electrons of a bond are more attracted to one of the atoms (because it is more electronegative), the electrons will be unequally shared. <S> If the difference in electronegativity is large enough, the electrons will not be shared at all; the more electronegative atom will "take" them resulting in two ions and an ionic bond. <S> As far as I can tell this has little to do with free electron movement as seen in the flow of current. <S> Electronegativity is the force that attracts atoms to form covalent bonds.
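The cell-voltage arithmetic from the first answer, using the textbook Daniell cell as a concrete example (standard reduction potentials: Cu2+/Cu = +0.34 V, Zn2+/Zn = -0.76 V; the cell itself is not from the question, just a worked illustration):

```python
def cell_voltage(e_cathode, e_anode):
    # E_cell = E_red(cathode) - E_red(anode)
    return e_cathode - e_anode

# Daniell cell: copper cathode (+0.34 V), zinc anode (-0.76 V).
print(cell_voltage(0.34, -0.76))  # ≈ 1.10 V
```

The subtraction is the whole point: neither electrode's absolute potential matters to the load, only the difference, which matches the "voltage difference" framing in the second answer.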
| The difference in the electrodes' standard potential forces electrons into the negative pole of the battery and pulls them out of the positive one.
|
Should I step-up or step-down the voltage if I have the option? I'm building a series of gadgets for my kids that mostly (microchip, sensors) run off 3.3 but there are some switches and possible servos that run off 12v. My original idea was just to have two wires, one for 12v and one for 3.3v but I don't like that since most of the items won't need the 12v and it adds clutter in the wiring. The other option is to only have one wire and either step down or up the voltage in the components that need it. The heat from stepping down 12v to 3 worries me and since most will be running of 3v it seems more logical and economical to use step ups when I need the 12v. Is this the best approach or is there another approach that I'm overlooking? Most of the 12v components are buttons with built in leds so power consumption is pretty low, I guess they are mostly used in auto applications so most of the leds require 9-12v to light up well. <Q> This is a no-brainer. <S> Bus around 12 V and step that down to lower voltages locally as needed. <S> The heat shouldn't be a problem. <S> Buck switchers in that voltage range should be over 90% efficient. <S> Even figuring 85% to be pessimistic, there will be very little heat to worry about. <S> Let's say your control circuitry is drawing 100 mA at 3.3 V. <S> That's quite a lot for a modern microcontroller and a little surrounding circuitry. <S> 100 <S> mA times <S> 3.3 V is 330 mW. <S> With a 85% efficient buck regulator, it will draw 388 mW from the 12 V supply, and dissipate 58 mW as heat. <S> That's so little that you'll barely notice it being warm when you put your finger on it. <S> It seems your high power devices run on 12 V. Due to the higher power, this is where you don't want something between the power source and the device drawing power. <S> Put another way, you get to pick one supply that can be used with 100% efficiency. <S> It should be the one that needs to provide the most power. 
<S> It is also useful to bus around a higher rather than lower voltage. <S> At the same power, the current will be lower, meaning you can use smaller wire for the same loss. <S> Not only will the voltage drop at 12 V be lower, but it will be easier to tolerate. <S> The 12 V can vary a little if you're just running a motor with it, but some digital ICs require a more tightly regulated supply voltage. <S> Here is a schematic snippet from one of the many projects where I make 3.3 V locally from a higher supply voltage that is bussed around: <S> I use this basic building block quite a lot. <S> In this case the input was 24 V, but it would work just fine from 12 V and even lower too. <S> That should allow combining them into a single cap, like 22 µF <S> and 20 V. To make different output voltages, all you need to do is change R8 and L2. <S> For example, to make 5 V, use 22 µH for L2 and 52.3 kΩ for R8. <S> I have used this basic circuit in quite a few projects. <A> Sounds simple but a standard step-up (boost) or step-down (buck) <S> don't normally work adequately when Vin = Vout. <S> So, my aims would be: - Input supply voltage transparency - if 3 volts is needed then make a seamless solution that will produce 3 volts if the supply varied from 3 volts to 12 volts. <S> If 12 volts is needed for stuff then it's best not to put a regulator in this line (unless of course it's needed). <S> If batteries are used as the power source make sure you have a solution that can run down to lower than 3 volts at the input. <S> I'd use a buck-boost chip for the 3.3 volts like this one: - The picture above shows a 5 volt output but, by altering the ratio of the resistors attached to the FB (feedback) <S> pin, you can get 3 volts or 3.3 volts. <S> The benefit here is that even if you fed it 3 volts it would output 3 volts so you don't need to worry about switching in or out your buck regulator when you are powering it at 3 volts or 3.3 volts. 
<S> Like Olin says, the higher power side will need 12 volts, so regulators in this feed are likely to be less efficient than just providing a straight 12 volts. <S> The other benefit from the LTC3129 is that if you are using batteries, it will still work down to 2.42 volts. <S> The only limitation is that devices fed from this are limited to a maximum current of 200 mA. <S> If this is a problem there are higher output current solutions. <A> You should also look at the variety of regulator cards that pololu.com offers. <S> They have step-up, step-down, and a combo of the two that steps down while your source is higher than the desired output and then steps up as your source decreases below the desired output. <S> Those switching regulators are great, much better than simple linear regulators.
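Olin's 85 % efficiency arithmetic, written out. The 100 mA load at 3.3 V and the pessimistic 85 % figure are the worked-example assumptions from that answer, not measured values:

```python
def buck_losses(v_out, i_out, efficiency):
    p_out = v_out * i_out          # power delivered to the load
    p_in = p_out / efficiency      # power drawn from the 12 V bus
    return p_in, p_in - p_out      # (input power, heat dissipated)

p_in, p_heat = buck_losses(3.3, 0.100, 0.85)
print(round(p_in * 1000), round(p_heat * 1000))  # 388 mW in, 58 mW heat
```

58 mW of heat in the regulator is why the answer says you will barely notice it being warm.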
| With 12 V nominal input, C11 and C12 can be lower voltage. The other option is to only have one wire and either step down or up the voltage in the components that need it.
|
Why are my hex inverter chips (all three of them) not producing a HIGH output with a low input? I won't say I am great at reading datasheets, but I noticed that for the 74LS06 chip, Ioh is 0.25 mA and Iol is 40mA. Does that mean it can light up an LED when its output is low but not when its output is HIGH (a forward biased LED of course)? If yes, does it mean that all those chips whose Ioh or Iol is less than required to light up an LED can't light the LEDs even when they are forward biased? Please help. It means a lot. Also, I have tested with a multimeter: the output voltage of the three 74LS06 chips with a low input is 0.04 V. <Q> If you look at the data sheet for the 74LS06 , you will see the following. <S> Note <S> it says the outputs are open-collector. <S> That means it can sink current but can not source it. <S> (Well, it can actually source a little current as leakage, or more if you pull the output below the bottom rail.) <S> Looking at the internals confirms this. <S> If you compare that to a push-pull output like the 74xx04 <S> .. <S> You can see that the output has two transistors. <S> The bottom one is similar to the output of the open collector device and is turned on when the output is low and pulls the output down, sinking current. <S> The top transistor is turned on when the output is high; it sources current and pushes the output high. <S> Hence the name push-pull output. <S> You can use the open-collector device to pull current down from Vcc through an LED if you so desire, but do not exceed the maximum output low sink current. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> As <S> Gee Bee mentioned, you can also do it, without the inversion, by shunting out the LED, as shown below. <S> However, this method wastes power since current will always be flowing through the resistor. <S> It actually uses more power with the LED off than when it is on. <S> So I do not recommend this method. 
<S> simulate this circuit <S> Do not try the shunt method with a push-pull inverter though. <A> You've picked a part with open-collector outputs. <S> The IOH output-when-high current is more of a leakage current when the output is disabled. <S> The output high current is normally provided by external circuitry, i.e. a pull-up resistor. <S> Modernise your choice of logic family a little and look at a 74HCT04. <S> That can source or sink 4 mA, which is fine for a low-current LED. <S> Or make an active-low driver and make the most of the high sink current capability of your 74LS04 or 74LS06. <S> Connect your LED-and-resistor across your supply rail and the gate's output. <A> @TonyM is right. <S> But, if for some reason you are stuck with using this part with an OC output, you can connect a resistor (e.g. 470 ohm) between 5V and the output, and the LED between the output and GND. <S> When the output is "unconnected" (aka high), the LED can light. <S> When the output is low, it will short out the LED and the LED is off. <S> (Note that during this "off" mode a small current flows through the resistor, which must be less than 40mA to keep within the safe Iol range.) <A> Ioh of an open collector = 0, and sourcing is only possible with a pull-up R, so 0.25mA is a misreading; read again. <S> It is Vol = <S> 0.25V <S> typ <S> @Iol=16mA, which implies a low-side switch with Rce=0.25V/16mA ≈ 16 <S> Ohms, and the high side is 0mA unless there is a pull-up R. So using a Common Anode (CA) 7-seg, <S> current is If=(Vcc-Vf_led)/Rs, <S> e.g. let's say Vf=2.2V @ 10mA <S> and Vcc=4.75V, then Rs=(Vcc-Vf)/If = (4.75-2.2)/10mA = 255 <S> Ohms <S> (choose the closest standard value for 5V) <S> simulate this circuit – <S> Schematic created using CircuitLab
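The series-resistor arithmetic in the last answer, as a sketch using its own worked numbers (Vf = 2.2 V at 10 mA, Vcc = 4.75 V):

```python
def led_series_resistor(vcc, vf, i_f):
    # Rs = (Vcc - Vf) / If : the resistor takes up the voltage the LED doesn't.
    return (vcc - vf) / i_f

# Worked example from the answer:
print(round(led_series_resistor(4.75, 2.2, 0.010)))  # 255 ohms
```

The same helper works for the open-collector pull-up case: the gate's low-side switch sinks the LED current, and the resistor sets it.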
| On an open-collector device, with no pull-up on the outputs, you will not be able to measure much voltage on the outputs when the outputs are high.
|
Capacitor: ground or not? I recently started developing an interest in electronics (doing mods for my car, delayed lights etc...) so I decided to sign up here. I have some questions about capacitors (which are very new to me). My question is: since they have +/- sides, do I need to ground them, or does this work on the hot side only like the image; say the bulb in the image is the car park lights. As far as I know, this should take about 1.5 sec to reach 12V. <Q> The voltage on the positive terminal of a polarized cap must always be greater than the negative terminal. <S> What voltage the negative terminal is at is not significant. <S> What it functionally does here may not be what you intend. <S> As shown, the LED will light when you apply power, then dim out shortly after as the capacitor charges up. <S> Subsequently it may never light again, or not for the very long time it takes for the capacitor to leak its charge. <S> Further, if your LED is actually the car lights, the 1K resistor will not provide enough current to light them. <S> edit: here is a rough diagram of what you get once the capacitor charges up. <S> The voltage on both sides of the LED will be close to the same, so no light. simulate this circuit – <S> Schematic created using CircuitLab <S> For something like you are describing you would need something like this.. <S> (Ignore the two switches.. <S> they are just for testing in the schematic editor) <S> simulate this circuit <S> When you switch on the circuit, C1 charges up through R1. <S> When the voltage reaches the threshold voltage of the N-MOSFET, the latter turns on, which switches on the P-MOSFET, turning on your lights. <S> Delay time is set by the R1, C1 combination and also depends on the gate threshold of the N-MOSFET. <S> You want the latter to be closer to 6V than 1V. <S> Making R1 a 1 Meg pot will allow you to adjust the delay. 
<S> When you open the switch, the capacitor will discharge, initially through the light till M2 turns off, then through the diode D1 and the smaller resistor R3. <S> R2 simply biases M1 off. <S> If you want to build this, you need to use MOSFETs that are designed to handle automobile transient voltages. <S> Here is a simpler version that only uses a P-MOSFET. <S> It works sort of the same way as the previous one but the charge circuit is reversed. <S> The switching edge with a single transistor is much slower though and may cause some flickering in the lights as they turn on. <S> simulate this circuit <A> Try this instead. <S> Note 1: <S> This is an extremely simplistic and poorly controlled design, but it's a quick way of achieving your goal. <S> Note 2: <S> If what you have labelled as an LED is actually an LED, you will need a resistor in series with it or else it will draw too much current and burn out. <S> If it's a lamp for a car, it most likely does NOT require a resistor, which is why I've shown it without one here. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The reason your designed circuit won't work as you want is because once a capacitor is charged, current no longer passes through it. <S> And your lamp needs current to emit light. <S> Here's a trick - to find out what a circuit does after a long time, you can just delete the capacitors from the circuit. <S> In your case, that means the lamp is no longer connected to anything, so of course it will be off. <S> Regarding your original question about capacitors: "Ground" is an arbitrarily selected reference point that means 0V. <S> In general, absolute voltages never mean anything - all that matters is the voltage DIFFERENCE between the two terminals of a device. <S> So for capacitors, if a capacitor is polarized (has a + and - node), then all you need is to make sure that the voltage at the + node is greater than or equal to the voltage at the - node. 
<S> You do NOT have to connect the - node to ground. <A> You have a high-pass pulser. <S> Fast rise and predictable delay. <S> Is that what you need, or are you just playing to learn? <S> This circuit used 1nanoFarad and 160 ohms.
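The R1/C1 delay in the MOSFET answer follows the standard RC charging law. A sketch with hypothetical values: the answer names a 1 Meg pot, but the 1 µF capacitor and 4 V gate threshold below are assumptions chosen only to show the arithmetic.

```python
import math

def rc_delay(r, c, v_supply, v_threshold):
    # Cap charges as v(t) = Vs * (1 - exp(-t/RC)),
    # so the time to reach the gate threshold is t = -RC * ln(1 - Vth/Vs).
    return -r * c * math.log(1 - v_threshold / v_supply)

# 1 Meg pot at full scale, 1 uF (assumed), 12 V supply, 4 V threshold (assumed):
t = rc_delay(1e6, 1e-6, 12.0, 4.0)
print(round(t, 3))  # 0.405 s
```

Scaling C1 up, or turning the pot up, lengthens the delay proportionally, which is why the answer suggests the pot for adjustment.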
| What you have is technically correct. ANY point in a circuit could be declared as the 0V "ground" point without affecting how it works.
|
How to design an LC filter for smoothing out the sine wave from sine PWM? I am making an inverter to convert 12 V DC to AC (~8.5Vrms) @ 50Hz. Later I will do the same for 400V DC to 250Vrms. I am using IRF540N and IRF9540 mosfets as switches, and an IR2110 mosfet driver for gate control. I get sine PWM from an Arduino with a carrier frequency of 15kHz, and the whole project works successfully up to this stage. But now I am confused about how to filter this PWM into a pure sine wave. I know I can use a low pass filter, but how do I choose the values of L and C? And is there more than one topology for the filter? (I read on the internet but I am not sure.) Also, do the values of L and C depend on how much load I have connected? <Q> Numerically it is \$\sqrt{50 \cdot 15000}\$ = 866 <S> Hz. <S> It's not a hard rule but something to get you started. <S> So this constrains L and C somewhat, but loading effects can reduce the Q of the LC circuit, and this is probably best done using a simulator like Micro-Cap 11 or LTspice (both free). <S> Just model the circuit with the highest and lowest loads and see what happens to the shape of the sine wave and its amplitude. <S> I'd consider starting with L = <S> 10 mH and C = <S> 3.3 uF (876 Hz) with a little series resistance with the inductor to avoid the Q factor rising too high at low output currents. <A> First focus on the PWM gate driving. <S> The output current should already match the 50Hz sine closely enough. <S> Your LC filter will never filter at 50Hz to make your output a pure sine. <S> Your output LC filter is there to filter the PWM enough <S> so you achieve the ripple current or voltage that you want. <S> So you need a spec for the ripple. <S> If you do not have a spec, then there are rules of thumb, such as: ripple current is 10% of peak current. <S> I talk about current because you can have ripple current even without a load: <S> your LC filter makes output current (from the FET bridge) possible. <S> From there it's only calculus. <S> The -3 dB freq of the LC filter is at 1/(2.pi.sqrt(L.C)). 
<S> From there it rolls off at 40 dB/decade. <S> Start with an easy-to-find capacitor that can handle the ripple current and calculate the inductor. <S> It might be an iterative process to find a realistic inductor. <S> The actual -3 dB point also does not have to be precise. <S> If you calculated for 10% ripple, it's no big deal if it ends up being 5% or 20%. <S> Other filtering you will most certainly need are snubbers at the FETs. <A> Use an Ott filter to convert your PWM output waveform to a sine wave. <S> The Ott filter input will always be capacitive, which your inverter can tolerate. <S> Other low pass filters allow an inductive input impedance which can damage your inverter.
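The cutoff arithmetic from the first answer as a sketch: place the corner at the geometric mean of the 50 Hz fundamental and the 15 kHz carrier, then solve f = 1/(2π√(LC)) for C given a chosen L.

```python
import math

def geometric_mean_cutoff(f_fund, f_switch):
    # Put the LC corner between the fundamental and the PWM carrier.
    return math.sqrt(f_fund * f_switch)

def cap_for_cutoff(l_henry, f_cutoff):
    # From f = 1 / (2*pi*sqrt(L*C)), solve for C.
    return 1.0 / ((2 * math.pi * f_cutoff) ** 2 * l_henry)

fc = geometric_mean_cutoff(50, 15_000)
print(round(fc))                                  # 866 Hz
print(round(cap_for_cutoff(10e-3, fc) * 1e6, 2))  # ≈ 3.38 uF for L = 10 mH
```

That reproduces the answer's suggested starting point of roughly 10 mH and 3.3 µF; the second answer's point stands that the final values should be driven by a ripple spec and checked in simulation under load.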
| Normally, a good starting point for the LC filter is choosing the cut-off frequency and that can be found by finding the logarithmic half-point between 50 Hz and 15 kHz (in your example).
|
Power to a 230v three phase motor: a. Is it possible to run a 230v three phase motor with a 115v input VFD? Where does the other 115 volts come from in this case? b. How would I size the VFD for this case? I know my VFD will need to be rated higher than the FLA of my motor, but by how much? Thanks <Q> Possible? <S> Yes. <S> It won't make 230V; you'll be limited to one quarter power. <S> You need to know how much mechanical power you need. <A> There are VFDs on the market that accept 120 volts, single-phase as input power and produce 3-phase, 240 volts as maximum output power. <S> They presumably have an internal voltage boost circuit. <S> There is plenty of online information describing voltage boost circuits. <S> If you can get 240 volt, single-phase, note that virtually all VFDs convert the AC input to a fixed DC voltage. <S> Therefore using single phase input for a VFD that is designed for three-phase input is usually a matter of determining how much the rectifier needs to be derated when only two thirds of the rectifier is in use. <S> Also the DC bus filter capacitors need to be evaluated. <S> Many manufacturers publish the rating for single-phase input at the normal input voltage. <S> Some who do not will provide the information upon request. <S> Others have design details that prevent single-phase input. <S> You can make an estimate of the derating factor based on assuming the current per input phase does not need to be reduced. <S> That would make the derating factor 1/ square root of 3 = 0.58. <A> While it is POSSIBLE to use a COTS 230V VFD capable of accepting single phase input and fabricate your OWN voltage doubler system ahead of it from scratch, knowing exactly how to interface that with the VFD you chose and making sure you connect ahead of the pre-charge circuit makes it something I never recommend.
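The single-phase derating estimate from the second answer, written out. The 1/√3 factor is that answer's rough estimate (per-phase rectifier current held constant); the 2.2 kW drive below is just an illustrative size, not from the question.

```python
import math

def single_phase_derating(three_phase_rating_kw):
    # With only two of three rectifier legs carrying current, hold the
    # per-phase input current constant: usable power scales by 1/sqrt(3).
    return three_phase_rating_kw / math.sqrt(3)

print(round(1 / math.sqrt(3), 2))            # 0.58 derating factor
print(round(single_phase_derating(2.2), 2))  # a 2.2 kW VFD ≈ 1.27 kW usable
```

As the answer notes, this is only an estimate; the manufacturer's published single-phase rating, where one exists, takes precedence.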
| Unless you buy one already built to do this, it's ALWAYS simpler and cheaper to just buy a 115-230V transformer and wire it ahead of the VFD.
|
Zero Crossing Detection of ~ 400 kHz Signal with MCU I want to measure the frequency of a signal using the digital input pins of the NUCLEO-F767ZI. The signal is sinusoidal with an amplitude of 5 V and a frequency ranging from 100 kHz to around 400 kHz. 1.) First I thought about simply feeding the pure analog signal to the input pin that is 5 V tolerant. I thought about using a series diode for protection against the negative half cycle and using the internal pull-down resistor of the MCU. Then I could generate an interrupt whenever the sinusoidal signal is high enough for the GPIO to recognize it as logical HIGH. simulate this circuit – Schematic created using CircuitLab 2.) After a bit of research on StackExchange, I also found configurations using opto-isolators: Detecting Zero Crossing of Mains (Sine and Square Wave) The advantage is that it would output a sharp rising edge easily recognizable by the digital input pin, rather than the limited slope steepness of a sine wave. 3.) Since the signal does not have a dangerously high voltage, I could also skip the isolation and use a simple BJT or MOSFET instead. This would also output a sharp rising edge. simulate this circuit Which of the above options would you recommend? And above all: I hope that the parasitic capacitances of the semiconductor devices do not have any effect below 500 kHz, is that right? Or do you have a different and better approach? Best regards and thanks in advance! <Q> All you need to do is limit the frequency and convert the arbitrary wave into a square wave for the Schmitt-triggered timer capture input pin. <S> The most basic approach would be this absolute comparator based on a fixed reference, without any hysteresis. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> You could even use an on-chip analog comparator, if you can connect the input capture event to the comparator signal internally.
<S> There will be caveats with this approach, but you didn't provide many details about the signal source and its (common mode) levels. <A> If you are dealing with an analogue signal and trying to convert it to a suitable square wave for frequency measuring, you have to consider the effects of noise and implement some form of hysteresis so that at the threshold point (where the circuit arbitrates between 0 and 1) there isn't oscillation of the digital output. <S> The above picture is taken from here <S> and it hints at using a schmitt trigger like the one below (I have used this circuit several times) <S> : - It works from 3 volt supplies or 5 volt supplies. <S> The line-in capacitor is to remove any DC component of the input. <S> The capacitor on the inverting input filters the signal so that what appears at that input is Vcc/2. <S> The picture comes from here Turning the output of an opamp into a square wave . <A> You could try a zero crossing detector. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> For most of the time (positive half cycle) <S> Vin >> 0.7V, hence Q1 is in saturation and the output is low (GPIO). <S> As the input signal is "approaching" zero, <S> Q1 turns OFF and the output voltage goes "high". <S> And it stays high until the input signal turns ON Q2 and Q3 (negative half cycle). <S> And this will happen around -0.6 volts. <S> So, for the remaining part of the negative half cycle, the output is low (Q2 and Q3 turn ON). <S> But to be honest I never tried it at such a "high" frequency. <A> TLP184SE shown below. <S> These types of couplers have a "current transfer ratio" that specifies how much of the input LED current is converted into output transistor current. <S> Just use the correct size resistor to set the LED current <S> and you get a full wave rectified output at the other side. <S> No extra power supply is necessary. <S> You decide the output voltage by the pull-up voltage used on the output resistor.
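The amount of hysteresis the Schmitt-trigger approach above buys you can be estimated from the resistor ratio. The 5 V supply and resistor values below are illustrative assumptions, not taken from the linked schematic:

```python
# Trip-point estimate for a comparator with positive feedback (Schmitt
# trigger) referenced to Vcc/2, as in the circuit described above.
# The supply and resistor values are illustrative, not from the link.
vcc = 5.0
ri, rf = 10e3, 100e3   # input resistor, positive-feedback resistor
vref = vcc / 2         # mid-rail reference from the filtered divider

v_trip_high = vref + vref * ri / rf   # threshold while output is low
v_trip_low = vref - vref * ri / rf    # threshold while output is high
hysteresis = v_trip_high - v_trip_low
print(round(hysteresis, 2))  # 0.5 -> 0.5 V of noise immunity
```

A wider hysteresis band (larger ri/rf ratio) gives more noise immunity at the cost of needing a larger input signal to trip the detector.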
<S> The LED coupling also provides some "edgification" of the signal because they do not conduct until around 1.2V, so there is zero output when the input is between -1.2V and +1.2V. <S> You will of course have to divide the frequency that you measured by 2 because you are now counting both halves of the pulse. <S> You should also take a look at <S> the STM General Purpose Timer Cookbook , which shows how to use the internal timer/counter logic to process external signals without CPU resources.
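Since the optocoupler produces two pulses per input cycle, the counted rate has to be halved, as the answer above notes. A minimal sketch:

```python
# The optocoupler output pulses on BOTH half-cycles, so the timer counts
# two edges per input cycle; halve the count rate to get the frequency.
def input_frequency(edge_count, gate_time_s):
    return edge_count / (2 * gate_time_s)

# e.g. 8000 pulses gated over 10 ms correspond to a 400 kHz input:
print(input_frequency(8000, 0.010))  # 400000.0
```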
| You could use a comparator (or any opamp, basically). A very simple and low-cost (< $1) solution that also has the benefit of electrical isolation is to use an AC opto-coupler like the
|
How to run a fridge on an off-grid system So I live off grid and have a 12v solar system with an array of batteries and a 12v 45amp MPPT controller. I also have a 1000w inverter for things such as powering my laptop. I have recently bought a fridge, however I am having some issues powering the device, as when the compressor starts it will surge at around 750w. My MPPT controller does not go this high (and I have not been able to find any that will) and so the compressor will not start correctly and just makes clicking noises while the inverter complains that it does not have enough power. I have tried removing the MPPT from the circuit and just having the batteries go directly to the inverter, which seems to work fine, however it means I will have no undervoltage control over the batteries. Ideally I would like everything to go through the MPPT controller. I am looking into soft start devices such as this one to slow the start-up draw of the compressor, however I am not sure if this will allow the compressor to start correctly. I was wondering if anyone has any ideas or solutions to this problem. Thanks! <Q> So you can connect the battery directly to the inverter. <S> This is how it is normally done in off-grid solar applications and the MPPT charge regulator is only for the DC loads and regulating overvoltage while charging. <S> Otherwise you would need to always oversize the charge regulator. <S> See an off-grid circuit schematic of one of my projects <A> If you try to limit inrush current, the compressor will stall. <S> Nothing short of running it off an (expensive) VFD will help. <S> You will need to get a different fridge. <S> Either get a 12V fridge (which will have a similar startup draw, but your batteries can handle it) or get an inverter compressor refrigerator (pricey, but efficient). <A> The battery should take care of the surge.
<S> This does mean you need to connect the high current loads directly to the battery, and that does mean, as you say, that you lose the undervoltage protection. <S> I would be looking at a high current solid-state relay, controlled from the charge controller's switched output, if you can find one capable of handling the high current load. <S> That way, when the charge controller shuts down its switched supply, the relay disconnects the fridge too.
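For a sense of scale, the 750 W surge translated to the 12 V battery side (assuming an illustrative ~85% inverter efficiency) shows why the battery can supply it while a 45 A charge controller cannot:

```python
# The 750 W compressor surge seen from the 12 V battery side, assuming
# an illustrative ~85% inverter efficiency (assumed figure).
surge_w = 750.0
battery_v = 12.0
inverter_eff = 0.85

battery_amps = surge_w / (battery_v * inverter_eff)
print(round(battery_amps, 1))  # ~73.5 A: fine for a battery bank,
                               # far beyond a 45 A charge controller
```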
| The inverter itself, if it is intended for off-grid applications, should have undervoltage protection. The high startup draw of the fridge is not caused by inrush current or motor startup current; it is caused by the mechanical load on the compressor as it gets the refrigerant moving.
|
What's this triac-based device that controls the lamp in my ceiling fan? I have a Harbor Breeze (Lowes) ceiling fan with a defective light, but with a working fan motor. (A tree fell on my service drop, cutting the neutral wire but not the ground or 240V line wires to my house, and a number of devices failed.) I was getting 90VAC (should be 120VAC) in the sockets with no lights connected, but with 3 8W LEDs installed that drooped to 6VAC, and with one 2.4W LED candelabra bulb I get inconsistent voltage and a ~2 Hz flickering, blinking light. I opened the fan to find this device between line in and the light: It's not the motor speed capacitor, that's in the top part of the fan and seems to work fine. I get 3 speeds with this device removed. It consists of: An ST T4 1060 triac , which has a bunch of vias, and a thermal plane with a gray silicone thermal pad and paste beneath that connects to the aluminum heatsink this was mounted to. A big yellow SR Cap 0.68 uF/280V cap A smaller yellow SR Cap 0.1 uF/280V cap A 470 uF/10V electrolytic A blue Bourns 240V MOV An adjacent large (0.5W?) gray through-hole resistor R11, banded black/black/black/brown, with some overheating discoloration. Measures as an open circuit. A tan 100 ohm resistor, about 0.25W A TSSOP-10 device with the label (8?)202 937PC, function unknown 4 1206 package 100K resistors, 2 SMD diodes, and one glass zener diode (A rectifier/cheap voltage regulator circuit?) 2 SOT23 devices, one labeled 'D4' ~20 0402 resistors and caps around the TSSOP The black line in wire connects to the MOV and thence through R11 to the capacitors, and the red light out wire connects to the output pad of the triac. The white neutral wires on the output are shorted together. What is (was) this module supposed to do? It has no input for the fan state, and I don't see why I'd want my lights to run off a fancy triac setup when there's a pull cord on/off switch downstream of this device. 
The lights are 120V candelabra bulbs, there's a pull-cord switch, and I'd rather have lights and no fan than fan and no lights...so I removed this, connected the line in that used to go to this to the light, and the whole system works. Lights switched by the pull cord, fan goes through 3 speeds with the , lights, reverse switch. Am I just reducing the Energy Star efficiency/power factor of my fan? ...But how, if this is just in parallel to the motor and in series with the light switch? I don't think I need to replace it - everything seems to be working fine - but should I? Where do I get one? What did it do? Edit: Partial schematic: simulate this circuit – Schematic created using CircuitLab It's difficult to trace what's going on with all those tiny SMDs in the U1 cluster under all the conformal coating. Some decoupling caps, some 10k, 2k, 1k resistors, and some kind of logic IC...but not really sure. Really more interested in what purpose it could possibly serve to have some unknown IC modulate a triac between line in and the rotary switch for my lights. Other writing on the PCB, front side: 2008.02.28HT-207-07097A-PC-V06J1 J2 J3 A1 G TR2 D4 D3 D5 ZD1 ZNR1 R11 R22 C11 R24 etc.WHTITE [sic] N N RED LAMP BLACK LOAD Back: CODE QC QCICT PbF ZPMV2RU E252098RH-394V-04309X1 X2 RST 5V GND Also, there are a few areas of traces coated in solder (presumably for higher current capacity) and what looks to be a routed grove filled with white epoxy - or maybe just thick silkscreening - for isolation on the reverse. I also noticed that either they marked the edge under the blue MOV with a sharpie, or the FR4 was blackened (the reverse has thick traces from the black wire to the blue MOV). <Q> These are used to turn on / off at different points in the AC waveform for dimming. <S> The angle in the AC input where the SCR is turned on is called the delay angle. <S> Using this scheme effectively changes the RMS voltage to the load, which effectively changes the power. 
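The RMS-versus-delay-angle relationship described above can be sketched with the standard textbook formula for a leading-edge phase-cut sine:

```python
import math

# RMS of a leading-edge phase-cut sine for firing delay angle "a"
# (radians), standard result: Vrms = Vpk*sqrt((pi - a + sin(2a)/2)/(2*pi)).
def dimmed_rms(v_peak, alpha):
    return v_peak * math.sqrt((math.pi - alpha + math.sin(2 * alpha) / 2)
                              / (2 * math.pi))

v_peak = 120 * math.sqrt(2)                       # 120 V mains peak
print(round(dimmed_rms(v_peak, 0.0), 1))          # 120.0 -> no dimming
print(round(dimmed_rms(v_peak, math.pi / 2), 1))  # 84.9 -> half-cycle delayed
```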
<A> It looks a lot like it's intended to control the lamp. <S> possibly there's a special combined light-switch and fan control that operates over only 2 wires so it can be retrofitted in rooms where there is not enough installed wire to control both the lamp and the fan on separate circuits. <A> It’s the circuit board for a remote-control receiver. <S> Lowes sells a couple models that include that: https://m.lowes.com/pd/Harbor-Breeze-Saratoga-60-in-Oil-Rubbed-bronze-Indoor-Downrod-Mount-Ceiling-Fan-with-Light-Kit-and-Remote/1000107707 <S> Bypassing it won’t hurt anything except your ability to use the remote.
| Looks like it might be a controlled rectifier for the light.
|
Can LM358 output voltage be made equal to supply voltage? I am using one op-amp of an LM358 IC as a non-inverting amplifier (Vcc = 5 V). Its input is from an IR proximity sensor. The IR emitter used is an SFH4350 and the photodiode used is a TEFT4300, also operating at 5 V. I am pulsing my emitter at 15 kHz using the second op-amp of the LM358 IC as an oscillator. Thus I also have to add an HPF (fc = 11.5 kHz) at the input of the amplifier part. I am feeding the output of the amplifier to an ADC which is reading values only from 0 to 3.3 V max. I am aware that the LM358 is not a rail-to-rail op-amp, but I want my output voltage range to reach near 5 V (+/- 0.5 V). Is there some way through which I could achieve the desired output voltage using the LM358 or some other IC of the same price range? Thank you all for your answers. I did a little research of my own and found the LMV358 which I think will do the job just fine. The LMV358 is a Dual Low-Voltage Rail-to-Rail Output Operational Amplifier available at an even lower price than the LM358. <Q> The current-sourcing side of the output has a follower configuration, which limits you to around 1 V or more below the supply when sourcing any significant current. <A> You are looking for an LM393 comparator. <S> When the input offset is positive, the output goes to high impedance. <S> You can then insert a pull-up between the output and VCC. <S> Be careful with the offset between VOL and zero. <S> It is slightly larger than in the LM358 from my experience. <A> This is a common trap with cheap opamps such as the LM358. <S> You won't be able to bring the output voltage all the way up to the positive supply rail. <S> In some cases you may solve this by increasing the supply voltage. <S> For example, if you really need the output voltage up to 10V <S> then you might use the LM358 with a supply rail of 12V. A comparator such as the LM393 has an open-collector transistor output, so with a pull-up resistor it will swing essentially rail-to-rail assuming that the load impedance is high.
<S> (If a comparator is appropriate for your application.) <S> It's not really true that rail-to-rail opamps have to be expensive. <S> (Of course, you do need to consider other factors such as the bandwidth needed, slew rate and other specifications for your application.) <S> As a cheap, basic example, there's the TLV271 <S> (Digikey TLV271CW5-7DICT-ND) which is rail-to-rail and low cost. <A> LMV358 (rail-to-rail opamp) should be adequate for the purpose. <S> Some ADCs I think allow you to scale the range with V ref, if it is possible to do that, you may be able scale vref to map the reduced range of perhaps capped at 0 to perhaps 1.8v (3.3v-1.5v)
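A quick headroom check is also worth doing here. The ~1.5 V figure below is a typical LM358-class output swing limit below the positive rail (an assumed, load-dependent figure; verify in the datasheet). Since the ADC in the question only reads up to 3.3 V, even the non-rail-to-rail output may already cover the conversion range:

```python
# Headroom check. The ~1.5 V figure is a typical LM358-class output
# swing limit below the positive rail (ASSUMED; verify in the datasheet,
# it depends on load current).
vcc = 5.0
headroom = 1.5
voh_max = vcc - headroom   # ~3.5 V realistic high-level output

adc_full_scale = 3.3       # the ADC in the question only reads 0-3.3 V
print(voh_max >= adc_full_scale)  # True: the swing already covers the ADC
```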
| While you can get the output voltage of an LM358 up to +ve rail with a pull-up resistor, you can't get it down close to ground.
|
How to protect against surges if working and maximum voltages are too close I have a circuit that I want to protect against surges and occasional peaks (e.g. indirect lightning surges) and I was thinking about TVS diodes or MOVs. The situation is: - My circuit input is 48V, which is stepped down to 12V with an MP24943 - The 48V comes from a standard converter (that I bought) connected to the AC line. - The maximum voltage accepted by the MP24943 is 60V ( datasheet ) - I want to protect the MP24943 and what is after it from surges, but I couldn't find any TVS diodes or MOVs that meet the requirements (for 48V the clamp voltage is something around 75V). I have already read this , and I couldn't find any of the Bourns ICs mentioned here in Brazil (specific components are very hard or impossible to find), and the most cost-effective solution for me would still be MOVs or TVSs; is there any way I can use them in my circuit? Or to find a replacement for them? I tried Zener diodes but with no success either; the BZD27C series ( datasheet ) for example still has a clamping voltage too high for me. <Q> Otherwise you would do better to add a circuit that shuts off the power to your regulator if the source goes above the range you deem acceptable. <S> The design in this answer to another question may be more to your liking. <S> The circuit there is of course for 5V <S> but if you understand it, you can modify it to whatever voltage you need to limit to. <S> Watch that you do not drop the gate of the MOSFET below its limit though. <A> The clamping voltage is always much higher than the working voltage and you must always select a working voltage slightly above your effective usage voltage. <S> If you want to protect 48V, select a 50V MOV or above. <S> If you want to protect 12V, select a 14 or 15V MOV. <S> (In your case I would protect both voltages, before and after it's stepped down.) <S> Don't worry if the clamping voltage is way higher, it's normal to avoid false shorts.
<S> TVS diodes are for data lines. <S> MOVs are much more robust. <S> Take the highest peak current value. <S> Zeners are much too weak (but can help to some extent if there isn't anything else). <A> Good power supplies need great protection from PLTs, or power line transients, which sometimes exceed the IEC standard test. <S> The energy in the transient can either be shunted on the secondary, which draws more stored energy, or filtered by a series high (L) impedance in the primary, which draws less power. <S> I recall in the mid '80s we needed high-rel supplies, which were custom designed by Brown and Hammond to our specs, which included higher standards for PLT. <S> They chose to implement 2-stage line filters which consist of a higher-inductance CM choke and a differential choke with twin Y caps as in a PI filter. <S> They also included MOVs and <S> one design also had a primary fused gas tube. <S> If this is a commercial design, you either need to have, or rent, borrow or make your own PLT generator. <S> If this is an open-frame 1U-high 48V supply, I would recommend Lambda's, having qualified them in the past <S> (vs PowerOne). <S> Both were about 70 cents per watt.
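The selection rule above (working voltage slightly above the rail, clamping voltage allowed to be much higher) can be sketched as a simple lookup. The list of standard ratings and the 10% margin below are illustrative assumptions:

```python
# Selection sketch following the rule above: pick the smallest standard
# MOV whose continuous working voltage clears the rail with some margin.
# The list of ratings and the 10% margin are illustrative assumptions.
STANDARD_MOV_VDC = [14, 18, 26, 31, 56, 65, 85]

def pick_mov(rail_v, margin=1.1):
    for rating in sorted(STANDARD_MOV_VDC):
        if rating >= rail_v * margin:
            return rating
    raise ValueError("no suitable rating in list")

print(pick_mov(12))  # 14 -> for the 12 V side
print(pick_mov(48))  # 56 -> for the 48 V side
```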
| Well first of all I'd make sure I bought a 48V supply that was guaranteed NOT to go out of some tolerance range that you can accept. If the supply is going to fail out of spec, a simple TVS or Zener will not last long. I suggest you investigate the weak link and add one of the above solutions.
|
Selecting Flyback Diode for Inductive Load I intend to run a 48V DC water pump which uses this motor ( http://www.leeson.com/leeson/searchproduct.do?invoke=viewProductDetails&motorNo=098382.00&productType=0 ) through a solid-state relay like this one ( http://www.crydom.com/en/products/catalog/power-plus-dc-series-100-dc-panel-mount.pdf - DC100D20C) and need help selecting a flyback diode. The pump motor is rated at 1/2 hp, and at its full load/RPMs (1800) can apparently draw 11.0 amps. But according to the pump retailer, with the head height I am dealing with it will probably be at around 170 watts or around 3.5A @ 48V. To build in a little margin for safety let’s say we are dealing with a 10A load, even though I am pretty sure it will never reach that level (even at the maximum head height supported by the pump, the retailer shows only 314 watts / 6.5A). From what I’ve read I understand that the flyback diode needs to be able to handle the exact same current that was flowing through the motor the moment it was switched-off (via the relay) since the inductor will want to continue flowing that same current (even after switched-off) through the flyback diode until that stored energy has been fully dissipated. So I know I need a 10A+ diode. But what about some of the other attributes: Breakdown Voltage: As I understand it, this is the voltage at which the diode will allow current to flow in the reverse direction. I don't think this should ever happen (right?), so the breakdown voltage should be at the very least higher than the expected battery bank voltage range. How much higher? Any sort of back EMF voltage spiking that might occur would be positive with respect to the flyback diode, right (and therefore the breakdown voltage is not applicable in that scenario)? There shouldn't (in theory) be any voltage spikes on the back/blocking-side of the diode. 
Although I guess if there were a voltage spike up-stream from the pump/motor I guess it would be better for that to flow through the diode (backwards) rather than the pump. Working Voltage: Does this need to be at or above the voltage range of the battery bank (i.e. 44V-52V)? Or does it need to be higher so as to accommodate voltage spiking? Or is it that with the flyback diode there is no voltage spiking (i.e. if the voltage is 48V with the pump switched on , then immediately after it is switched off it slowly decays from 48V down to zero via the diode loop)? Maximum Reverse Standoff Voltage - "the voltage below which no significant conduction occurs" ... from another S.E. post: "breakdown voltage is usually 10 % above the reverse standoff voltage" ... so it sounds like this is related to Breakdown Voltage above and as long as it is sufficiently high it shouldn't matter. Clamping Voltage: "the voltage at which the device will conduct its fully rated current" ... again, should this be low-ish? so that the full <10A can be flowed immediately with no restrictions? Or does this need to be 48V to ensure that current will only be cycled back through the motor at that voltage (and not at some other voltage that might damage the motor?)? Thanks in advance for your help! There are a bajillion different diodes out there to choose from, and I'm just looking for a little guidance on how to select the right one in order to prevent voltage spikes from damaging the pump/motor, solid-state relay, and/or other components in the system. Thanks! Update: How about the Vishay Semiconductor VS-T40HF10 ( https://www.mouser.com/ds/2/427/vst40hfseries-50776.pdf )? Rated for an average forward current of 40A, a reverse voltage of 100V, and a surge current of 600A. 
Relatively high forward voltage of 1.3V, and probably way overkill for my needs all around, but this would be installed in a remote/rugged location (outdoors, but protected) and I like that it is screw mountable and has screw terminals. I know I could get something that would work for like $0.30, but I also don't mind spending $20 for a more robust design that will stand up to abuse. Its classification as a "Power Rectifiers Diode" has me questioning its viability, but as long as it behaves as a diode and only allows current to flow in one direction then it should be fine. I'm not using PWM or switching this circuit frequently; probably on and off only once or twice a day. <Q> In your case I'd suggest a 100 V 10 A surge current diode would be more than adequate. <S> A lowly 1N4003 diode would be quite adequate at 140 V reverse voltage and 30 A non-repetitive pulse rating. <S> However something like this Schottky device (even more margin) is only 10-15 cents on Digikey or Mouser. <A> The type of diode you most probably need is a schottky diode. <S> Those are the preferred diode type for flyback diodes. <S> What you really should look for : <S> Reverse voltage > supply voltage (x2 or more to be safe) <S> High forward surge current (you can easily find one with x20 the load current) <S> Forward current near to the current of your load or more <S> I work as a technician for the railways and for our DC traction motors we use flyback diodes that could barely handle the forward motor current but <S> their forward surge current is very high and those diodes rarely break. <A> From the motor datasheet @9.5V <S> the loss is 107 watts which translates to around 0.9 ohms. <S> So you can expect a start surge of around 53 <S> A which is approx 5x the 11A max rating which is normal for a motor this size. <S> Unfortunately, contact bounce and power interruption can happen so the diode must be able to handle much more than 10 A and sustain some current until the pump stops. 
<S> So the inertia of the motor will generate more power into the diode than a lowly 1N400x can handle at uncertain times. <S> For a $2 you are far wiser to choose a diode that can handle 50A for some unknown duration often rated in n*60Hz cycles vs Amps which is probably a 200V PIV rating. <S> I would be surprised if the Crydom SSR does not have a zener embedded in its design to withstand this reverse voltage current as a diode in the forward direction. <S> The diode becomes the brake to the Pump motor when switched off. <S> How many Joules? <S> You might have to calculate. <S> I suggest a 40A Schottky Diode 200V <S> 30 mJ , 121 A square wave, 0.5 mA Reverse leakage. <S> TO-247 <S> ~$5 https://www.digikey.ca/product-detail/en/microsemi-corporation/APT30S20BG/APT30S20BG-ND/1494540 https://www.microsemi.com/document-portal/doc_download/6933-apt30s20bg-apt30s20sg-datasheet
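A rough sketch of the stall surge and the energy the flyback diode may have to absorb, using the figures quoted above; the winding inductance is an assumed illustrative value, not a datasheet number:

```python
# Rough stall-surge and stored-energy estimate using the figures quoted
# above. The winding inductance is an ASSUMED value for illustration;
# the resistance comes from the ~107 W @ 9.5 V figure (~0.9 ohm).
v_supply = 48.0
r_winding = 0.9      # ohms
l_winding = 2e-3     # henries (assumed, not from the datasheet)

i_stall = v_supply / r_winding           # worst-case current into the diode
energy_j = 0.5 * l_winding * i_stall**2  # magnetic energy to be absorbed

print(round(i_stall))      # ~53 A
print(round(energy_j, 2))  # ~2.84 J
```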
| All you need is a diode with a forward pulse current rating equal or greater than the motor draws at full load, and a reverse voltage rating comfortably higher than your DC supply.
|
Why do two different ICs have wildly different high-level output voltages? I have two ICs on a breadboard (74ls374 D-type flip-flop and 74ls04 hex inverter) and I have an Arduino measuring the voltage on the respective output pins of those ICs. The outputs are both at a logic high, but the flip-flop output voltage sits around 3.3V, while the inverter is at 4.5V. There is no load resistor or anything like that. Why might this be happening? EDIT: The reason I am using the Arduino ADC is that I am a skint college student who cannot afford a multimeter at the moment. The Arduino is the best I have <Q> The logic family specification defines min./max. <S> levels for H and L output states. <S> As long as both ICs satisfy the specification (which I assume is the case; i.e. in this case the voltage is above the min. <S> H output level of 2.7V) <S> there is nothing to wonder about. <S> The output voltages don't have to be exactly the same; they just have to be above the minimum level. <S> See e.g. \$V_{OH(MIN)}\$ "minimum guaranteed voltage at an output terminal" here . <S> So after it is clear that they don't have to be the same: here are some reasons for them not being the same: different temperatures (maybe even different elevated temperatures sometime in the past, e.g. by overload); output circuits inside the ICs differ for some reason (e.g. because they have different max. <S> fan out); <S> ICs are made by different manufacturers; ICs are made in different process technologies; ICs are made at different plants; ICs are made at different times; <S> IC dies come from different wafers. <S> The answer is similar to the answer for "Why may \$\beta\$ vary so much for two samples of a BC547C transistor?" <A> You don't give enough information - you must at least check that both are within specification in terms of the +5 V power supply.
<S> Next, if you look into the datasheets of these devices <S> - 74LS374 <S> 74LS04 <S> you will find that the typical Voh voltage is exactly around 3.3 V, which is a valid TTL signal level. <S> Regarding this big difference - look at the 74HCT04 ; depending on the source of your component you may have a remarked HCT part, which outputs CMOS levels. <S> I guess the exact internal circuit may differ from manufacturer to manufacturer, thus the difference in output voltage. <S> It is generally not possible to say, as you do not provide the manufacturer names and pictures of the ICs. <S> It would also be really good if you used a multimeter to confirm your Arduino readings. <S> You actually do NOT need to buy a multimeter to use it once; borrow it from the lab for half an hour. <A> I have two ICs on a breadboard (74ls374 D type flipflop and 74ls04 hex inverter)... <S> outputs are both at a logic high, but the flip flop output voltage sits around 3.3V, while the inverter is at 4.5V. <S> There is no load resistor or anything like that. <S> Is ANYTHING connected to the outputs? <S> The TTL high output level, if there is a TTL input being driven, may rise above the (nominal) level, which is about 3.5V. <S> An SN74LS04 input is a circa 25k ohm pull-up resistor, and a 7404 input is a 1k ohm pull-up resistor plus one diode drop, from +5V. Odd though it may seem, an unloaded output, when high, is at a lower voltage than a loaded output (if the 'load' is a TTL input). <S> The minimum and typical values in <S> the data sheet are for PULLDOWN loads, and do not represent an average value for the common situation of an output pin that drives another logic gate of the 74nn or 74LSnn families. <S> The 74LS logic family is a good example for students, because it has so many strange behaviors. <S> Understanding its input/output foibles is a kind of rite of passage
| IC dies come from different parts of the same wafer ... There's nothing in the output circuit that holds the output pin voltage down when the logic level is HIGH.
|
Transistor heating up problem I have an LM317AHVT power supply with a PNP pass transistor. I noticed that the power transistor is heating up to about 64C after a few minutes. I am using a 130x80x30mm heatsink and a 92mm fan to cool the heatsink. The temperature of the heatsink is about 46C. I mounted the transistor using a mica insulator and thermal paste. I also made another test, using a bigger heatsink (170x80x35mm), and the temperature of the TIP36C transistor was the same (about 64C) even though I used a bigger heatsink. The temperature of the heatsink was about 40C. I also used the same fan to cool the heatsink and the transistor was mounted using a mica insulator and thermal paste. In both cases, the transistor was mounted using an M3 screw. The fan is blowing air like in this image: This is the schematic of my power supply: The input voltage is about 39Vdc (measured on the filter capacitor), the output voltage is about 0.6V (the power supply works in constant current mode, because I want to test the power supply in the worst conditions) and the current through the transistor is 1A. The current through the LM317 is about 0.7A. Is this temperature normal or should it be less? When I used a bigger heatsink, why did the temperature remain the same? Is it OK to use the transistor without the mica insulator and only with thermal paste? Will the transistor be damaged if I use it at 64°C? <Q> It is probably normal for your inefficient linear supply. <S> Temp rise = Rth × (Vin - Vout) × Iout <S> e.g. if the heatsink is 2°C/W and Vin - Vout is (39V - 1V) @ 1A, that results in 38W × 2°C/W = 76°C rise. <S> Using a pulsed inductor on the input will store energy to reduce conduction losses. <S> Or you can improve your heatsink thermal resistance (e.g. CPU cooler). Search for PWM buck reg & LM317 solutions. <A> Your transistor is dissipating nearly 40W of power.
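A sketch of the dissipation and junction-temperature estimate for the measured figures; the thermal resistances below are assumed illustrative values, so check the TIP36C datasheet and your insulator's specification:

```python
# Power and junction-temperature sketch for the figures in the question.
# The thermal resistances below are ASSUMED illustrative values; check
# the TIP36C datasheet and your mica washer's specification.
v_in, v_out = 39.0, 0.6   # volts, as measured in the question
i_pass = 1.0              # amps through the TIP36C

p_diss = (v_in - v_out) * i_pass   # ~38.4 W in the pass transistor

r_cs = 0.5     # C/W, case-to-sink (mica + paste, assumed)
r_jc = 1.0     # C/W, junction-to-case (roughly TIP36C class, assumed)
t_sink = 46.0  # C, measured heatsink temperature

t_case = t_sink + p_diss * r_cs      # close to the measured 64 C
t_junction = t_case + p_diss * r_jc  # inside a typical 150 C limit
print(round(p_diss, 1), round(t_case, 1), round(t_junction, 1))
```

With these assumed resistances the case estimate lands near the measured 64 C, which also suggests why a bigger heatsink barely helps: most of the temperature gradient sits across the transistor-to-sink interface, not the sink itself.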
<S> Question 2 - <S> The temp is the same because only so much heat can effectively pass from the transistor into the heatsink with your configuration. <S> Question 3 - Yes you <S> can operate without the mica insulator - BUT, you must leave the heatsink ungrounded and it will have the power supply's output voltage on it. <S> I would not advise doing that. <A> You've made a mistake somewhere because Q1 shouldn't turn on at all in that circuit you've drawn. <S> \$V_{BE}\$ will always be zero. <S> This is the more conventional way to use a PNP pass transistor with an LM317:
| So in answer to question 1 - yes that temp is probably "normal". If you want to reduce the operating temps in your "worst case" operating condition, I would suggest you use two TIP36 transistors in parallel, giving you twice the thermal contact to the heatsink, doubling your effective heat dissipation.
|
Boost converter, how does diode allow current from inductor to flow towards capacitor? So I watched this video to understand how DC-DC boost converter works. Here's circuit diagram: And I don't get, when switch is off, and polarity in the inductor is such that positive is on the right, negative is on the left. Then in order for diode to conduct, anode should be at higher potential than cathode. In other words, voltage across inductor needs to be higher than voltage across capacitor. Now video said, there's gonna be a spike in voltage in the inductor after switch is off due to collapsing magnetic field to keep current constant, but how big is this spike? And wouldn't charge, and therefore voltage across capacitor be so big, that the spike in inductor will no longer be higher, so diode would not conduct at all? Correct? <Q> The whole system starts when the FET is on, i.e. Vds = 0v in this situation. <S> For a certain period called Duty Cycle the inductor is gonna charge up with the increasing current that flows through it, and like that it will store energy. <S> As the FET is turned off, the current would normally decrease, but, as there´s a inductor it will try to force the current to same direction it was flowing due to Lenz´s Law. <S> As a result of that, the voltage in the inductor will have changed its polarity in order to keep current flowing to the same direction and then the voltage will be greater than the input(9V) voltage. <S> If the circuit is working under Steady State(SS) conditions then the current that was flowing through the inductor will have the same variation. <S> In discharge, as the voltage across the FET is now greater than input voltage, the diode will be successfully biased. <S> The spike you talked about is going to be as big as your duty cycle and your load, because inductor tries to keep the current constant whatever resistance it sees ahead, then causing the voltage to go up. 
<S> Regarding the duty cycle, in ideal models the appropriate formula is given by Vo/Vi = 1/(1-D), which states that the greater your duty cycle, the greater your output voltage. <A> An inductor works on the following fundamental principle: \$V = L\frac{di}{dt}\$ <S> Where \$\frac{di}{dt}\$ is the rate of change of current in the inductor. <S> If you apply a voltage across an inductor (by grounding one end for instance), current will climb at the rate V/L amps per second and, while doing so, energy will be stored in the magnetic field. <S> When the inductor is disconnected from that ground connection, the stored energy pushes a current out of the inductor in the same direction. <S> That current tries to maintain its value but it can't, and so the current starts to fall. <S> This means that the rate of change of current (\$\frac{di}{dt}\$) is negative. <S> This generates a negative voltage across the inductor terminals: \$V = L\frac{-di}{dt}\$. <S> The input side of the inductor is "tied" to the incoming supply voltage; hence, the switched side of the inductor (previously at ground) generates a voltage that is greater than the incoming voltage - this is the voltage reversal seen across the inductor, i.e. this is a negative voltage compared to when the inductor was grounded by the transistor. <S> This "greater voltage" rapidly rises (in order to push current out) and when this rapidly rising voltage equals the voltage across the output capacitor (plus one diode drop) it finds a "load" to dump the current into. <S> From this point on the output voltage acquires a level suitable to carry on pushing current into the capacitor until all the magnetic energy previously stored is depleted. <A> When the FET that was sinking current with 0V across it is released, the inductor creates a potential in the opposite direction at the same current, unless clamped by the cap or battery and the diode when it is forward conducting.
<S> The current decays with a time constant of L/ESR, where ESR is all the loop series resistances including the diode and cap. <S> Consider the FET and diode as a SPDT switch with a ramp-up current and ramp-down current that raises the cap voltage. <A> When the FET is ON, energy is stored in the inductor as a magnetic field. <S> When the FET is OFF, this magnetic field collapses and induces current back into the inductor winding, in the same direction. <S> It has to go somewhere, so the fast-reacting diode forces it to the output side where a filter capacitor smooths it out into a DC voltage. <S> These circuits can be very efficient because the power source is able to use the diode as well, so this circuit boosts the voltage higher than the input voltage. <S> That is the simple answer. <S> The math wizards will complain about "but this goes this way", etc. <S> The value of the inductor has much to do with how much boost you can get (as well as FET 'ON' time, or duty cycle), but high inductance values (>1mH) are not as efficient (Q) due to the DC resistance of the inductor. <S> (Q = L/R) <S> You can boost the voltage a thousand times if you like, but the current available will be reduced by the same amount, minus conversion losses. <A> I think you might be misunderstanding what exactly an inductor is. <S> The voltage across an inductor determines the rate of change of current through the inductor. <S> When the FET is on, the input voltage is applied across the inductor, which causes the current to increase until the FET is switched off. <S> When the FET is switched off, the current will flow through the diode into the capacitor. <S> It will flow, because current through an inductor doesn't just stop instantaneously, and that current has to go somewhere. <S> The capacitor terminal will normally (or soon) be at a higher voltage than the source. <S> This difference is applied across the inductor, causing the current to slow down.
<S> This is the "voltage spike", but the particular voltage is not chosen by the inductor. <S> It's just the voltage that results from stuffing current into the capacitor. <S> The voltage at the inductor terminal has to be the capacitor terminal voltage plus the diode voltage drop, and realistically a little more due to stray resistances.
| You could say the diode adds the idle flow current to the inductor's current, thus raising the voltage.
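As a numeric illustration of the ideal transfer function Vo/Vi = 1/(1-D) quoted in the answer above (values are illustrative; a real converter also loses a diode drop and resistive losses):

```python
def boost_vout(vin, duty):
    """Ideal boost converter output: Vo = Vi / (1 - D), for 0 <= D < 1.
    Ignores the diode drop and all resistive losses."""
    if not 0 <= duty < 1:
        raise ValueError("duty cycle must be in [0, 1)")
    return vin / (1 - duty)

# 9 V input, as in the question's example
for d in (0.0, 0.5, 0.8):
    print(f"D = {d:.1f}: Vo = {boost_vout(9.0, d):.1f} V")
```

The output capacitor voltage settling at Vi/(1-D) is exactly the equilibrium the question asks about: the inductor spike rises just high enough to forward-bias the diode at that level.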
|
Is a single wrist strap enough against ESD? I bought 2 anti-static gloves and 1 grounded wrist strap to protect the parts against ESD when assembling computers. I was wondering whether a single wrist strap is enough, because I tend to touch parts with my right hand too. <Q> Grounding yourself is only part of mitigating ESD. <S> You can still cause damaging discharges while properly grounded when the device you're working on is itself charged, when the tools you use (like a multimeter or screwdriver) are carrying a charge, or when your workbench/parts bin is charged. <S> Ideally you'd ground everything that can come into contact with sensitive devices: Yourself <S> Any surfaces where you may lay down your board and/or components <S> Any tools and equipment that may contact the device (soldering irons, electronic test equipment, screwdrivers, etc.) <S> This grounding can be accomplished either through your body (as is done with a dissipative screwdriver, side cutters and the like) or a dedicated grounding wire (e.g. an antistatic mat on your bench). <S> Grounding leads must contain a high value resistor, so that you won't get shocked via your strap when accidentally coming into contact with high voltages. <S> Grounded surfaces should be resistive (e.g. ESD mat, dissipative tray), instead of highly conductive (e.g. steel box, aluminum plate), so that any discharges that do occur will get current limited. <S> Do you really have to go through all this trouble? <S> It depends. <S> I personally don't bother taking all these precautions with hobby electronics or while taking my PC to bits (at my own risk). <S> However, I do take this seriously at the workplace <S> and I think you should do the same. <A> As the question is written, "is one wrist strap enough", the answer needs some qualifications. <S> If you are concerned about touching parts with your other hand, then the body's conductivity will provide a discharge path for the entire body even with one wrist strap.
<S> So the answer is "YES". <S> However, this answer assumes that the rest of the assembly environment is designed in full accord with ESD protection methods. <S> Check some articles on ESD protection for how to accomplish this. <A> There is really no answer to this question as written. <S> Since ESD protection is really a probability thing, "enough" is meaningless. <S> There are also many variables that affect how much ESD you may be carrying. <S> What you are wearing, what kind of shoe or sock you are wearing, carpet or tile, dry air or humid, dry skin or moist, hairy wrist or bald, how much you are moving around... it all affects the formula. <S> "Enough" also depends on what you are working on. <S> Some devices are relatively insensitive to ESD. <S> Others are so sensitive you need to work under ionized air. <S> However, it doesn't hurt to get into the habit of touching something grounded with the other hand before you unpack a device or touch a circuit board. <A> One grounding point on your body is sufficient to drain any charge on your skin. <S> We are not insulators; our skin is conductive and any static electric charge on your body will equalize over your entire body within microseconds. <S> You can dispense with a ground strap entirely by using ESD conductive shoes and working on a conductive floor or ESD mat that is grounded, and using a grounded ESD mat on your work bench. <S> It is also advisable to wear an ESD smock over your clothing, as your clothing is actually one of the primary sources of static charge.
| Ideally, you need an ESD mat, and an assembly table with an ESD-qualified surface, all properly grounded with leads having 500k+500k "safety" resistors. In general, if your wrist strap is making good skin contact, you are pretty safe to work on general electronics.
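To see why the 500k+500k safety resistance in a strap lead is still fast enough to bleed off a charge, here is a rough sketch; the 100 pF body capacitance and 8 kV starting voltage are typical-figure assumptions, not values from the answers above:

```python
import math

def discharge_time(r_ohm, c_farad, v_start, v_end):
    """Time for an RC discharge to decay from v_start to v_end:
    t = R * C * ln(v_start / v_end)."""
    return r_ohm * c_farad * math.log(v_start / v_end)

R = 1e6      # 500k + 500k safety resistors in series
C = 100e-12  # assumed human-body capacitance, ~100 pF
t = discharge_time(R, C, 8000.0, 10.0)  # 8 kV body charge down to 10 V
print(f"tau = {R * C * 1e6:.0f} us, time to reach 10 V = {t * 1e6:.0f} us")
```

Even through a full megohm, the body discharges in well under a millisecond, so the safety resistor costs essentially nothing in ESD performance while limiting shock current.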
|
What is the purpose of the capacitors in this circuit that are in series with this VCO? I am learning about how VCO modules operate and I am rekindling an old interest in electronics in general. For one particular VCO, the manufacturer has provided a circuit diagram. Does anybody know the purpose of the capacitors in series between the tuning voltage source and the mains voltage source? <Q> Your terminology is incorrect, but the shunt caps on the supply and tuning voltage are there to provide a low impedance to noise over many decades of frequency. <S> Often the Self Resonant Frequency, or SRF, limits the useful RF spectrum and a smaller 1nF is needed to take over when the 10uF becomes inductive. <S> Hence these parts are 5 & 4 decades apart in value to extend the low impedance to ripple for potentially 8 decades in bandwidth. <S> Each source Zo(f) must be known to demonstrate precisely. <S> Superimposing the impedance charts may be more instructive. <S> There are many similar answers on this site if you search: shunt caps, SRF. <S> It is often extremely important to remove all ripple/noise voltage to reduce phase noise on the VCO. <S> However, if it were used in a PLL where negative feedback requires more bandwidth to track a lower noise XO & phase detector, then the Vtune caps may be smaller. <A> They are decoupling capacitors to keep the VCC power rails devoid of high frequency noise. <S> Without them, the frequency of the output wave will fluctuate with the noise on the power rails. <A> Noise on either of those will cause the generated frequency from the VCO to be noisy. <S> That is, the frequency will wander up and down in response to the noise. <S> Your generated signal would be frequency modulated by whatever crap is on the power supply or the tuning voltage. <S> You want a clean signal, so you need a clean power supply and a clean tuning voltage.
| The clearest thing to say about those capacitors is that they are there to remove noise on the power supply and the tuning voltage.
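The point about the 10uF becoming inductive above its SRF can be sketched numerically. The ESL and ESR figures below are illustrative guesses for typical MLCCs, not values from the question:

```python
import math

def cap_impedance(c, esl, esr, f):
    """Impedance magnitude of a capacitor modelled as a series R-L-C."""
    w = 2 * math.pi * f
    return math.hypot(esr, w * esl - 1 / (w * c))

def srf(c, esl):
    """Self-resonant frequency: where the ESL cancels the capacitance."""
    return 1 / (2 * math.pi * math.sqrt(esl * c))

# Assumed parasitics: 10 uF with 2 nH ESL vs 1 nF with 0.5 nH ESL
f_srf_big = srf(10e-6, 2e-9)
f_srf_small = srf(1e-9, 0.5e-9)
print(f"10 uF SRF ~ {f_srf_big / 1e6:.2f} MHz, 1 nF SRF ~ {f_srf_small / 1e6:.0f} MHz")
```

Above roughly 1 MHz the big capacitor's impedance rises (it behaves as an inductor), which is exactly where the small 1 nF part, with its SRF some two decades higher, takes over.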
|
Fundamental difference between microcontroller and PLC I am a teaching assistant for a sophomore mechanical engineering laboratory course. One of our lab modules is focused on PLCs, and another uses Arduino microcontrollers. One of my students asked why they are different and I found myself without a good answer. They both have input and output pins, perform logical operations, and are programmed via regular computers. They can be used to control the same equipment, read the same sensors, and make the same logical decisions. The differences in programming languages and the fact that they look and feel very different seem superficial. Is there a fundamental difference between the two, or is a PLC just a microcontroller designed to work in an industrial environment? How can I explain this difference to a sophomore college student with no background in electronics? A good answer will either answer my question or offer a useful frame challenge; I may be asking the wrong question entirely. <Q> I would describe the difference between a PLC and a microcontroller as follows. <S> Yes, a PLC is a microcontroller "ruggedized and adapted for the control of manufacturing processes". <S> The difference is that a PLC usually has a user-friendly wrapper already built-in, which provides easy and reliable access to various IO functions, ADCs and DACs, timers, alarms, etc. <S> The programming interface is tailored to industrial control functions, and is not unnecessarily overflexible like BASIC or C. <S> If I try to provide a smooth one-line formulation, PLCs are microcontrollers with a pre-compiled interface to standard I/O functions and control loops. <A> If engineering NRE time is billed at a $150 rate, it is easy to see why a $100 PLC is chosen in a complex network over a $1 PIC uC or even a $25 Uno. <S> The result is reduced software development, no need to isolate EMI issues with shields, free Windows tools to click through changes and compile code, and interoperability.
<S> Saved time and improved reliability in harsh environments with proven specs. <S> The differences are significant to the experienced and unknown to the naive. <S> Any newbie who thinks they can do the same with any Arduino either hasn't done it before or has a lot to learn (or both): i.e. create an industrial SCADA with many PLCs, verified to every environmental stress in the spec while conforming to every functional spec, demonstrating fault detection, correction and tolerance, and with ease of maintenance. <A> PLCs could be much dumber than microcontrollers; in principle they didn't necessarily have to have memory <S> (although I reckon most of them did), or be Turing complete. <S> They only had to have enough configurable logic gates to replace a bunch of relays. <S> But that also means that PLCs were faster than microcontrollers, for what that's worth, and also more reliable. <S> Simplicity can have its own value. <S> PLCs also have a user interface to set them up; microcontrollers don't. <S> Today, the differences are smaller and fewer. <S> They're largely due to the history of the tech, and to the differences in usage. <S> Relays/contactors comprise the outputs of a typical PLC, while the outputs of a microcontroller are a bunch of CMOS gates. <S> This is a significant practical difference, as these outputs have very different electrical characteristics. <S> So in the sense of being able to turn outputs on and off based on inputs, there is no fundamental difference between a microcontroller connected to a user interface and a bunch of relays, and a PLC. <S> On the other hand, that's a bit like saying that there is no fundamental difference between a railway train and a motorcycle: After all, both have rotating wheels that let them transport stuff along the surface of the earth, they both have an engine, a driver, and they both consume fuel.
<S> Whether or not the differences between them are fundamental depends very much on your definition of the word fundamental. <A> A PLC has a specific set of functions that are designed for simple to medium complexity machine control; PLCs do that well, are rugged, are Lego-like, and as such need minimal development effort. <S> A micro is a level below that. <S> It is a basic building block on which you can develop extremely complex control systems which include higher order data analysis and manipulation. <S> However, that comes at a cost of development and portability. <S> Most modern PLCs do indeed have a micro inside them; some even have a whole embedded PC in there, with a wrapper of software that provides the expected functionality. <S> As the technology has developed, the functionality of PLCs has also advanced, so the edges have become more blurred over time. <S> If you want a house, you can order one and it will arrive on the back of a truck or two and a crane will stick it on your lot and you are good to go. <S> However, if you want a house with two and a half baths, a cathedral ceiling, and a long list of other fine details, you are going to need the truck to deliver lumber, bricks and mortar etc. and have someone build it exactly the way you need it. <S> The same goes for PLCs vs micros. <S> Further, if you want to go into the portable house building business, you generally don't start out by buying someone else's prefabs. <S> The trick is to know when to use which. <A> To me, your "superficial differences" are in fact the fundamental differences between a microcontroller and a PLC. <S> A PLC is a complete, ready to use, appliance. <S> It comes in a nice package, ready to be plugged into AC power and with suitable interfaces and easy connections to sense and control real-world devices. <S> It will have a fairly simple application-specific programming environment - typically "ladder logic". <S> A microcontroller is a programmable integrated circuit.
<S> The user must build or provide a power supply and the necessary interfaces to real-world devices, and write a program using a general-purpose programming language like C. <S> The programmer will have to become intimately familiar with the internal workings of the microcontroller to write the program. <S> A microcontroller could be used as the "brains" of a PLC.
| A modern PLC can have a microcontroller for a brain. Historically, the difference is significant. A PLC allows time to be saved with higher level programming such as Relay Ladder Logic (RLL) or Stage programming.
|
FPGA: intentional delays through manual placement/routing In my FPGA design, I have some input signals that need to be delayed considerably before they reach the first clocked register. There are delay elements near the pins for exactly that purpose, but their maximal delay is still too low. So I want to force the signals on a detour through the FPGA in order to achieve the required delay. I realize this is not "by the book", but maybe others have been in the same situation. Question: Are there some "best practice" rules for this kind of thing? Provided that the required delay can be achieved "on paper" in both ways, which is to be preferred: signals travelling great distances or signals going through combinatorial logic? What kind of logic is best suited for that purpose (LUT, carry chain,...)? If you're an experienced designer and can't think of any difference, that too would be useful information. I'm using an Altera Cyclone V, but the question should be answerable in general. <Q> That can get me granularity in the sub-5ns region with a cheap FPGA. <S> This document from Lattice describes how to do exactly what you are asking (fool the compiler into not optimizing out the buffers). <A> Synchronizing the input and delaying it with the appropriate number of flip-flops would be the way I'd implement it. <S> What is your clock frequency and required input delay? <S> Is a certain variance in delay acceptable? <A> As you said: that is not recommended. <S> I am also not sure that it is needed. <S> The main reason to delay an input signal is so that it does not fall within the set-up and hold time of your register. <S> The I/O pad delay should be more than enough for that. <S> Another way to deal with the problem is to see if you can use the opposite clock edge. <S> It is not ideal but sometimes you have to. <S> I have the feeling that this is another case of "Tell us your problem, not your solution".
<S> If you want to add another delay you have to use something which cannot be optimised out. <S> You can try to use an adder where the second input can be "programmed". <S> You just never program it with any value other than zero. <S> Something like: reg [x:0] dummy; assign delayed_signal = signal + dummy; always @(posedge clk or negedge reset_n) if (!reset_n) dummy <= 'b0; else if (load_dummy) dummy <= {dummy, dummy_input}; <S> Now you have to make sure that "load_dummy" and "dummy_input" are valid signals which somehow can change but never do, e.g. from a CPU register or from two input ports which are tied off. <S> But I suspect the delay will: Be very small. <S> Not be the same for all bits. <S> A better alternative, very accurate but more difficult, is to use an internal high speed clock to 'clock' the signals through. <A> It's quite difficult to stop the PAR 'optimising out' intentional delays. <S> While this suggestion is not 'best practice', it works as long as you have some spare pins. <S> Choose a number of pins on opposite sides of the device, and put their pin numbers into the constraints file. <S> Then run your signal from one to the next I/O. <S> The PAR will be forced to route the signals across the chip and back. <S> There's no need to actually have the signals driving the external pins; you can keep the outputs tri-state, although a sniff of capacitance on the external pin can add more delay if you do route it through the pin itself. <A> Yes, there is a best practice: <S> Do not delay in combinatorial logic! <S> If you are not using double-data-rate, then choose the appropriate edge of the external clock. <S> In case of double data rate --- or if the external clock is plesiochronous to the internal logic --- use a PLL to recover and phase-shift the external clock. <S> Then put a dual clock FIFO between the new clock domain and your internal logic. <A> You could use a counter and a buffer?
<S> Put your data in a buffer and reset the counter. <S> Use whatever clock and counter value you need to give you the required delay, then use a comparison of the counter output to enable the buffer output so that it spits out the data, or clocks it into wherever it's supposed to go?
| What I've done in a recent design is to multiply up the clock frequency and use a clocked delay.
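Following the flip-flop-chain suggestion above, sizing the register chain for a target delay is a one-liner; the 200 MHz clock and 25 ns target below are made-up example numbers, not from the question:

```python
import math

def delay_stages(required_delay_ns, clk_period_ns):
    """Number of flip-flop stages needed when each stage adds one clock
    period of delay (the first stage also adds up to one period of
    sampling uncertainty, which is the 'variance' the answer asks about)."""
    return math.ceil(required_delay_ns / clk_period_ns)

# e.g. a 25 ns delay with a 200 MHz (5 ns) internal clock
print(delay_stages(25, 5))
```

This is why the answers ask for the clock frequency: a faster internal clock gives both finer delay granularity and less variance.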
|
Pull-up or pull-down when building an LED ROM for 74HC logic levels? I want to build an LED ROM (a diode ROM, but using LEDs). The LEDs will be driven from the outputs of a 74HC138 or similar. They will drop 1.6 - 1.7 V (red). The ROM will have eight words of memory, each word being 12 bits wide (because that's what I need). Each bit will be an LED socket. The presence of an LED indicates either a 0 or a 1 (depending on pull-up/down), and the absence indicates the opposite value. When a word is accessed, the value's LEDs will light. Is this feasible with 74HC levels and driving characteristics, without further components? Should I pull up or pull down? (If there is no difference, I will pull up, as that will light the accessed 1s, which seems more intuitive.) <Q> This idea, though cute, will not work reliably as is. <S> The issue is LEDs have a typical forward voltage of 1.6V. <S> With pull up, that puts the low level output at 1.6V, which is over the max Vil threshold for HC. <S> So pull up is out. <S> With pull down, the high level is 3.4V, which will be above the min Vih, but is really close, especially when you subtract whatever you need to buffer the signals (see below). <S> An additional transistor detector may be required here; see Spehro's answer. <S> Further, in order to get any decent light out of the LEDs, the resistors will need to be small and the decoder will not be able to drive a full row of LEDs at once. <S> You will likely need to add some sort of transistor to the decoder to push that much current. <S> All told, something like this may do it for you. <S> LED present = 1 out. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> A red LED will drop a couple of volts, so you'll likely need something else in there. <S> Other colors require more voltage. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The resistors R3 (and similar ones for the other LEDs) determine the LED current.
<S> One or more LEDs (up to 10 or more) will turn the transistor on, which pulls the input to the inverter low, so the inverter output goes high when one or more of D1..D3 is lit. <S> An ordinary 74HC output with 5V Vcc can drive several LEDs with the shown series resistor with little drop (current is about 2-3mA each, which is plenty for a modern LED). <S> If you want to drive more, either reduce the LED current by increasing the resistors or use a more complex circuit like Trevor's answer (or add 20-30mA buffers to the output of the AND gates). <A> First off, using an LED will give a voltage drop of about 1.8V to 3V (depending on the LED type, or color). <S> So as a result your output voltages might not reach a compatible high or low logic level. <S> Your concept could only work if you made adjustments for the offset logic levels. <S> One possibility would be to use an analog comparator on each output and set a unique logic level. <S> Try building or simulating the circuit and measure the actual output voltages. <A> There are a couple of things we need to be careful of: (1) the LEDs will have significant volt drop; (2) we need enough current in the LEDs to see them. <S> To get around (1) we have the LEDs pull up and the resistors pull down. <S> Then for the next stage of logic after the ROM we use HCT logic. <S> Regarding (2), apparently HCT logic can drive about 25 mA, so we get about 3 mA per LED for an 8-bit wide ROM. <S> That should be enough to make them visible though not especially bright. <S> In summary, I think this is feasible.
| Whether to use a pull-up or pull-down resistor depends on whether you want the LED to be on or off for a high or low output (your circuit with pull-down resistors would give an LED on for a high output).
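The current-budget arithmetic in the answers (a ~25 mA output shared across a word, a few mA per LED, and the series resistor that sets it) can be sketched as follows; the 5 V rail and 1.7 V red-LED drop come from the question:

```python
def led_series_resistor(vcc, vf, i_led):
    """Series resistor that sets the LED current from a logic output:
    R = (Vcc - Vf) / I."""
    return (vcc - vf) / i_led

def current_per_led(total_drive_a, n_leds):
    """Drive current available per LED when one output feeds a whole word."""
    return total_drive_a / n_leds

r = led_series_resistor(5.0, 1.7, 3e-3)  # 5 V rail, red LED Vf ~ 1.7 V, 3 mA
i = current_per_led(25e-3, 8)            # ~25 mA of drive across an 8-bit word
print(f"R = {r:.0f} ohm, {i * 1e3:.2f} mA per LED")
```

About 1.1 kohm per LED and ~3 mA each, matching the "visible though not especially bright" conclusion above.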
|
Noise from test pins on signal lines? Are test pins on sensitive signal lines likely to cause problems? For example, if there is a test pin sticking up from the board on a line going to a 20 bit ADC, is there a danger of increasing the noise? Has anyone experienced such problems? <Q> The only answer is "it depends". <S> If you use a 20 bit, 100Hz ADC, proper filtering would mitigate any test points. <S> Other than that, more detail is required. <S> With 20 bits I guess you are mostly afraid of the mains 50Hz coupling. <S> But other sources of interference may be there as well. <S> Best advice: do everything the best you can. <S> Use good cables and balanced differential lines, and keep budget for a second layout. <A> For example, if there is a test pin sticking up from the board on a line going to a 20 bit ADC, is there a danger of increasing the noise? <A> Here is a 22-bit ADC, after 3 opamps providing Av = 5,000 to make a 1 milliVolt input become 5 volts into the ADC. <S> The key is the LowPassFilter right before the ADC, with a large capacitor to shunt electric fields to GND, and to shunt millivolts of random thermal noise to GND. <S> That LPF is 10Hz: 16Kohm and 1uF. <S> I activated 3 of the 4 Electric Field interferers inside Signal Chain Explorer; one is the MCU clock, one is 60Hz sine, one is 60Hz spikes. <S> And I activated the Gargoyles mode, with Interconnects (I/C button) also enabled, which brings a 14mm long PCB trace into the connections between each stage. <S> 14mm is approximately the size of the test pins you asked about. <S> What is your ENOB (effective number of bits)? <S> 15 bits. <S> Not 22. <S> Cure? Provide a signal of 10 milliVolts, and get 3+ more bits. <S> Or provide a signal of 100 milliVolts, use a gain of only 50X, and expect approx. 22 bits.
| If the test pin is positioned in a place that bypasses the ADC anti-alias filters and the analogue BW of your ADC is high enough, you could certainly get aliasing of high frequency noise that is picked up from the test pin (aka a high frequency monopole antenna).
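The "10Hz: 16Kohm and 1uF" low-pass filter quoted in the answer checks out against the standard first-order corner formula:

```python
import math

def rc_cutoff_hz(r_ohm, c_farad):
    """-3 dB corner of a first-order RC low-pass filter: 1 / (2*pi*R*C)."""
    return 1 / (2 * math.pi * r_ohm * c_farad)

# the answer's anti-alias values: 16 kohm and 1 uF
fc = rc_cutoff_hz(16e3, 1e-6)
print(f"fc = {fc:.1f} Hz")
```

Anything the test pin picks up well above this corner (MCU clock, mains harmonics) is attenuated before the ADC, which is why the filter, not the pin, dominates the noise outcome.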
|
Accuracy of a multimeter over a 10 year period Datasheets of multimeters contain accuracy specifications. One parameter is usually accuracy over a period of 1 year. I understand that it means the multimeter can be off by the specified value. For example, Keithley 2000 for the 100mV range: 1 year accuracy = 0.005% (of reading) + 0.0035% (of range), or Siglent SDM2055 for the 200mV range: 1 year accuracy = 0.015% (of reading) + 0.004% (of range). But the question is what accuracy I have to consider over a period of 10 years? Do I have to multiply the "1 year accuracy" by 10? Or is it not nearly that easy (and not that bad)? No hobbyist is going to do calibration of his equipment every year. It would be useful to know how the accuracy shifts over a longer period of time. <Q> Generally that figure is defined because you are supposed to calibrate your equipment annually. <S> If you don't... all bets are off. <S> You cannot extrapolate from one to the other; plus, aging will not be linear. <A> Do I have to multiply "1 year accuracy" by 10? <S> Well, if you could use it without a calibration being needed, it's not strictly a case of multiplying by ten, because it's like the compound interest that a bank might charge. <S> So if it drifts +1% per year, over ten years you get \$(1.01)^{10} - 1\$ = 10.46%. <S> Doesn't sound too bad, and for tighter tolerances you can certainly approximate by multiplying by ten. <S> But you do need regular calibrations for this type of equipment, else what is the point of using it? <A> The simple legal answer is they owe you this accuracy for a year. <S> If the meter fails this within a year you have a warranty claim. <S> After a year (absent another specification) you are on your own. <S> The extreme engineering approach would be for the manufacturer to require drift specs from every vendor and do an error analysis that supports the claim. <S> You can guess as well as I can whether they have done that. <S> After a year they have not made a promise.
<S> Maybe there is a drift proportional to time^2 or a higher power so things go to pot shortly after one year. <S> In an extreme theory even frequent calibration will not solve this problem. <S> Practically, shorting the leads together will detect offset errors. <S> It won't help with gain errors. <S> We might measure 1.456 volts on one point and 1.358 on another. <S> Sometimes what we care about is that the first is higher than the second. <S> In practice any time I got that from a meter I would count on the ordering of them, but I wouldn't count on the difference being 0.098 volts. <S> Usually the first is the important fact, not the second. <S> Relative values are much easier than absolute. <S> Otherwise you need to develop the skills to understand what you know and what you don't. <S> In practice a 10 year old meter is very useful, but you can't justify it from the specs. <A> Assume you bought the meter 10 years ago, or calibrated it 10 years ago, then you get a year of measurements within spec, or the meter is broken. <S> After 1 year and 1 day? <S> The Manufacturer makes no claim. <S> If you want to claim a spec and can support it, go ahead. <S> But it's on you. <S> IF you measure the same thing with the same meter for 10 years, without re-calibrating, and then re-calibrate and measure again, then you've got a one-point study of long term drift. <S> Don't forget to include the long term drift of whatever you're measuring. <S> You could look at the data in the uncalibrated interval and draw conclusions about it. <S> But that's your calibration, in the interval, not the manufacturer's calibration. <S> Re-calibrate the meter after 10 years, measurements will be within spec. <S> for one year, again. <S> Drift over the 10 year period isn't an issue. <S> But the maximum allowed, in opposite direction, is the worst possible case. 
<S> IF you use a meter that was calibrated once, for 10 years, without re-calibrating, measuring various values, then you've got 9 years of data from an uncalibrated meter. <S> It may be better than random numbers. <S> To know how much better, you need to measure references to establish accuracy now, or re-calibrate and repeat prior measurements to characterize repeatability, allowing for source drift. <S> Either way, accuracy in the uncalibrated interval on your shoulders. <S> The specifications quoted are really good. <S> If you expect to realize that performance, you have to maintain calibration. <S> If you want a hobbyist quick check, short the two inputs together. <S> That better be 0.0000 volts, 0.0000 amps and 0.0000 ohms. <S> Beyond that, you need a voltage reference, a current reference and a resistance reference. <S> A low drift resistor is not an unreasonable thing for a lab, but why not just get the meter calibrated, or learn to calibrate it yourself, at that point? <S> Before you start shopping for voltage and current standards that are 2-10 times better than the meter spec. <S> They aren't cheap, and they have calibration requirements themselves!
| If you need absolute accuracy, you need to be calibrating often and doing careful error analysis. If you measured with a calibrated meter 10 years ago and measure with a calibrated meter today, each measurement will be within spec, and it's likely any difference in the measurements is less than the maximum allowed, in opposite directions.
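The compound-drift arithmetic from the answer above, as a small sketch:

```python
def compound_drift(annual_drift_pct, years):
    """Worst-case drift compounded annually, returned in percent:
    ((1 + d)^n - 1) * 100, analogous to compound interest."""
    return ((1 + annual_drift_pct / 100) ** years - 1) * 100

d = compound_drift(1.0, 10)
print(f"+1 %/year over 10 years -> {d:.2f} %")
```

For the sub-0.1 % annual drifts typical of bench meters, the compounded figure is within a rounding error of simply multiplying by the number of years, as the answer notes.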
|
Swapping positive and negative traces of an LVDS oscillator Can I swap the positive and negative traces of an LVDS oscillator (Si series of Silicon Labs) when connecting to the clock pins of an FPGA? If not, what about adding series capacitors and using an AC coupling technique? <Q> Yes. <S> So your connections would be: OSC_P --> FPGA_CLK_N and OSC_N --> FPGA_CLK_P. <S> The LVDS driver circuits for _P and _N are identical. <S> So are the LVDS input receiver circuits for _P and _N, <S> so both signals see the same load and termination. <S> The FPGA's internal single-ended form of the clock will then be 180 degrees out of phase with the external LVDS form of the clock. <S> You don't need to AC couple them to do this. <A> As long as the oscillator is only routed to one location, you can swap the pins. <S> As far as the FPGA is concerned, it receives a clock stream that is shifted half a clock cycle from what is presented externally, but that really doesn't matter. <S> Assuming that you are just using the oscillator as the clock source for internal logic, the FPGA just needs a stream of clock edges; it doesn't care if they are shifted. <S> If you are monitoring FPGA activity with respect to the external clock, you will need to keep in mind that everything is shifted by half a clock. <S> If the clock is routed to more than one place, as long as all of them are swapped the same, then there is no issue, since all clocks will still be in phase. <S> I'm not sure what AC coupling has to do with clock polarity. <S> You can always add AC coupling, as long as the FPGA side of the AC coupled clock has something to set the common mode to the correct voltage for the FPGA input threshold. <S> Some FPGA inputs have input bias circuits that do this without external components. <A> The LVDS driver uses a common current source of 3.5mA, switched to either polarity output, to produce 350mV into a 100 Ohm load while the other signal is switched to 0V.
<S> Both polarities are positive switched currents which give positive switched voltages into fixed R.
| You can swap polarity of the LVDS outputs from your clock oscillator module when connecting them to your FPGA.
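The polarity-swap argument above can be illustrated with a toy model; a minimal sketch (plain Python, all values illustrative, no real LVDS involved) showing that swapping the _P and _N lines only inverts the recovered clock while the stream of edges is unchanged:

```python
# Model one LVDS lane as complementary logic streams.
clock_p = [0, 1, 0, 1, 0, 1, 0, 1]          # ideal clock on the _P line
clock_n = [1 - b for b in clock_p]           # _N always carries the complement

def edges(stream):
    """Count transitions (rising + falling) in a bit stream."""
    return sum(1 for a, b in zip(stream, stream[1:]) if a != b)

# Correct wiring: receiver output is high when P is above N.
normal = [1 if p > n else 0 for p, n in zip(clock_p, clock_n)]
# Swapped wiring: receiver sees the inverted clock.
swapped = [1 if n > p else 0 for p, n in zip(clock_p, clock_n)]

assert swapped == [1 - b for b in normal]    # only the polarity flips
assert edges(swapped) == edges(normal)       # same stream of clock edges
```

Internal logic clocked from either stream sees the same number of edges, which is the point made in the second answer.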
|
How can I use my high impedance crystal radio headphones in regular nowadays appliances like mp3 players, cellphones and computers? I have old crystal radio high impedance headphones, and I would like to be able to use them to listen to music on my music player as well as to use them on my PC. What kind of circuit do I have to assemble to make it work with low power DC, if that is even possible? <Q> Have you tried connecting them straight to the headphone port? <S> Depending on the sensitivity of the headphones, they may actually deliver acceptable volume. <S> If not, you'll need something to raise the voltage. <S> You can do this in so many different ways. <S> One method is to use something with a pro audio level line output - like a mixer. <S> A balanced pro audio line out delivers a signal up to 60Vp-p which should be enough for the headphones. <S> The turns ratio of the transformer depends on the sensitivity of the headphones. <S> You want something with a low impedance primary (8 to 32 ohms) and a high impedance secondary (several kohms or tens of kohms). <S> Normally, good quality audio transformers are expensive. <S> But considering the fidelity of crystal headphones won't be very high, you can just use a cheap transformer. <S> A third method is to connect the headphones to the output of a normal audio power amplifier. <S> You may be able to connect them to the headphone port, but certainly they will work on the speaker terminals. <S> The more powerful the amplifier the better, because you need a decent voltage. <S> You're not actually delivering any significant power to the headphones, but you do need the voltage that a high power amplifier delivers. <S> The fourth method is to build your own amplifier around an opamp (5532 or TL072 are good choices). <S> You will need to use a suitable power supply, probably +/-12V or more. <S> A PC power supply isn't a good choice because they're very low quality and noisy.
<S> You'll probably need to build your own supply. <S> You might be able to use two or three 9V batteries. <S> The gain required of the amplifier depends on how high the signal level from your audio devices is, and the sensitivity of the headphones. <S> You might use a pot to adjust the gain; as a starting point I'd suggest you'd need an amplification factor of around 6 times (about 16dB of gain). <A> They ought to work with no electrical modification (amplifiers or what have you.) <S> Attach them to a standard headphone plug and try it out. <S> You should be able to rewire them for stereo pretty easily. <S> Don't expect any kind of overwhelming sound quality. <S> They were built to the requirements of the equipment in use way back when. <S> Working with almost no power was a must. <S> Hifi audio wasn't. <A> Considerations: <S> I believe this is a copy of another question, or perhaps the answer to another question also answers this one. <S> See the following Stack Exchange answer by @wbeaty. <S> Essentially @wbeaty suggests using a transformer in series with the headphone to alter the voltage and current to match newer devices. <S> The idea here is that just as a transformer can be used to transform voltage and current, it can be used to "emulate" impedance such that a circuit sees a different impedance than actually is present (note this is done in transmission and distribution as well). <S> Calculations: <S> You would look into the impedance of your current headphones and then compare it to that required by a 3.5mm headphone jack and size the transformer accordingly. <S> $$ Z_s = Z_p \left({N_2 \over N_1}\right)^2 $$ $$ Z_p = {Z_s \over \left({N_2 \over N_1}\right)^2} $$ <S> Where N2 and N1 are the secondary and primary transformer winding turns. <S> Zs and Zp are the secondary and primary impedances.
<S> However you choose to pick your transformer and primary or secondary, you want to make sure that the side connected to the new jacks is the smaller impedance side. <A> Keep your old headphones. They are valuable now. Using a step up transformer for each ear <S> is fine; others have stated this. Your old phones are much, much more efficient than modern phones. When <S> acoustic transducers are designed there are compromises between efficiency and bandwidth. <S> New phones are supposed to do the full audio range. These old phones are designed for clear speech when receiving noisy signals. I <S> found in 1975 that 1 volt of audio, ballparked on an analog meter, from a simple ZN414 circuit with <S> a basic BC107 single ended amp provided more than enough volume for my 13 year old ears. <S> It was said then that 1mW was enough to drive the old phones. There <S> were many simple single ended transistor circuits running on low battery voltages that would drive such phones but would never drive a speaker. If <S> your phones are anything like what I remember in junk boxes in the 1970s <S> then a direct connection should work.
| Another method is to use a small audio transformer.
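The impedance-reflection formulas in the answer above can be sanity-checked numerically; a sketch assuming a hypothetical 2 kOhm crystal earpiece and a 32 Ohm headphone output (both values illustrative, not from the original post):

```python
import math

def reflected_primary_impedance(z_secondary, n2_over_n1):
    # Zp = Zs / (N2/N1)^2 : impedance seen looking into the primary
    return z_secondary / n2_over_n1 ** 2

def turns_ratio_for_match(z_secondary, z_primary):
    # Solve Zp = Zs / (N2/N1)^2 for the turns ratio N2/N1
    return math.sqrt(z_secondary / z_primary)

# Hypothetical values: 2 kOhm crystal earpiece, 32 Ohm source.
ratio = turns_ratio_for_match(2000.0, 32.0)        # N2/N1 is about 7.9
zp = reflected_primary_impedance(2000.0, ratio)    # reflects back to 32 Ohm
assert abs(zp - 32.0) < 1e-9
```

A roughly 1:8 step-up transformer would therefore make the high-impedance earpiece look like an ordinary low-impedance load to the player, which matches the low-impedance-primary advice in the first answer.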
|
What capacitor to get for relay coil? I refer to the diagram from this thread: "What is the use of the capacitor in this relay circuit?", which is also shown below. If I were to use a capacitor for my relay coil (12v latching type) which already has a flyback diode installed, what voltage rating, capacitance and type (e.g. electrolytic, ceramic) should I be looking at? Also, if I were to use multiple capacitors for my set-up, would combining different types of capacitors be ok (e.g. electrolytic, ceramic), or not recommended? <Q> Paralleling the coil with a capacitor can work the driver quite a bit harder (possibly causing it to fail) and may cause a brief dip in the 24V supply, which could cause glitches. <S> Chances are a reasonable value would be some nF and best served by a ceramic capacitor of adequate voltage rating. <A> I strongly suggest that you don't use a capacitor in this system; it's not AC <S> and you don't need to compensate the coil's reactive current. <S> On the contrary, it creates problems: <S> it creates a peak of current which may generate undesired spikes in the power line, it stresses the driver with peak current, and it slows the rise of voltage, making for a weaker connection as the contacts approach. <S> No one uses these capacitors in DC systems <A> Most people don't use one. <S> The diode is going to catch most of the energy when the relay switches off, so the capacitor is only needed for the short period before the diode starts conducting; if that's a problem, use a slower switch. <A> The capacitor is not needed. <S> The diode may also not be needed even if the relay did not have one. <S> The drivers I've used in the past had the diode built in. <S> See ULN5801 etc.
| If the driver is relatively slow or the current is limited it may be useful to reduce EMI from the coil, however the contacts usually dominate the noise and in any case the driver circuit will likely determine how big you can safely make the capacitor.
|
Using a capacitor instead of a coil for wireless charging? This might sound like a stupid question, but I'm kinda interested in a detailed explanation. So while inductive charging is rather ubiquitous, things like electric toothbrushes and Qi chargers come to mind, as well as, to a lesser extent, wireless charging pads for cars; they are all inductive charging systems, basically forming half of a transformer each (the charger is one half, the charged device the other half). But would it be technically possible to make such a charging setup using a large capacitor? I.e. having an AC source connected to a large plate, forming one half of a large capacitor. The charged device would then form the other half of the capacitor. I understand the distances would have to be rather small, but in cases like Qi chargers, they're right on top of each other anyway. Also, the dielectric properties of the materials between the two conductive plates would obviously play a role. But from a purely engineering perspective, why is it a bad idea? One problem I can come up with is using very high frequencies. The charger side of the capacitor would essentially be an antenna; with the size of a smartphone, that would mean frequencies in the 500MHz-3GHz range. I assume it'd be simply too complicated to use a signal generator at these frequencies to send sizeable amounts of power? <Q> First point: A transformer has a built-in return path, so you have a complete circuit. <S> That is to say, it has two wires on each side, so current can flow out through one and back through the other. <S> To make an equivalent device with capacitors, it would need to have two of them, one for out and one for return. <S> Next let's put some rough numbers on. <S> For a phone, you have an area of about 5000mm^2 for each capacitor. <S> If you put the capacitor plates near the surface, then you might get a gap of 1mm and the plastic case could be engineered with a dielectric constant of say 10.
<S> That works out about 0.4nF per capacitor. <S> Now lets think about transferring power through that. <S> At 1kHz the impedance would be 400Mohm. <S> At 1MHz, it would be 400kohm, and 1GHz it would be 400ohm. <S> That sort of high frequency causes lots of headaches around avoiding interference with other devices. <S> And at any real power, not microwaving the user. <S> And even if you go all the way up to 1GHz, then you still have to deal with a relatively large impedance. <S> To get a watt flowing through the capacitors will need 30V. <S> So the engineering problems are probably not insurmountable, but it is still much easier (and therefore cheaper) to use an inductive method. <A> But would it be technically possible to make such a charging setup, using a large capacitor? <S> (but that's what makes it unattractive). <S> If you had two parallel plate capacitors (for forward and return current) of 1" cross sectional area each and a spacing of 1 mm between phone and charger you would have a total series circuit capacitance of about 2.7 pF <S> (yes pico farads). <S> If you series tuned this capacitor with a 10 uH inductor you would get a resonant frequency of 30.6 MHz and pretty much a low impedance path this presents. <S> This means you can transfer power at relatively straightforward frequencies without a sizable "plate" of capacitance. <S> You don't need large capacitance. <S> The "oscillator" frequency could rely on the tuning of the L and C so if the capacitance was say 1.35 pF (twice the gap), the oscillator would naturally swing to a frequency of 43.3 MHz to keep tune. <S> If the phone is not present this can be easily detected so there wouldn't necessarily be an interference problem. <S> You could design the oscillator to not work beyond a limit frequency (say) <S> 50 MHz <S> and it would only swing back into action when the phone was stowed. 
<S> The difficulty is cost - making a 10 uH inductor that has low loss and a resonant frequency above 100 MHz isn't going to be as cheap as any old coil of wire running at 13 MHz coupling a magnetic field. <S> So, I would say mass-market cost drives down this idea and not technical feasibility. <A> Here is an experiment I did comparing coils to plates for lighting an LED across a ~1mm gap... <S> https://wp.josh.com/2015/07/11/the-other-way-to-do-wireless-power-capacitive-power-transmision-proof-of-concept/ <S> There have been cases where I've picked capacitive coupling over inductive coupling to charge batteries when the geometry of the product happened to offer large surfaces that were good places to put the plates. <S> A layer of foil or a sprayed-on conductive layer can be much cheaper and easier to manufacture than a coil. <S> Here is a commercial module able to transmit multiple watts capacitively across a gap with a small plate running at high frequency... https://www.murata.com/en-us/about/newsroom/techmag/metamorphosis16/productsmarket/wireless
| Yes, and almost commercially cheaper than inductive charging. It is not only possible, but practical for some applications.
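The numbers in the second answer (two roughly 1 in^2 plate pairs with a 1 mm air gap, series-tuned with 10 uH) can be reproduced from the parallel-plate and LC-resonance formulas; a rough sketch that ignores fringing fields, so it lands near, not exactly on, the answer's 2.7 pF and 30.6 MHz:

```python
import math

EPS0 = 8.854e-12            # vacuum permittivity, F/m

def plate_capacitance(area_m2, gap_m, eps_r=1.0):
    """Ideal parallel-plate capacitance C = eps0 * eps_r * A / d."""
    return EPS0 * eps_r * area_m2 / gap_m

def series(c1, c2):
    """Two capacitors in series (forward and return path)."""
    return c1 * c2 / (c1 + c2)

def resonant_frequency(l_henry, c_farad):
    """f0 = 1 / (2*pi*sqrt(L*C))."""
    return 1.0 / (2 * math.pi * math.sqrt(l_henry * c_farad))

in2 = 0.0254 ** 2                       # one square inch in m^2
c_each = plate_capacitance(in2, 1e-3)   # one plate pair, 1 mm air gap
c_total = series(c_each, c_each)        # out and return paths in series
f0 = resonant_frequency(10e-6, c_total)           # ~30 MHz ballpark
f0_wider = resonant_frequency(10e-6, c_total / 2) # gap doubled: f0 * sqrt(2)
```

Halving the capacitance raises the resonant frequency by sqrt(2), which is the self-retuning behaviour the answer describes when the phone-to-charger gap changes.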
|
How can I keep a momentary button low long enough for microcontroller to read it? I'm using an ESP-12F to fire off mqtt messages when a button is pressed. The circuit works as expected; the physical button resets the device which boots, runs the code and deep sleeps indefinitely. I want to add more than one button (or external trigger) but I want to know which one woke the device up. I attached multiple momentary buttons to the circuit and they all do the job of resetting the device but I was unable to capture which button was pressed. The yellow line is my button press (drives reset low); it takes about 150ms. The blue is a simple digitalWrite in the setup method of a blank program. It appears the ESP12 takes about 250ms to boot up into setup where I could read the pin but by then the button has returned to its original state. Is there an easy way to extend the low so it can be read by the IC? This is also an empty program; once I add libraries etc it adds another 50-100ms just to get to setup so I would probably have to extend the button state to say 400ms to be safe. Edit: So an issue with this design is that if I hold the button low to try and capture it, since it's connected to reset it doesn't actually reset. I need to "capture" the low and then release, but then read it after startup. Edit: Blue is connected to an LED on a GPIO. I'm using this to basically see when the program is in setup() by switching this high. The yellow line is the reset pin; the initial low resets the device and then when I release the button it returns to high via a pullup. <Q> The EXT_RSTB input on the Espressif ESP8266EX is level-sensitive (active low) so stretching the push-button press will not help: <S> the processor only boots once the EXT_RSTB signal has been removed. <S> You can use a couple of one-shots to stretch the pushbutton presses (perhaps a 74HC123 ) <S> but you'll also have to combine the two switches to create one reset signal.
<A> To answer your specific question, one way to do it would be to add a SR latch between the button and the microcontroller, so that the button sets the latch and then the microcontroller can reset the latch when it's ready. <S> Christoph is right though, it would be odd if you could write outputs before the boot finishes but not read inputs... <A> If you're willing to 'reduce' the sleep level (to light sleep maybe), then you can use your external interrupts to wake the device, save the cause to EEPROM in your handler and then reset the ESP with ESP.restart() .
| If you pull the inputs low, you can use an AND gate (half a 74HC00, for example).
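The SR-latch idea from the second answer can be mimicked in a toy simulation (plain Python, timings taken from the question; this is not real ESP8266 code): the latch holds the 150 ms press until the ~250 ms boot completes, then the MCU clears it.

```python
class SRLatch:
    """Set-dominant latch: a button press sets it; the MCU clears it later."""
    def __init__(self):
        self.q = False
    def set(self):
        self.q = True
    def reset(self):
        self.q = False

latch = SRLatch()

# t = 0 ms: button pressed for 150 ms -> sets the latch (and resets the MCU)
latch.set()
# t = 150 ms: button released; a bare GPIO would read high again here
# t = 250 ms: MCU finishes booting and samples the latch instead of the pin
pressed_at_boot = latch.q
latch.reset()            # MCU acknowledges, ready for the next press

assert pressed_at_boot   # the press was captured despite the slow boot
assert not latch.q
```

One latch per button tells the firmware which trigger caused the reset, independent of how long the boot takes.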
|
Is there an electrical component for something between a button and force sensor? I'm currently involved in a college team design project and we're creating a "dance" floor that can be switched on and off by stepping on a plexiglass tile. I'm looking for the best way to gather an input from the floor button. We tried using a floor switch which is built to take the weight, but the travel of the button worried me, as more moving parts means more things can go wrong. Our prototype . I tried doing something simpler by using 4 regular small buttons . This worked somewhat well but I don't think it would take weight very well. I looked into other solutions - a flat force sensor (with no moving parts) seemed like a great solution - but they are relatively expensive compared to buttons. They also are analog. Although we can use an MCU to convert it into a simple "on/off" I'd prefer to avoid that. I also looked at piezoelectric components. These seemed like the best bet, but again are analog and can produce a noisy signal. Is there something like a no-travel, flat, resilient and cheap switch? Is there something like a force sensor but as a simple pushbutton digital switch? Apologies if this is the wrong forum but I'm feeling a bit lost searching through the endless tables of DigiKey. <Q> A solution that involves no mechanical travel would be one-sided copper sheets or plates that are part of a 'mass' detection setup. <S> An oscillator puts a frequency of 100 kHz to 1 MHz on each plate. <S> When stepped on, the body's 'mass' loads down the RF such that the change can be detected. <S> Even more sensitive is one that picks up on the phase shift caused by a body on the plate, compared to a second signal that is not phase shifted. <S> In both cases an LPF and a diode to convert AC to DC would be needed, along with pots to adjust sensitivity. <S> Especially for the phase-shift type, as it may trigger just by someone getting close to the plate.
<S> I assume these pads will form a 3 x 3 or 4 x 4 matrix, so you only need row and column drivers and sensors. <S> That makes either 9 or 16 duplicate circuits. <S> You may be able to cheat if you can get a mux like a 74C150 or CD4051 to scan for a changed input, like a keyboard scanner. <A> I agree that trying to do this with a button is very difficult. <S> It is not only a question of travel, but also of force. <S> If you attempt to make a small button travel 0.5mm beyond its natural stop, and you have 100kg of force, the button will fail. <S> People can stomp very hard. <S> I think the solution is to have the button PCB on a spring loaded platform, or maybe a high-quality closed cell foam pad. <S> The idea is that the tile's motion is maybe 3mm, and the button throw is maybe 1mm. <S> The extra 2mm of motion is absorbed by the foam. <S> Hope that makes sense. <S> If you use plexiglass instead of a tile, the deflection of the plastic may be enough travel to push the button. <S> Hope that makes sense. <A> I think buttons are still your best solution. <S> To prevent them from being damaged I would add foam underneath the buttons. <S> Top: <S> Tile. <S> Next: foam plate ~5mm <S> (or less; you need about the travel distance of the button). Next: wooden plate with holes. <S> You can use very small cheap buttons and use e.g. four per tile. <A> The actual button doesn't need to take the weight. <S> Maybe a couple of millimeters. <S> The cap should contact the base plate just after the button is pressed.
<S> You can use springs to keep it up and stabilizing bars to make it not "wobbly" <A> detect the motion of each square using a microswitch <S> you can remove the actuator arm if needed <S> a microswitch has a large amount of travel after the contact closes, so it is not susceptible to being "mashed" easily <S> you still have to limit the motion of the floor somehow though each floor panel could be like an upside-down open-top box resting on springy material like thick rubber
| In each hole place a small button with foam under the button. The problem is that there must be an upper limit to the force applied to the button, otherwise it may fail prematurely. For a mechanical button you will need some amount of travel but it shouldn't be much. The tile would have to be on a rigid structure so that it has very limited range of motion.
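The row/column scanning suggested in the first answer (reading a 3x3 or 4x4 tile matrix like a keyboard scanner) can be sketched in a few lines; a toy simulation with hypothetical 4x4 wiring, where the hardware read is stubbed out by a callback:

```python
def scan_matrix(read_row):
    """Return (row, col) of every active tile in a 4x4 matrix.

    read_row is a callback: given a driven row index, it returns a list
    of four booleans, one per column sense line (hypothetical wiring).
    """
    hits = []
    for row in range(4):
        cols = read_row(row)          # drive one row, sample all columns
        for col, active in enumerate(cols):
            if active:
                hits.append((row, col))
    return hits

# Simulated floor: someone is standing on tile (2, 1).
floor = {(2, 1)}
fake_read = lambda r: [(r, c) in floor for c in range(4)]
assert scan_matrix(fake_read) == [(2, 1)]
```

On real hardware the callback would drive a row line and read the column sense lines (e.g. through a CD4051-style mux), but the scan loop itself stays this simple.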
|
Application of Complex Numbers I was just wondering how complex numbers can be applied in electrical engineering and why we use complex numbers over regular, real numbers for this application (e.g. what capabilities does the complex number have that real numbers do not in electrical engineering)? I have done some research concerning impedances and understand how they are written in complex form; however, I am still confused why complex numbers are necessary in this field over regular numbers. <Q> If you consider Real Power and Imaginary Power, <S> we are talking about resistive power and reactive power, with energy stored in inductors and capacitors. <S> The vector sum of both is called "apparent power". Even in mechanical systems there are complex reciprocal devices with stored energy in flywheels or springs. <S> Inductors and capacitors are similar in that they can store energy, which in the math appears as an imaginary value. <S> But when an inductor's circuit opens and it arcs, the stored energy turns into real energy, similar to shorting out a capacitor into some resistance. <S> Although this is a crude example, like putting a crowbar brake across a flywheel. <A> If you don’t own a copy of the volumes of Feynman’s Lectures on Physics , I would highly recommend one. <S> He brilliantly introduces complex numbers in Vol. 1, “22-5 Complex Numbers” . <S> But in the next section, “22-6 Imaginary Exponents” , he makes the following famous assertion: <S> We summarize with this, the most remarkable formula in mathematics: \begin{equation}\label{Eq:I:22:9}e^{i\theta}=\cos\theta+i\sin\theta.\end{equation} <S> This is our jewel. <S> There’s too much to cover here, <S> but I refer you to this lecture where he applies the above formula with regard to AC circuits: Vol. 2, 22 - AC Circuits. <S> An excerpt: We have already discussed some of the properties of electrical circuits in Chapters 23 and 25 of Vol. I. <S> Now we will cover some of the same material again, but in greater detail.
<S> Again we are going to deal only with linear systems and with voltages and currents which all vary sinusoidally; we can then represent all voltages and currents by complex numbers, using the exponential notation described in Chapter 23 of Vol. I. Thus a time-varying voltage V(t) will be written \begin{equation}\label{Eq:II:22:1}V(t)=\hat{V}e^{i\omega t},\end{equation} where $$\hat{V}$$ represents a complex number that is independent of t . <S> It is, of course, understood that the actual time-varying voltage V(t) is given by the real part of the complex function on the right-hand side of the equation. <S> Similarly, all of our other time-varying quantities will be taken to vary sinusoidally at the same frequency ω. <S> So we write \begin{equation}\begin{aligned}I&=\hat{I}\,e^{i\omega t}\quad(\text{current}),\\[3pt]\xi&=\hat{\xi}\,e^{i\omega t}\quad(\text{emf}),\\[3pt]E&=\hat{E}\,e^{i\omega t}\quad(\text{electric field}),\end{aligned}\label{Eq:II:22:2}\end{equation} and so on. <S> Most of the time we will write our equations in terms of V, I, ξ, ... (instead of in terms of V̂, Î, ξ̂, ...) remembering, though, that the time variations are as given in (22.2). <S> In our earlier discussion of circuits we assumed that such things as inductances, capacitances, and resistances were familiar to you. <S> We want now to look in a little more detail at what is meant by these idealized circuit elements. <S> We begin with the inductance. <S> Note: don’t treat this as an answer, but as supplemental reference <A> For one thing it makes the math a lot easier. <S> For example, think about solving differential equations. <S> It's much simpler to use the Laplace Transform and solve the differential equation, rather than use classical techniques. <S> On the same subject, it gives another perspective to the same problem from a frequency domain point of view.
<S> There are also tools like Bode plots, which easily give quick approximations to how a system behaves in the frequency domain. <A> If you are doing time domain analysis, everything is expressed in real numbers - voltages, currents, resistances, because these are always simple instantaneous values.
| When you do frequency domain analysis , that's when complex numbers come in, because quantities like voltages, currents, and impedances have both a magnitude and a phase; expressing such quantities as complex numbers helps when performing calculations.
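The point about magnitude and phase can be made concrete with Python's built-in complex type; a sketch computing the impedance of a hypothetical series RLC network (component values are illustrative, chosen so resonance falls near 1.59 kHz):

```python
import cmath
import math

def series_rlc_impedance(r, l, c, f):
    """Z = R + jwL + 1/(jwC) for a series RLC network at frequency f."""
    w = 2 * math.pi * f
    return r + 1j * w * l + 1 / (1j * w * c)

# Hypothetical values: R = 100 Ohm, L = 10 mH, C = 1 uF.
# Resonance: f0 = 1 / (2*pi*sqrt(L*C)) ~ 1591.55 Hz.
z = series_rlc_impedance(r=100.0, l=10e-3, c=1e-6, f=1591.55)
magnitude = abs(z)                        # |Z| in ohms
phase_deg = math.degrees(cmath.phase(z))  # phase angle in degrees
# At resonance the reactances cancel: |Z| ~ R and the phase ~ 0 degrees.
```

A single complex number carries both pieces of information (how much the current is impeded, and by what angle it lags or leads the voltage), which is exactly what real numbers alone cannot do.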
|
How stable are temperature controlled crystal oscillators? This question is not as obvious as it might seem. Consider this, concerning Rubidium clocks : All commercial rubidium frequency standards operate by disciplining a crystal oscillator to the rubidium hyperfine transition... So at a fixed temperature, and over a few seconds (say 10 seconds), is a regular crystal oscillator stable to part per billion accuracy? <Q> There are several types of 'regular crystal oscillator'. <S> SC cut is quieter than the more pullable and cheaper AT cut, overtone operation, even at 10MHz, is quieter than fundamental, which has more pulling range. <S> If you are building a rubidium stabilised clock, then you'd start with an ovened overtone SC crystal. <S> This will be quieter at most frequency offsets than the rubidium or caesium reference. <S> It's only when you get down to mHz offsets, or stability over minutes of operation, that the rubidium reference becomes quiet enough to be worth correcting the crystal. <S> This timescale implies that a good crystal disciplined from GPS can be every bit as quiet as a rubidium source. <A> Your question is addressed by Allan variance. <S> Yes, a decent crystal oscillator is quite stable over a time frame of a few seconds, but not as stable as one disciplined by a rubidium cell. <S> The long-term stability of a quartz oscillator suffers from aging. <A> That's a surprisingly involved question and depends on what kind of oscillator you operate how, and what model you apply to assess the stability. <S> What you probably want to read up on is "Allan Variance", which describes the distribution of phase error (and a frequency error is a linear-in-time phase error) when observing one clock with another clock. <S> Whether or not you interpret random phase fluctuations as frequency error is up to your oscillator model! <S> The practical problem here really is finding a clock that's significantly better within a 10 s observational window. 
<S> Experience, however, tells us that practical communication systems that would require such a ppb stability for their receivers "waste" a lot of channel capacity for periodic synchronization. <S> That's often more a result of accommodating changing channels (especially in wireless mobile comms), but if you think about fibreoptics, which do have billions of symbols per second, you'll find that extensive clock recovery is done all the time, taking away bandwidth from actual payload data. <S> That points out that even for datacenter-grade electronics, you can't just plug in an oscillator and hope it runs stable enough for seconds after you initially estimated frequency. <S> So, I'd argue that even without looking at the Allan deviation in an oscillator datasheet, the answer is "no, ppb stabilities are the domain of atomic clocks, very expensive oven controlled oscillators, or GPS-disciplined oscillators". <A> If by "regular" you mean something from Digi-Key for under $5, then no. <S> 1 ppb works out to +/-0.01 Hz at 10 MHz. <S> A $2 part will move more than that in one second, let alone 10. <S> Typical prices by stability: 0.5 ppb = $760; 1.5 ppb = $119; 1.0 ppm = $13 (temperature compensated); 10 ppm = $1.27 (normal).
| From one to ten seconds, a good quartz oscillator can be stable to about one part in 10^11:
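The arithmetic in the last answer (1 ppb at 10 MHz) is easy to check; a sketch of the fractional-error conversions, using the 10-second window from the question:

```python
def frequency_offset_hz(nominal_hz, fractional_error):
    """Frequency error in Hz for a given fractional (e.g. ppb) stability."""
    return nominal_hz * fractional_error

def time_error_s(fractional_error, interval_s):
    """Accumulated timing error over an observation interval."""
    return fractional_error * interval_s

ppb = 1e-9
offset = frequency_offset_hz(10e6, ppb)   # ~0.01 Hz at 10 MHz, as quoted
slip = time_error_s(ppb, 10.0)            # ~10 ns of clock slip over 10 s
```

These are only static conversions; actual short-term stability is characterized statistically (Allan deviation, as the other answers note), not by a single offset number.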
|
Heat sink on the bottom or on the side? I am considering a thermal design and due to space requirements I was considering different orientations of the heat sink, namely putting it bottom down, obviously, and putting it on the side as in the figures provided. I've seen a similar approach with water cooling but I am wondering if this would work as well and if the cooling effect is the same? Anyone had any experience with this? NB: I am using forced air cooling. <Q> Hot air rises, <S> so you get better airflow over the fins if they are arranged vertically. <S> If the airflow is forced, you will get better airflow if the fins are aligned with the direction of movement of the air. <S> Due to the thermal conductivity of the heatsink there will be very little difference in the vertical or horizontal positioning of components relative to the heatsink position. <A> When it comes to orientation of a heat sink, you need to choose whichever method gives you the best airflow over the heat sink. <S> Heat-sinks work, primarily 1 , by transferring heat through a large surface area to the ambient air in contact with that surface. <S> If the air can not move, the air quickly heats up and heat transfer from the metal drops off. <S> As such, you want the air to move, or be driven, through and away from the heat-sink as it heats up. <S> Hopefully to be replaced with fresh cooler air. <S> Hot air also rises. <S> As such, if it is not a forced air system, you need to align the fins of the heat-sink <S> so air can rise up between the fins as easily as possible 2 , so a flow can be established. <S> Of your images, perhaps you can see that in the first one, hot air will collect near the top, so efficiency is reduced. <S> The second image is actually not that much better since air coming out of the heatsink is obstructing air going in. <S> Your ultimate is the third orientation you have not shown...
<S> If this is a forced air system, again, you need to arrange the fins so the air is blown between them in the same manner. <S> Orientation then becomes "orientation with respect to the primary air-flow". <S> Also, I have to mention, you need to exhaust that hot air somewhere. <S> Just circulating it around inside a box only delays over-heating. <S> You should also read my other answer here . <S> 1 <S> The other transfer method is of course radiant heat. <S> 2 <S> Some turbulence is actually a good thing. <S> Air that passes through but does not make contact with the surface performs little cooling. <S> As such: <A> I concur with all of Trevor's excellent answers and would like to add that compressed air <S> and its <S> velocity under fan pressure are key to air conduction but still poor compared to water. <S> Air speed at the surface is critical, and not just CFM. <S> Velocity is improved with surface eddy currents and vortexes, while heat pipes stretch the emitter area better than fins, raising the flow impedance or improving heat conductance. <S> Air velocity can reach a saturation limit, but it is worth finding what that is. <S> It also aids in exceeding the striction level of dry dust, which lowers flow from accumulation. <S> This is why high vacuum pressure flow is preferred over push flow, but it demands a different fan load profile design for flow and pressure. <S> e.g. compare high RPM vacuum vane air pumps to a squirrel cage blower fan. <S> Or test with your vacuum cleaner exhaust vs intake <S> (if you have an Electrolux).
| An orientation that puts the cooling fins vertical will give a better cooling effect if the airflow is not forced (by a fan).
|
Why does it take over 4 seconds to eliminate audio popping in this circuit? I made a simple audio mixer circuit with its output going to headphones. Right now I'm only testing with a single audio signal at the input, which is music from a smartphone. I hear a pop sound after I apply power (2 x AA) to the circuit, unless power has already been applied for at least approximately five seconds. Even if I leave power applied for a minute, and then I disconnect and reconnect power, there is a pop sound. What about this circuit would take five seconds to stabilize? VCC stabilized nearly instantly when I measured on the scope. I'm using a dual op amp IC, the LM4808 ( http://www.ti.com/lit/ds/symlink/lm4808.pdf ). There are four 470 uF decoupling caps in parallel at the output (for a desired cutoff), and a 1k resistor to Ground that I included as a discharge resistor, but it doesn't seem to have any effect. <Q> Here's what the data sheet says about Cb (the 10 uF capacitor in your circuit): - Bypass Capacitor Value <S> Besides minimizing the input capacitor size, careful consideration should be paid to the value of the bypass capacitor, CB. <S> Since CB determines how fast the LM4808 settles to quiescent operation, its value is critical when minimizing turn-on pops. <S> The slower the LM4808's outputs ramp to their quiescent DC voltage (nominally 1/2 VDD), the smaller the turn-on pop. <S> Choosing CB equal to 1.0μF or larger, will minimize turn-on pops. <S> As discussed above, choosing Ci <S> no larger than necessary for the desired bandwidth <S> helps minimize clicks and pops. <S> So I would definitely experiment with this value and initially try 1 uF. Also <S> , it makes little sense to use both amplifiers (not op-amps BTW) of this package - attach your headphone decoupling capacitors to the first stage. <S> Cascading two stages might make the popping problem worse. 
<S> You should also note that having nearly 2000 uF coupling your headphones (32 ohms or thereabouts) gives a high-pass cut-off of only 2.5 Hz, and this is too low for audio. <S> Make it more like 220 uF and you might find the pop is reduced. <A> The RC time constant of your output stage is \$\tau = 4 \times 470\,\mu\mathrm{F} \times 1\,\mathrm{k\Omega} = 1.88\,\mathrm{s}\$, and it takes 5 time constants for the voltage to reach 99% of the steady-state voltage. <S> Basically, the pop happens because it takes a couple of time constants for the output capacitor to charge up to the DC voltage of the output stage, such that once it has charged, reapplying power causes no (noticeable) sharp transition. <A> Shouldn't the discharge resistor prevent that problem? <S> No. <S> There is no popping when I wait at least five seconds after applying power and then connect headphones to the amplifier output. <S> That confirms what is happening. <S> The output capacitors are charging up through the (unknown reference designator) resistor. <S> When power is applied, there is zero charge on the output caps. <S> The opamp output snaps up to Vcc/2 volts, and the caps then start to charge up to this value through the resistor. <S> If you put a scope across the resistor, you will see a voltage spike with an exponential decay. <S> Decreasing the cap array value will decrease the amplitude of the pop, but not eliminate it. <S> But if you want flat response down to x Hz, the output stage corner frequency should be x/2 Hz for a 1 dB voltage drop and tolerable phase shift, and x/10 Hz for "clean" audio. <S> High-power audio amplifiers have a speaker-disconnect relay driven by a time-delay circuit to prevent potentially speaker-damaging pops on power-up. <S> While those amps usually have a bipolar power supply that eliminates the output coupling capacitor, there is no guarantee that the two supplies will come up perfectly symmetrically, causing an output pop for a different reason.
| Connecting the phones shortens the decay time, but now some of the charging current is going through the earphone coil, causing the diaphragm to jump, producing an audible pop. The discharge resistor is part of the problem.
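The numbers in the answer above can be checked with a quick script (values taken from the answer: four 470 uF caps in parallel, a 1 kOhm discharge resistor, 32 ohm headphones):

```python
import math

# Output network described in the answer: four 470 uF caps in parallel,
# with a 1 kOhm discharge resistor to ground.
C = 4 * 470e-6            # 1880 uF total
R = 1e3                   # discharge resistor, ohms

tau = R * C               # RC time constant
print(f"tau = {tau:.2f} s")  # 1.88 s

# Fraction of the final DC level reached after n time constants:
for n in (1, 3, 5):
    print(f"after {n} tau: {100 * (1 - math.exp(-n)):.1f} %")

# High-pass corner into 32 ohm headphones (the ~2.5 Hz figure from the answer):
f_c = 1 / (2 * math.pi * 32 * C)
print(f"corner into 32 ohm: {f_c:.1f} Hz")
```

The observed ~5 s delay is a few time constants of this network, which matches the charging explanation above.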
|
Wide Voltage Automotive Status To Microcontroller Input I need to monitor the status of a wire in an automotive application that can have 10-40VDC on it at any time. What is a good approach to limit the voltage to a microcontroller input across such a wide range? I plan to use a 10nF cap and TVS for ESD and other transients, but I'm not sure of the best approach to limit the expected voltage. If I size things to reliably detect 10V, 40V may burn up the input resistors (or resistor and zener?), or I can limit for 40V and not sense 10V. The signal is a simple on/off status line that I'm tapping into. It won't be carrying any data or change state very often. I will be sampling it occasionally in the micro to determine if it's high (1V to 40V) or low (GND to under 1V). Thanks. Edited to add that my microcontroller will be running at 3.3V and a logic low needs to be under 1V. Any solution I use must include a low-value cap between the input and GND and either diode clamps to Vdd and GND or a Transil. <Q> Using a resistor and Zener diode will work fine. <S> The simulation below shows what you get with a 4.7V zener. <S> You can use a very large resistor, because the microcontroller I/O pin draws virtually no current. <S> With a 100kOhm resistor and approximately 5V output voltage, the power dissipation is (40-5)^2 / (100,000) = 12mW, well within what the resistor can handle. <S> simulate this circuit – <S> Schematic created using CircuitLab <A> I'd be inclined to use something simple and dirty like this simulate this circuit – <S> Schematic created using CircuitLab <S> Vary the value of R3, or omit it, depending on what you want the lower threshold to be. <S> Note that base-emitter junctions are pretty tough, and tend to fail short-circuit when they do. <S> If it dies, it will save the protected circuit, and can be easily replaced. <S> D1 is to protect from negative voltage.
<S> Car electrics are supposed to be able to tolerate the reverse connection of a jump <S> start battery without damage, as well as 160v load dumps and 24v truck battery jump starts, pretty harsh! <A> Selvek's answer should work. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> In fact, most ICs already include such a diode internally so if you get the resistor right you don't even need to add a diode. <S> One thing you need to be careful of when you use this method is that the minimum power draw of your circuit must always be greater than the current through D1 at 40v. <S> In this case, the current is 0.7ma. <S> R2 in this case creates a minimum load of 1ma, which will protect the supply from being raised over 5v.
| I would add that another simple alternative to the zener is to clamp the voltage to VDD using a diode.
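As a sanity check on the worst-case dissipation figure quoted in the first answer (100 kOhm series resistor, ~5 V clamp, 40 V input):

```python
# Series resistor + zener clamp from the answer: 100 kOhm into a ~5 V clamp.
V_in = 40.0      # worst-case automotive input, volts
V_clamp = 5.0    # approximate clamped output (4.7 V zener plus a little)
R = 100e3        # series resistor, ohms

I = (V_in - V_clamp) / R          # current pushed into the clamp
P = (V_in - V_clamp) ** 2 / R     # power dissipated in the resistor

print(f"I = {I * 1e6:.0f} uA")    # 350 uA
print(f"P = {P * 1e3:.2f} mW")    # 12.25 mW, the ~12 mW from the answer
```

Both figures are well within an ordinary small resistor's ratings, which is the answer's point.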
|
Techniques for building 'Bed of Nails' testing board When testing a complete circuit board a 'Bed of Nails' can significantly reduce time and errors. What are some techniques for making a DIY Bed of Nails ? Specifically, looking for what kinds of materials work best, how to trace the board properly to make the holes in the right place, in other words, it's the little things that experienced people know but for a beginner would be a lifesaver. <Q> Having the board artwork helps, but if you don't, you can use a sacrificial blank PCB as your template. <S> Fix the blank PCB <S> the appropriate way up, usually bottom side up, to your bed of nails substrate material. <S> The latter needs to be thick enough to provide the mechanical stability. <S> You can even use two layers for the bed of nails substrate, later separated by a gap, and mechanically joined together by the pogo-pins. <S> You can use good double sided tape to securely adhere the blank board to the substrates so they do not move relative to one another while you drill. <S> Then, using a drill press, drill holes wherever you need a test pin, through the PCB and on through the substrates. <S> Use the appropriate sized drill bit for the pogo-pins. <S> These pins on the bed of nails should be conical at the tips and longer than the pogo-pins so the board lines up BEFORE it makes contact with the test pins. <A> I believe pictures are worth a thousand words (?). <S> I documented my DIY Bed-Of-Nails fixture here: https://piconomix.com/creating-a-good-programming-test-jig-is-not-that-hard/ <S> In short, I create a base PCB with spring-loaded Pogo Pins and an upper deck (mezzanine) PCB with holes <S> that the test PCB locates against. <S> HEX Spacers are used to connect the two levels. <S> WARNING! <S> Beware of tolerances. <S> If your test pad is <S> small (e.g. 1mm diameter) <S> and one of the pogo pins is off-center (e.g. >0.5mm), then it will not make contact, or worse be intermittent and that will ruin your day. 
<A> A more DIY-friendly option that I've recently discovered is a "plug of nails" interface ( example ). <S> One end of the cable has pogo pins, alignment pins, and tabs to hold the connector in place. <S> The other end of the cable has a standard header that you can connect to your test fixture. <S> The datasheet for that particular part shows the pattern of holes and contacts that you'd need to add to your circuit board. <S> This sort of solution would require you to route test signals to a central location instead of being able to place them anywhere on the board, but would eliminate the difficulty/hassle associated with the physical construction of a traditional bed-of-nails test rig. <S> For boards with a relatively small number of test points, I've found this to be worth the trade-off. <A> I designed a bed of nails board a few years ago. <S> I used SHE-100 pogo pins with receptacles. <S> The receptacles were mounted into a 12mm thick plate made out of Delrin plastic. <S> Electrical connections were done with discrete wires point-to-point. <S> These design choices for the bed of nails allowed several useful properties. <S> The holes for the pogo pin sockets had a high aspect ratio, so the pogo pins aligned themselves well in the vertical. <S> The resulting bed of nails board was stiff. <S> I could recover and reuse the pogo pins and their sockets easily. <S> edit To put my DUT board into perspective, here are a few statistics: number of test points: approx. 50; size: 100mm x 40mm; nature of the beast: 40W power converter plus microcontroller; small production quantities: 3k units a year. <S> edit <S> The design of the test fixture shown in this blog post is similar to what I've done, although there is a major difference: he made a PCB to connect the pogo pins, while I used point-to-point wiring. <A> Take a plate of material that's stable enough, e.g. acrylic glass could work, and get some in-circuit test pins.
<S> Define the test points on your layout and get a 1:1 printout. <S> Stick the printout on your plate and drill the holes accordingly. <S> Probably a bigger challenge is then to fix the plate to your PCB so the pins end up in the right place. <A> Make sure the base is strong, thick, and stiff enough for the pins you are using. <S> Avoid FR4; it's strong but can be a problem for repairs.
| You should also drill alignment holes for larger pins in the bed of nails that will mate with whatever mounting holes you have on the PCB.
|
What is a pin header with a wall on one side called? What do we call a pin header that has a wall on one side to ensure you don't connect it wrong (male side), where the female side has a slot opening for the wall so it sits and fits perfectly? Sorry, I don't have a picture; I have just seen it but don't know what the name is, hence I can't find one to buy. The one in this pic is for SPI and RS232. <Q> A latching pin header is what I think you refer to: - <A> Curiously enough, Digikey, at least, categorizes this type of header as an "unshrouded header": <S> But within this category, they distinguish the feature you're asking about as "shrouded - 1 wall": <S> I happen to have been searching for a similar connector yesterday, and if your part has 0.1" pin spacing you will likely find a match in either the Molex KK series or TE MTA-100 series. <A> Adding to other answers: be careful here, latching connectors by themselves do not guarantee connector alignment if the corresponding female part does not include a mating feature. <S> Fully shrouded connectors or keyed connectors do. <S> The female part usually has a hole-plug/polarizing key accessory you can purchase. <S> That can, in some cases, work out cheaper than fully keyed connectors.
| Another trick to ensure correct alignment that is commonly used is to remove one pin from the header and plug the associated pin on the female part.
|
Light bulbs are blinking when controlling powerful heater with PWM via SSR I'm creating a DIY sous-vide rig using an esp8266, a tubular heating element (1.5 kW) and a solid state relay (zero-cross, Fotek SSR-25DA). I use a relatively powerful heating element, so that the water can quickly reach the desired temperature. However, once it's reached, I want more granular control, so I use 10 Hz PWM to open and close the SSR. The problem is, my ceiling lights start to flicker when the duty cycle is not 100%. I think this is due to the starting current of the heater, but I don't know how to approach fixing it. Maybe I should increase the PWM frequency? But I'm not sure it will work at all, considering that the SSR is zero-cross. Or should I add some fat capacitor in parallel with the heating element? I want to be able to limit the heating power because it takes some time (about 300ms) to get the temperature from the sensor (ds18b20), and leaving the heating element on for that time at full power easily overheats the water. Here are some specifics: Water volume is about 3L. The readout time for the sensor is 375 ms. Typical temperature range is 60-65 degrees Celsius. I use the following cycle: read the temperature (t); if t >= target set duty to 0; if t < target - 3 set duty to 10/10; if t < target - 2 set duty to 7/10; if t < target - 1 set duty to 5/10; if t < target - 0.3 set duty to 2/10; else do nothing <Q> Add noise or a signal large enough to the thermistor feedback. <S> Instead of PWM, rely on ZCS (zero-cross switching) to skip cycles, giving proportional control with cycle skipping modulated by noise, so the current steps at sinusoid zero-crossings, which ceiling lights may withstand. <S> Choose the noise level to match your proportional range, 1 to 10 deg or so, depending on dT (°C) in 300 ms. <S> Line-frequency noise may or may not be OK. <S> In the late 70’s I had a waterbed heater.
<S> I designed an op-amp circuit with some AC noise on the relay switch so that it would skip cycles, every 10 s to 10 minutes, quietly, with high derating to last 100k~1M cycles, regulating to within 0.1 °C. <S> A dual thermistor is better due to sense errors near the heater. <S> Added: <S> If I assume you have 230 Vac at 1.5 kW, i.e. a 6.5 A load, causing lights to flicker, I wonder why? <S> Are they LED or FL tubes that are sensitive to line-regulation errors? <S> Then certainly higher PWM will work better with a different power supply, e.g. http://www.ti.com/tool/TIDA-00779 , a more ideal solution. <S> Active Power Factor Correction, for noise immunity, no interference to others and acceptable conducted & radiated noise, easily regulated and noise compatible. <S> (good EMC design) <A> If your thermal response time is that small (<0.3 s) you need to change your design; faster PWM won't help if you can only measure at ~3 Hz. <S> You could use some form of TRIAC circuit with per-cycle switching, which would get you more granularity, but it likely will not help the power transients that are affecting your lights. <S> But, again, your limiting factor is your measurement cycle time. <S> What you really need to do is drop the power/current you are switching. <S> You may be better off with two heater elements: a "quick-boil" element to get you close to temperature, and a smaller "simmer" element that uses less current, which you can use to hold the desired temperature. <S> ADDITION: <S> If you extrapolate that idea, you can also design it with multiple heaters, that is, splitting up the big one, and then sequence them on rather than just turning on one big heater. <S> Doing that will reduce the surge current and hopefully stop the lights from dimming. <S> ADDITION 2: Since you are apparently trying to hold the temperature to a tight tolerance, thermal lag in the heater-to-water transfer mechanics will be an issue.
<S> That is, when you turn the element off it will continue to add heat to the water for some period thereafter. <S> That is another reason to split up the heater. <A> A little maths: $$ t = \frac{m \times \Delta T \times SHC}{P} $$ where \$t\$ is time taken in seconds, \$m\$ is mass in kg, \$\Delta T\$ is temperature change in K (or °C), SHC is the specific heat capacity of the mass in kJ/kg/K and P is the power (kW). <S> For your 3 L of water the time taken to raise the temperature 1°C is $$ t = \frac{3 \times 1 \times 4.2}{1.5} = 8.4\ \mathrm{s} $$ <S> We can easily use a zero-cross controller here with a 1 s duty cycle and maintain the temperature close to setpoint. <S> Figure 1. <S> Zero-cross duty-cycle power control. <S> Source: my answer to A question on zero crossing versus random-fire SSRs . <S> Note that your controller is running asynchronously with the mains (it doesn't know where the zero-cross is) so the SSR will delay turn-on and turn-off to the next zero-cross. <S> Due to the likely random nature of this it should all average out to give the desired precise control. <S> You will have 100 or 120 zero-crosses per second (50 / 60 Hz) giving you a rough 1% resolution on power control. <S> Looking at your code I suspect that your control algorithm isn't good enough. <S> For an introduction have a look at my answer to Understanding the flow of a PI Controller? . <S> Figure 2. <S> PI control response for a car cruise control illustration from the linked article. <S> I'd try setting the proportional band to about 10°C and integral time to 60 s for starters.
| It might be time to look into PI, proportional-integral, control.
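The heating arithmetic in the last answer can be reproduced directly (3 L of water, 1.5 kW element, SHC of water 4.2 kJ/kg/K):

```python
# t = m * dT * SHC / P, the formula from the answer.
m = 3.0      # kg (3 L of water)
dT = 1.0     # K, temperature rise
SHC = 4.2    # kJ/(kg*K), specific heat capacity of water
P = 1.5      # kW, heater power

t = m * dT * SHC / P
print(f"t = {t:.1f} s per degC at full power")  # 8.4 s
```

This is why a 1 s zero-cross duty cycle is plenty fast: even at full power the water only moves about 0.12 °C per second.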
|
Identify this component? Temperature-related? Can anyone tell me what this is? I have nothing else, sorry - just the part. No markings or labels on it. Came out of a Brother laser printer- I was clever enough to salvage neat-looking parts from it, but not clever enough to document where each was taken from... There was a heating element inside one of the rollers. Near that roller were: 1) a thermostat (CH-152-35), 2) this thing, 3) a G16C Thermistor in an amber sheet housing with a black base similar to the one on the posted item. Exact locations unknown, I didn't document it. But all near the heated roller. Black marks are toner residue not burn marks. Wiped them off the outside of the amber plastic. A better look at the side not shown well in the last pic. The thermostat. I love part numbers , makes newbie research easier. The thermistor. Again, I love it when they print useful part numbers on stuff that you can Google! <Q> It is a thermistor. <S> I am posting HOW I found out, for any future readers: <S> The problem was that I and others here were focused on the picture of the part itself. <S> I changed focus to the printer (in my case a Brother HL-5240), and any documentation I could find. <S> I accidentally found the service manual for it. <S> In that manual were the circuit diagrams. <S> It took a very long time for me to read through them and figure out how to follow the circuit as it went through a few different drawings. <S> (I'm a novice and this is the first large diagram I've tried to use - 9 pages of drawings!). <S> Eventually I found a symbol for a thermistor (the resistor zigzag symbol inside a closed oval), and googled the number next to it: "TP835". <S> And sure enough - I got Google hits with pictures of similar parts. <S> Had to look at a bunch, and eventually I found matching items. <A> It looks like it could be a bimetallic temperature switch. 
<S> These are made from two metals that have different expansion coefficients, so that they bend when heated up. <S> They are often used to turn something off when it gets too hot. <A> The two leads imply that it is a temperature or pressure detector, changing its resistance with pressure or temperature. <S> The dark spot is the sensing element. <S> The long copper leads are just for spacing and an elastic behavior, which leads me to think it is for position sensing. <S> It may be functional if the OP uses an ohm meter to test it. <S> NOTE: If the OP says there is a temperature sensor already there, then this device is not likely another temperature sensor. <S> The material looks damaged (metallic spray), as if it were a fusible link designed to fit in a tight spot. <S> If so, it is a custom design. <S> Picture of molten metal with scorch or burn marks: <A> From the heavy crimped-on connector lugs in the thick white wiring, it seems this is a spot heater. <S> The small thermal sensor possibly senses near the spot heater; the sensor looks like a solder dot.
| The can is similar to a variety of thermostats (switching according to temperature) that might be associated with whatever is being heated. If so, it would be the tubular 'bullet' type used as an emergency cutoff. The answer wasn't forthcoming from visual inspection by exchange members, and my prior Googling had not yielded useful results. It looks like it is supposed to be connected to some external contact on the black rounded bump.
|
Unpolarized Capacitors in place of Polarized ones I have found some projects of interest, and on them there is generally a polarized capacitor in the powers-of-ten range (0.1uF, 1uF, 10uF, 100uF, etc). I do not know why, but I ordered a lot of polarized capacitors in the 47 range (0.47uF, 4.7uF, 47uF, 470uF and 4700uF), and unpolarized ones in the powers-of-ten range. If I place an unpolarized capacitor in place of a polarized one, can it discharge the wrong way and burn my circuit? <Q> It will not damage your circuit, but there are many other characteristics of capacitors that are important to a circuit working: voltage rating, current rating, equivalent series resistance, DC bias derating, etc. <A> No, no such thing will happen. <S> But you have to be careful for another reason: <S> A schematic will specify capacitor types, e.g. aluminium, tantalum, paper, polypropylene, polyester, ceramics of certain types. <S> It will also tell if there are special ratings to follow, e.g. for ESL and ESR, and X and Y safety features. <S> You can usually drop in polypropylene caps for small aluminium ones. <A> You may not know that there are millions of different capacitors with a dozen different specifications which may or may not matter in any given circuit. <S> Even in electrolytics and ceramics there are a dozen different families which trade off different parameters such as: cost, size, voltage rating, tan delta @ 120Hz, ESR, ripple current, rated voltage, leakage current, temperature vs. accelerated short-life rating (e.g. 1000 hrs at 105°C), self-resonant frequency, capacitance vs. V vs. °C, and shape, aspect ratio, value AND tolerance. <S> It depends on the application demands or specs for impedance(f), current (rms) and Vpk/rated ratio, leakage equivalent R, ripple voltage, f attenuation, etc. <S> Will it work? <S> To do what? <S> For electrolytics, there are General Purpose polar, non-polar, low ESR, ultra-low ESR, high temp, ultra-high temp, and high ripple current.
<S> So for general-purpose applications, no problem. <S> Due to impedance Z(f) = 1/(2*pi*f*C) and self-resonant frequency (SRF), some designs limit the useful range of e-caps to a few decades in f; ESR drops with rising uF, and SRF rises with smaller uF (family sensitive). <S> Thus it was common in power filters to use 0.01 (ceramic) // 1uF // 100uF or 0.047 // 4.7 // 100uF // 1mF, depending on design, surge currents and rate of dI/dt. <S> There is no one solution to all problems.
| If there's nothing marked in the schematic, you can assume all unmarked unpolarized caps are either ceramic or polypropylene, and all unmarked polarized caps are aluminium ones.
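The impedance relation Z(f) = 1/(2πfC) quoted in the last answer is easy to tabulate; this sketch ignores ESR and ESL (real parts self-resonate, which is exactly why mixed values are paralleled):

```python
import math

# Ideal capacitive impedance, ignoring ESR and ESL.
def z_cap(f, c):
    return 1 / (2 * math.pi * f * c)

# Compare a bulk electrolytic with a small ceramic across frequency:
for f in (120, 10e3, 1e6):
    print(f"{f:>9.0f} Hz: 100uF -> {z_cap(f, 100e-6):.3g} ohm, "
          f"10nF -> {z_cap(f, 10e-9):.3g} ohm")
```

Under this ideal model the big cap always wins; in reality ESL makes the 100 uF part inductive well below 1 MHz, which is the reason for the 0.01 // 1 uF // 100 uF stacking mentioned above.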
|
How do manufacturers limit bandwidth in oscilloscopes? I would like to know how manufacturers limit the bandwidth through software options in oscilloscopes. If I have an MSO with 1 GS/s, but e.g. just have 70 MHz out of a possible 200 MHz (via software option), do the digital channels sample at 1 GS/s or are they slowed down as well? I always read of bandwidth upgrades, but the sample rate seems not to be affected. Is this correct? And how do manufacturers limit the bandwidth? Just by not enabling the necessary horizontal scale, or how does it work? Thank you! <Q> You should realize that sample rate (1 GS/s) and bandwidth (70 MHz) are different things! <S> They are related in that a certain sample rate dictates the maximum bandwidth of the signals which can be sampled accurately. <S> This is set by the Nyquist frequency. <S> The bandwidth of the oscilloscope is most often limited in the frontend of the oscilloscope. <S> The frontend is the input amplifier including protection circuits and range switching (which changes the voltage gain of the frontend). <S> There might also be an anti-aliasing filter (a low-pass filter) present. <S> One way to make the frontend's bandwidth changeable by software is by simply switching a capacitor on/off. <S> This is done in the Rigol DS1054Z as shown by Dave from the EEVBlog in this video . <S> That capacitor can simply be part of an RC low-pass filter (the anti-aliasing filter!) which sets the bandwidth. <S> It is theoretically possible to (also) limit the sample rate and/or do post-processing to limit the bandwidth, but this can result in aliasing effects and requires processing power. <S> Switching a capacitor is much, much simpler. <S> Also, that would limit the oscilloscope's bandwidth in the same way as it always has been in analog oscilloscopes. <S> You can view a 100 MHz signal on a 70 MHz oscilloscope, but the 100 MHz will be attenuated. <S> So you might measure 1 Vpp while the signal is really 1.5 Vpp, for example.
<A> There is a hardware low pass filter to prevent aliasing, probably with its -3dB point at 200MHz. <S> That low pass filter must attenuate everything above 500MHz (the Nyquist frequency) enough to prevent aliasing. <S> The sample rate is always 1GSa <S> /s. <S> Then there is a software low-pass filter to limit the -3dB bandwidth to what you have purchased. <S> You can still see signals above 70MHz, they're just attenuated. <A> Modern Digital oscilloscopes have several fundamental blocks that process probe input data in a chain before the trace gets displayed. <S> Front End - analog circuitry with programmable attenuation and offsets etc. <S> Depending on gain settings, it might have somewhat different bandwidth. <S> And "bandwidth" is a stretchable concept, the transfer function can have gradual decline, not just "-3dB cut-off", which can be corrected later in the processing chain. <S> Sampling/ADC unit. <S> In many cases the ADC samples the signal at constant (and rather high) rate, above the Nyquist frequency of anti-aliasing filter of the Front End. <S> So the signal is usually oversampled. <S> The rate of data storage however can be "decimated" in the process. <S> Data storage (memory) for raw data. <S> Fast memories are required to store the input stream from ADC unit. <S> Display Unit. <S> Before the data are displayed, modern scopes have the signal digitally processed. <S> So you can correct the uneven input characteristics, put interpolations, adjust scales in all directions, and run various measuring algorithms. <S> This is all in post-processing software, to put a nice picture on LCD. <S> So there are many options for software/firmware to change/extend basic scope characteristics, depending on how much you are willing to pay. <S> Most notably the "software upgrades" are used in configuration of depth of data storage. <S> Scopes might have the super-fast memory soldered down for the maximum already, but software enables only certain portion of it. 
<S> And to get full memory, you might need to purchase a special license to use it, and it might be sold on expiration basis. <S> Regarding bandwidth "upgrades", if the software-based "upgrade" is offered, then the scope has a full-featured front-end that meets highest advertised parameters. <S> However, good quality analog circuitry in high-MHz area is expensive. <S> Manufacturers of analog-to-digital components usually have their ICs binned to different grades, and prices vary substantially. <S> It is possible that less expensive version of a scope has the binned-down front-end components, which can be upgraded only by upgrading the hardware module.
| The bandwidth is limited by a low pass filter.
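The 1 Vpp vs. 1.5 Vpp example above can be roughed out with a single-pole low-pass model of the front-end (a simplification; real scope front-ends are higher-order filters):

```python
import math

# First-order low-pass gain for a front-end with a 70 MHz -3 dB point.
def gain(f, f3db):
    return 1 / math.sqrt(1 + (f / f3db) ** 2)

f3db = 70e6
for f in (10e6, 70e6, 100e6):
    g = gain(f, f3db)
    print(f"{f / 1e6:>5.0f} MHz: |H| = {g:.3f} ({20 * math.log10(g):.1f} dB)")

# A 1.5 Vpp, 100 MHz signal would display as roughly:
print(f"{1.5 * gain(100e6, f3db):.2f} Vpp")
```

A single pole gives about 0.86 Vpp rather than exactly 1 Vpp, but it shows the same effect: out-of-band signals are attenuated, not removed.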
|
Parallel a diode to solid state relay load is a short circuit? I'm trying to use a solid state relay to control a 12V DC air pump, so I was reading the "Solid State Relays Common Precautions" (like below). http://omronfs.omron.com/en_US/ecb/products/pdf/precautions_ssr.pdf Reading the paragraph shown in the screenshot, I'm wondering: if I parallel a diode with the load, am I making a short circuit, since there won't be much resistance in the diode, so the current will just bypass the load and go through the diode? <Q> When connected as shown, it is reverse biased until the SSR turns off, upon which the inductive load will cause the diode to become briefly forward biased until the energy in the magnetic field is absorbed by the coil resistance and diode forward drop. <S> Otherwise the voltage across the load would increase to potentially damaging (to the SSR) levels. <S> Consider the below schematic- <S> the switches represent the SSR and the stuff in the boxes represents your pump motor. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The switches open at t=0 and current is 1A through each circuit at that time. <S> As you can see in the below simulation, the voltage across the switches increases to a bit over 12V and then drops back to 12V when the diode stops conducting about 2.5ms later. <S> If I remove D2, the difference is dramatic: <S> The voltage across SW1 spikes to about 900V, before dropping back. <A> Diodes are applied in reverse of the applied voltage. <S> If you look closely, it is a ground-side switch and probably an opto-coupled N-FET. <S> So return or back-EMF from stored energy at power shutoff is the only time when the diode becomes forward biased. <S> The current rectifier acts as the 2nd "throw" in an SPDT switch, but in this case just "quenching" the same current, dissipated into all the series resistance for a short time defined by the T = L/R ratio.
<S> where R includes the diode series bulk or ESR and the load series resistance. <S> Motors are rated with DC resistance or DCR. <A> Since you are concerned about the diode, I thought I'd start with helping you get more comfortable with the circuit you posted. <S> On the left, below as Figure 1, is the equivalent circuit -- just drawn a little differently, is all. <S> If you remember about diode directions, the diode is arranged so that it is reverse-biased in this circuit. <S> So it should not cause any troubles. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> On the right side, Figure 2, I show a way you can test this out in a safe way. <S> Do NOT hook up the pump motor. <S> Just apply the diode itself as you think it should be. <S> Activate the SSR and then measure the voltage across \$R_1\$. <S> If the voltage measures <S> \$5-6\:\text{V}\$ <S> then the diode is indeed reverse-biased. <S> If you measure anything much more than \$6.5\:\text{V}\$, then you have the diode in the wrong orientation and should reverse it and take your measurement again. <S> If the diode is good, then one of the two orientations will give \$5-6\:\text{V}\$. <S> That's the arrangement you want in the end. <S> The article you cite also mentions the use of a zener diode. <S> I'll draw that up as well as another possibility: simulate this circuit <S> In Figure 3, I've added a zener with the correct orientation. <S> It looks like it is arranged to be forward-biased when you activate the SSR. <S> But diode \$D_1\$ prevents that. <S> So the only time the zener's action occurs is when \$D_1\$ is forward-biased when the circuit turns off and the pump's inductance "kicks back. <S> " <S> The reason for adding the zener is to provide a faster "turn off" time. <S> If you don't need it to be faster, then you can avoid the zener. <S> In Figure 4, I've added another possibility. <S> (And there are still many others.) 
<S> Here, if you can work out about how much current your motor requires, you can size that resistor to use about the same current. <S> This will help dissipate energy faster in the circuit and will also reduce the turn-off time.
| You need to connect the diode the correct way (reverse biased, as shown), otherwise it will indeed conduct when the SSR turns on and likely destroy the SSR and quite possibly the diode.
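The T = L/R decay mentioned in the second answer can be sketched numerically; the inductance and resistance below are illustrative placeholders, not measured values for any particular pump:

```python
import math

# Freewheel (flyback) decay after the SSR opens: i(t) = I0 * exp(-t / tau),
# tau = L / R. L and R here are hypothetical example values.
L = 2e-3     # H, assumed motor winding inductance
R = 12.0     # ohm, winding DCR plus diode bulk resistance
I0 = 1.0     # A, load current at the moment of turn-off

tau = L / R
print(f"tau = {tau * 1e6:.0f} us")

for n in (1, 3, 5):
    t = n * tau
    print(f"t = {t * 1e6:.0f} us: i = {I0 * math.exp(-n):.3f} A")
```

After a handful of time constants the stored energy is gone and the diode stops conducting, which is why the clamp only matters briefly at turn-off.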
|
How to solder PCB? I have a question about the first PCB I soldered yesterday. I used a 2x8 cm PCB, and this is the circuit I did: A I J K L M N o-- -D1-—o--SW1--o---VCC o | D1 SW1 REL Ad 5V X---o D1 D2-SW1--o---CH1 o | D2 | REL X o R1 D2—-R2 R3 GND o R1 R2 R3 | X o R1 o R2 R3 o o R1 R2 R3 | Ad GND X---o R1 o R2 R3 o o | R1 R2 R3 | O-- -R1--o---R2--R3--o o A .. N are the columns of the PCB; I leave columns B..H free for future additions (the VCC and GND lines are connected). Dx are diodes, Rx are resistors, SW1 is a pin header for a switch, and column N is a pin header for a relay (module). X are the connection 'terminals' of the PCB (on the left/right side; the right side is unused). However, during soldering, I noticed a few things and wonder what is best: I had a lot of soldering to do from one hole to an adjacent hole, and sometimes more, like the connection between SW1, CH1 and R3 (columns K, L, M). Since I used just solder, it was like a big 'solder blob' ... is it best to use some small wire instead? It will be very tiny wire(s). For the long VCC and GND lines I used a wire which I bent (see column A and the Xes) and soldered in various places. I noticed it was very hard to solder adjacent lines (components close together), but leaving more space needs longer lines (and I have to use wires instead of just solder?) What are guidelines to make those 'interconnections' between adjacent holes? And a side question: this is a 'double-sided' board, but I don't see what that means, since the holes are connected anyway from the top and bottom side after soldering. Or do I miss something here? Update 1 There was a discussion about my 'ascii' notation... I will explain it a bit below. The problem is that I have never soldered on a proto/PCB/stripboard whatsoever, only done breadboarding. Since I want to be sure I don't mix up lines/columns, I like to make it visible before I start (and to see it fits).
I will leave the ASCII text above; however, to make it clearer, I thought it better to use Excel. I also spaced out the columns more, so it's easier to solder. The result is below. <Q> is it best to use some small wire instead? <S> Yes, definitely. "Solder blobs" tend to go bad over time. <S> Or use the solder-tinkerer's all-time favourite: cut-off through-hole component legs. <S> They are awesome for such purposes, if you don't need/want the insulation provided by a wire. <S> Regarding the nature of the PCB, there are different kinds: either with pads (that may or may not connect to both sides) or with "copper lines", where all holes along one row are connected. <S> Which kind to use mostly depends on the circuit, and on personal preference. <S> The version with "lines" is convenient when tinkering with through-hole ICs, but in the general case I personally prefer the version with "pads", as that gives more freedom. <S> Often when building lab stuff you need to modify something, and it is convenient to do so without having to cut and carve in the actual PCB. <A> And a side question: this is a 'double' sided board, but I don't see what it means, since the holes are connected anyway from the top and bottom side after soldering. <S> Or do I miss something here? <S> Yes, the green PCBs have their holes connected on both sides, before and after soldering. <S> Calling it "double sided" is very misleading, but "true" in some sense. <S> This question will gain several opinion-based answers; anyhoo... disregarding that, this is what I recommend. <S> Might be interesting since I use the exact same boards as you. <S> Circuit I made and used as a blueprint (ish) (ring inverter with LEDs on the other side). As you can see, I don't limit myself only to the plane of either side; I also go up a couple of millimeters and use that space as well.
<S> The metal pieces are from trimmed LED legs and large round resistors. <S> No, I haven't ruined tons of LEDs just to have small legs that work great for soldering; <S> I simply don't throw away the legs of LEDs, I reuse them and cut them accordingly. <S> Circuit I made and used as a blueprint (half bridge). <S> But using regular uninsulated copper wire works great too. <S> Notice how I use the holes as a meeting point for two or more components. <S> You usually don't need to space things apart. <S> SOT-23 is great for 2.54 mm spacing. <S> (Modified version of this one that actually worked IRL) <S> I assume you are going to work with ICs - use their legs as well. <S> Circuit I made and used as a blueprint. <S> It's ugly, but it's your ugly thing. <S> And you can't say that something you made is ugly, therefore it's handsome. <S> You probably cannot see it, but legs 3 and 5 are connected underneath the IC and soldered. <S> (IC = LM393) <S> On the other side there are two transistors, and again I use the holes as nodes / meeting points. <S> I rarely connect pins to different holes and then bridge the connection. <A> As you noticed, the solder blobs look really messy. <S> Using wires with a plastic coating is problematic and too much work unless you have large distances. <S> You have to cut the wire to length, strip the ends and hope you didn't cut it too short or too long. <S> What I recommend is magnet wire, which is wire that has a thin varnish. <S> Usually (and you want that) the varnish is heat sensitive. <S> So just by touching the end with the soldering iron for a few seconds, the varnish disappears and you get bare wire at that point. <A> I sometimes use silver wire or (insulated or not) copper wire for the interconnections. <S> When the copper pads you want to connect are directly next to each other, you can use a 'solder blob' (2.54 mm grid or less).
<S> In my opinion it looks much cleaner if you use a silver wire which you stretch a little bit (make it straight) for wider interconnections. <S> Double sided means that you have copper pads on both sides. <S> The advantage over pads on just one side is that you are able to solder on both sides. <S> With a single-sided board you can only place your components (through-hole leaded) on one side. <S> This is the case for printed circuit boards with a point grid or another grid.
| Using magnet wire, you can simply solder onto the wire and cut the ends after soldering, and you don't have to worry about causing a short when the middle of the wire touches anything else.
|
Dim and "unblink" a pc power LED? I just built an HTPC. It's got a bright white power-on indicator LED that is in fact needlessly bright and, which is worse, blinks when the pc is in suspend mode. I want it to be less eye-catching. I suppose I can wire a resistor in front of the LED to dim it, but I've no idea how to select the proper value. (and/or) Can I wire a capacitor in front of the LED to make its "very binary" blinking into a somewhat smoother wave pattern, sort of pulsating? I really don't care about the specific wave form, I just want it to draw less attention (making it glow constantly, but dimmer, in suspend than in power-on would be just fine, if that's a simpler thing to do). I've no idea about the ratings of the components involved. I'm sure the LED is being driven at 5V (I can check), and I suppose it draws somewhere between 20 and 200 mA. Can you help a feller out with some component choices based on such poor specs? Update: I have soldered together a 2200uF capacitor and two 100k potentiometers -- see photos here . I have tried to recreate Spehro Pefhany's diagram (dead bug style, it's gloriously hideous) and I can report a 50% success: the dimming works nicely, but the blinking seems to be completely disabled -- it's always on (at whatever brightness I choose) and I can't detect even a hint of variance regardless of how I adjust the 2nd pot. Have I (despite triple-checking) put it together wrongly? What should I change to restore at least some blinking? <Q> Total guesswork, but brightness is perceived logarithmically so the exact resistor values are not critical. <S> You could try something like this: simulate this circuit – Schematic created using CircuitLab <S> The 2200uF/6.3V electrolytic capacitor needs to be installed with the correct polarity. <S> E.g. Nichicon UFW0J222MPD. <S> If you don't like the way it works, change the values (perhaps in steps of 2 or 3:1, don't bother with much less of a change).
<S> If you're purchasing parts, get a few values - <S> you can put resistors in series or capacitors in parallel to increase the values. <S> The "attack" time constant is of the order of RC, where R is 50K or less, and it will decay visually slower because the LED current will drop. <S> 2200uF and 25K-50K is only about 20-40 ms, so it won't make much difference turning on, but will seem a bit less abrupt turning on and especially going off. <S> If you decrease the resistors to get more brightness, you'll have to increase the capacitor proportionally to get the same time constant, and you'll rapidly run out of practical sized capacitors. <S> In such a case it might be better to try just a single resistor. <A> I'd use a 5k potentiometer: <S> simulate this circuit – Schematic created using CircuitLab. Adjust brightness as desired by turning the knob. <S> If you want to be fancy, mount the potentiometer so that it is accessible from outside. <A> First of all, measure the voltage just to be sure of the LED supply voltage. <S> If you can, find the resistor that is limiting the current, then measure the voltage drop across the LED. <S> It shouldn't be too difficult to find if you get a multimeter with a continuity buzzer and can get a probe on the LED. <S> This will give you the necessary numbers to calculate the current driving the LED. <S> Then you can just use a simple calculation to find the resistor you need to limit the current to whatever you want it to be! <S> Rled = (Vs-Vf)/I, where Vs is the supply voltage and Vf is the forward voltage of the LED. <S> You could play around with some values, or attempt to calculate it, by using the capacitor discharge equation (V = Vs*e^(-t/RC)) and transposing for C... but that may be a bit too time consuming, so maybe just play around till you get something you feel is good! <S> There may be better ways, but this is how I would do it. <A> It's quite simple: no soldering, no CE labels, no messing around.
<S> No DVM required for testing. <S> Plus you get refills by the dozen, so next time you have a too-bright LED it costs nothing but the time to remember where you slid the pack of these little beauties. <S> It costs about 4 quid from here, and here's the originator's web site. <S> Don't try it with high power lasers though (I mean... who would?).
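As a quick sanity check of the resistor formula from the third answer (Rled = (Vs - Vf)/I), here is a small Python sketch; the 3 V forward voltage and 2 mA target are illustrative assumptions, not measurements from the actual HTPC LED.

```python
def led_resistor(vs, vf, i):
    """Current-limiting resistor for an LED: R = (Vs - Vf) / I."""
    return (vs - vf) / i

# Assumed values: 5 V supply, ~3 V white-LED forward drop,
# 2 mA target for a dim glow (well below a typical 20 mA rating)
r = led_resistor(5.0, 3.0, 0.002)
print(r)  # 1000.0 ohms
```

The nearest standard value (1 kΩ) would do; since brightness is perceived logarithmically, the exact value is not critical, as the answer notes.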
| I'm not too sure about wiring a capacitor in front of the LED; it would depend on the resistor in front of it, and how quickly the LED blinks.
|
What determines the maximum size of a cpu cache? Looking at a list of the very latest CPUs, I see several of them with a cache size of 12MB or 8MB - pretty small, when compared to the ever-increasing size of hard drives and ram. It seems to be taken for granted that a cpu cache will always be small, but why is that? Is it just economically unfeasible, or are there engineering reasons why it has to stay small? I'm thinking of some cryptocurrencies (e.g. Ethereum) that are designed to be memory-hard, so the speed of the algorithm is limited by the IO bandwidth of the memory, with the idea that this makes it impossible to design a custom chip specifically to solve that algorithm as was done for bitcoin. But, if someone was making a custom chip anyway, couldn't they just cram a gigabyte into the cache and do away with the IO bottleneck? <Q> It's a trade-off between the higher hit rate of a large cache and the faster speed of the smaller cache RAM. <S> Hit rate follows a law of diminishing returns as the cache gets larger. <S> Doubling the size of a large cache may yield less than a couple of percent increase in hit rate, but it'll certainly increase its access time, which slows CPU throughput. <A> The issue is structure size vs. signal propagation speed. <S> If you build a larger cache, it takes up more space physically, meaning the length that the signals have to travel increases, which reduces the maximum clock rate the cache can be run at. <S> The L1 cache needs to run synchronously with the CPU in order to be useful, so the size of the cache is a limiting factor in clock speed. <S> Other cache layers can be larger and run at slower clocks, but this requires the CPU to wait until the cache answers. <S> The GPU approach is to have lots of threads per core (i.e. an SMT factor of 16 or larger, compared with 2 on Intel CPUs or 4 on POWER).
<S> This means that each individual thread will only run at a fraction of the clock speed, but results for a memory access are not required until all the other threads have begun their memory access, at which point the result for the first access should be ready. <S> This is why GPU mining is interesting for these coins. <A> Gigabyte-sized memories (DRAM) don't use the same manufacturing process as CPUs (logic), so you can't have both within the same chip, unless you are willing to make compromises that would make the whole thing inefficient. <S> Moreover, big cache memories use content-addressable cells, which are even more complicated to make. <S> In short, making a gigabyte-sized cache is not really doable economically. <S> But anyway, even if you were able to do it, it wouldn't solve your problem. <S> You would maybe save a few cycles of latency, but would still need to have a bus between the processor and the memory, even within the chip. <S> So that would be your bottleneck in the same way it is on a regular computer.
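The diminishing-returns argument in the first answer can be sketched with the standard average-memory-access-time (AMAT) model; all numbers below are hypothetical, chosen only to show how a small hit-rate gain can be cancelled out by a slower hit time.

```python
def amat(hit_time_ns, hit_rate, miss_penalty_ns):
    """Average memory access time: hit time plus miss-rate-weighted penalty."""
    return hit_time_ns + (1.0 - hit_rate) * miss_penalty_ns

# Hypothetical: doubling the cache buys 1% more hits but doubles hit time
small_cache = amat(1.0, 0.95, 100.0)  # 1 ns hit, 95% hit rate -> 6.0 ns
large_cache = amat(2.0, 0.96, 100.0)  # 2 ns hit, 96% hit rate -> 6.0 ns
```

With these made-up numbers the larger cache is no faster at all: the extra hits are exactly paid back in access time.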
| Cache memory uses static RAM, which is more akin to logic, but which takes more silicon space than DRAM, so you can't have as much, practically.
|
How to compare footprints of SMD? How can I easily compare dimensions and footprints of different-size SMT components? I mean, how can I compare the footprint of a 0603 with that of a 0402 component? For example, I want to know whether it is possible to use a 0402 component on a 0603 footprint, or some other parts. In which software or web services can I do it? <Q> It completely depends on how the footprint for the 0603 part was designed. <S> Although there are recommended sizes for all of the solder pads for each component, some PCB designers deviate from those standards for particular reasons. <S> In particular, the pads are often elongated towards the middle of the component precisely to allow a smaller component to be used on that footprint. <S> This is often done during the prototyping stage to allow maximum flexibility when working with components that are on hand. <S> I would not expect that an automated pick and place machine would be able to do this. <A> If all you are looking for is an image, this might be a good comparison: <S> In this image, on the left side, the bottom one is the 0402. <S> On the right side, the blue one is the 0402. <S> The red and blue rectangles are the pads for soldering. <S> As you can see, they overlap each other, so you might be able to solder a 0402 to a 0603. <S> Edit: This image was made in EasyEDA, which I would recommend if you don't have software to work with. <S> Edit 2: Changed the picture, the package was wrong. <A> I used Eagle software to compare these two packages. <S> And it looks like we can use a 0402 component on a 0603 footprint. <S> But maybe only with hand soldering. <S> Here is how the 0603 footprint looks, with two components of dimensions 0402 and 0603:
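The overlap reasoning above can be sketched numerically. The pad-gap and body-length figures below are assumed nominal values, not taken from any particular library; always check the actual footprint in your own EDA tool.

```python
def terminal_overlap_mm(body_length_mm, pad_inner_gap_mm):
    """How far each end of the component reaches onto a pad, given the
    component body length and the gap between the pads' inner edges.
    A positive result means the part can bridge both pads."""
    return (body_length_mm - pad_inner_gap_mm) / 2.0

# Assumed: 0603 footprint with ~0.6 mm between pad inner edges,
# 0402 body length ~1.0 mm
overlap = terminal_overlap_mm(1.0, 0.6)
print(overlap)  # 0.2 mm of pad under each terminal
```

A small positive overlap like this is usually hand-solderable; a negative result would mean the part cannot reach both pads at all.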
| That said: you can often make a 0402 component work on a 0603 footprint IF you are placing and soldering the component by hand.
|
Using Capacitors on a PCB So I have built a custom PCB with a NRF24L01+ 2.4Ghz transmitter breakout board on it. I have heard the modules are incredibly sensitive to input voltage, so I have it connected to its own dedicated 3.3V regulator and a 100uF tantalum capacitor. When I turn this board on, I still have a significant amount of drops per minute. When I take the tantalum capacitor off the SMD pads and solder a 100uF aluminium electrolytic capacitor on the pins of the breakout board, the problem seems to go away. I have the schematic and board shown below. After doing a significant amount of testing, I have found the following results and plotted them on the following graphs. I let each test run for 10 minutes and then averaged the drops per minute. I kept one of the NRF modules on another PCB connected to an aluminium electrolytic capacitor on its pins, and changed the other's capacitor configuration. Can anyone tell me why this is? I can't figure out why just moving the capacitor from the SMD pads to the pins of the NRF increases stability. I also don't understand why an aluminium electrolytic capacitor would be better than a tantalum capacitor, but that seems to be what the data is showing. Another issue that I found is that when no capacitor is connected to the NRF module at all, this line of code stalls and stops. Anyone have a clue why that might be? if (!radio.write( &myData, sizeof(myData) )) { /* send data, checking for error ("!" means NOT) */ Serial.print("Transmit failed "); /* when no capacitor is connected, the code freezes in this if statement */ count = count + 1; RF_Flag = false; } If anyone can help me with any of these issues, that would be greatly appreciated! I cannot seem to figure out why this is. <Q> Without more details on the capacitor specifications we can just speculate on the exact reasons; however, there is one problem that clearly jumps out. <S> Technically speaking, your power traces are crappy.
<S> Their resistance and inductance would be too high for a power supply trace. <S> This introduces power droops and ringing into the system. <S> If you have too good a capacitor (low ESR), ringing will become a problem (this could be why tantalum is worse) and, regardless of the capacitor type, the thin segments of trace between it and the connector will introduce inductive and resistive droops in the power supply. <S> Add some wire between the connector and the capacitor to reduce trace impedance and start from there. <S> Oh, and regarding that stalling line of code: if the power distribution to your microcontroller is as bad as in that small section you posted, it's very likely that power to it is dropping too much when the RF transmitter turns on. <A> It could be due to: <S> Capacitor ESR <S> The cap with lower ESR (within reasonable limits) will provide the best decoupling. <S> The tantalum is specced around 1-1.5 ohms ESR, and a junk alu cap from the bottom of the drawer would be about the same or higher, unless it is a low-ESR model. <S> So, this isn't conclusive. <S> Also the RF chip's current draw is pretty low, a couple of tens of mA, so this shouldn't matter. <S> Layout <S> Your power and ground traces are really thin, which increases resistance. <S> This layout looks like it was done by an autorouter, which is not a good sign... <S> Connector malfunction <S> When the capacitor is "on the pins" it works better than when soldered on the board. <S> This isn't normal. <S> Of course, fixing the contact problem is the correct solution here. <S> Regulator instability <S> This depends on the regulator, the caps (including the ones on the module) and the layout. <S> Having a scope would definitely help to check the VCC voltage drop when the module transmits. <A> You should have used a ground plane on the top & bottom layers instead of running a 6 or 8 mil wide ground trace around the board. <S> The power trace should also have been a lot wider.
| If the connector is defective and doesn't make a good contact, a capacitor soldered on the module would work better than on the carrier board (on the other side of the defective contact).
|
AC-circuit with resistors and capacitors in series and parallel Is there any frequency for E, where the potential between A and B is zero in this circuit? I tried using the jw-method to get the current, but it became too complex, which tells me that there must be a shorter way. <Q> Don't try to calculate any currents. <S> The circuit is set up as 2 voltage dividers. <S> Calculate the voltage at A using the usual formulas for the impedance of a capacitor and a voltage divider. <S> Then calculate the voltage at B using the same formulas. <S> Equate the two voltages (since if they are equal, there is no potential between A and B) and solve for the frequency. <S> The answer is very simple in terms of R and C. <S> Then substitute the given values of R and C to find the actual frequency. <A> Thanks for the help! <S> Voltage dividers: $$\frac{1}{Z_1}=\frac{1}{Z_C}+\frac{1}{R} \Leftrightarrow Z_1=\frac{R\cdot Z_C}{R+Z_C}$$ $$\frac{V_A}{E}=\frac{R}{Z_1+R}=\frac{R}{\frac{R\cdot Z_C}{R+Z_C}+R}=\frac{R}{\frac{R\cdot Z_C + R(R+Z_C)}{R+Z_C}}=\frac{R(R+Z_C)}{R(R+2Z_C)}=\frac{R+Z_C}{R+2Z_C}$$ $$\frac{V_B}{E}=\frac{2R}{(R+Z_C)+2R}=\frac{2R}{3R+Z_C}$$ $$V_{AB}=0 \Leftrightarrow V_A=V_B$$ $$\frac{R+Z_C}{R+2Z_C}=\frac{2R}{3R+Z_C}$$ $$(R+Z_C)(3R+Z_C)=2R(R+2Z_C)$$ $$3R^2+4RZ_C+Z_C^2=2R^2+4RZ_C$$ $$R^2=-Z_C^2$$ With \$Z_C=\frac{1}{j\omega C}\$: $$R^2=-\left(\frac{1}{j\omega C}\right)^2=\frac{-(-1)}{(\omega C)^2}=\frac{1}{(\omega C)^2}$$ $$R=\frac{1}{\omega C}$$ $$\omega=\frac{1}{RC}$$ With \$\omega=2\pi f\$: $$f=\frac{1}{2\pi RC}=\frac{1}{2\pi \cdot 10^3 \cdot 10^{-6}} \; Hz=\frac{500}{\pi}\; Hz\approx 160 \; Hz$$ <A> Look carefully.
<S> The voltages \$U_a\$ and \$U_b\$ are the same if $$2 \cdot (X_C||R) = X_C + R$$ $$2 \cdot \frac{X_C \cdot R}{X_C + R} = X_C + R$$ $$2 X_C R = (X_C + R)^2$$ $$2 X_C R = X_C^2 + 2 X_C R + R^2$$ $$0 = X_C^2 + R^2$$ $$-X_C^2 = R^2$$ With \$X_C = -\frac{1}{j\omega C}\$: $$-\left(-\frac{1}{j\omega C}\right)\cdot\left(-\frac{1}{j\omega C}\right) = R^2$$ $$-\left(-\frac{1}{j}\right)^2\cdot\left(\frac{1}{\omega C}\right)^2 = R^2$$ $$-(j^2)\cdot\left(\frac{1}{\omega C}\right)^2 = R^2$$ $$1 \cdot\left(\frac{1}{\omega C}\right)^2 = R^2$$ $$\frac{1}{\omega C} = R$$ <S> It's often easiest to keep \$X_C\$ and \$X_L\$ in the formulas as long as possible. <A> simulate this circuit – Schematic created using CircuitLab <S> We know that $$Z_{C}(f) = \frac{1}{2\pi fC} = \frac{1}{2\pi f \cdot 0.000001}$$ <S> Then, the only remaining thing to do is to find the possible solutions of: $$\frac{R}{(Z_C(f) \parallel R) + R} = \frac{2R}{3R + Z_C(f)}$$ <S> When both of the "branches" look alike at some frequency \$f_{answer}\$, then A and B will be held at the same ratio of \$V_1(t)\$ and hence will have the same electrical potential. <S> If you were interested in current, then you would need to compute the phases. <S> If the source was a current source instead, it would be reversed. <S> The currents would be "easy" to compute, but you would need to account for the phases in order to calculate voltages.
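A numeric cross-check of the result above, sketched in Python with complex impedances, assuming the same topology as in the derivations (branch A is R in series with R∥C; branch B is R+C in series with 2R):

```python
import cmath

R, C = 1e3, 1e-6  # 1 kohm and 1 uF, as in the question

def va_over_e(f):
    """Branch A divider ratio: R over ((R parallel Zc) + R)."""
    zc = 1 / (1j * 2 * cmath.pi * f * C)
    z1 = (R * zc) / (R + zc)
    return R / (z1 + R)

def vb_over_e(f):
    """Branch B divider ratio: 2R over ((R + Zc) + 2R)."""
    zc = 1 / (1j * 2 * cmath.pi * f * C)
    return 2 * R / (3 * R + zc)

f0 = 1 / (2 * cmath.pi * R * C)            # ~159.2 Hz
diff = abs(va_over_e(f0) - vb_over_e(f0))  # ~0: A and B at equal potential
```

At f0 both ratios come out to (3+j)/5, so V_AB = 0, confirming the f ≈ 160 Hz answer; at any other frequency the two ratios differ.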
| There is no need to compute currents, as for a fixed-frequency AC voltage source, the capacitors only "look" like an impedance when you're interested in voltage.
|
Why do lights get brighter when I turn on the hair dryer? Intuitively, the hair dryer sucks power out of the wires, causing a voltage drop, causing the lights to dim. But the opposite happens: the lights get brighter. These are LED bulbs. It is the heating element that does this, not the motor (running the motor with no heat has no effect on the bulbs). The lights are on a dimmer circuit. What causes this effect? I am assuming it has something to do with the LED power regulator or the dimmer, but am curious to know the details. <Q> I didn't think of this, but adding to Spehro's addition to KingDuken's comment, this might help. <S> The socket might be on L1 and the LED light on L2. <S> simulate this circuit – Schematic created using CircuitLab <S> Figure 1. A North American split-phase domestic supply. <S> Let's say L1 is 120 V and your hairdryer is about 12 Ω, so roughly 10 A will flow. <S> Let's say there is a (very high) neutral resistance of 1 Ω; then there will be a 10 V drop across R2. <S> Now L1-N = 110 V and L2-N = 130 V, so the lights will get brighter. <S> You should be able to prove this to yourself by monitoring the lighting circuit voltage with a multimeter while switching the hairdryer on and off. <A> A high resistance neutral connection will cause more voltage to show up on the other side of the 120:120 circuit, assuming North American mains configuration, just as @KingDuken mentions in a comment. <S> It could be a faulty connection or just a long length of minimum gauge wire. <S> Edit: Charles' comment is 100% correct that wiring to code (North America) will run the conductors back to the circuit breaker panel, and the wire resistance on the other side of the circuit breakers should not be a factor (assuming it's wired to our code). <S> So that kind of leaves a bad neutral connection, or the weird possibility of a light that gets brighter as the voltage decreases.
<S> That's impossible with incandescent bulbs, but with LED bulbs it's not inconceivable. <A> Let's assume your hair dryer (= its heater) causes a substantial AC voltage drop. <S> If the LED bulb happens to have a "go where the fence is lowest"-designed controller, the brightening is understandable: <S> Controller X turns T1 ON => LED current IL starts to increase gradually. <S> X turns T1 OFF when the current IL has reached the allowed maximum (= the voltage over Rs reaches the cut-off limit). <S> IL continues through D1, but decays gradually. <S> When IL is assumed to be low enough, T1 is turned ON again and the LEDs get a new pulse. <S> The ON state of T1 gets longer if the AC voltage drops, because current IL grows slower when the inductance voltage is lower. <S> The OFF state of T1 has a fixed length. <S> => The average IL grows when the AC voltage has dropped. <A> The dimmer may be using a triac and a zero-cross detection circuit to do the dimming, and the lower voltage should be changing the zero crossing point (it must be detected earlier than usual). <S> Hence the triac is open for a little longer when the voltage is reduced, which increases the overall open time, which in turn increases the brightness. <S> Theory <S> The hair dryer will be drawing a lot of current, reducing the overall voltage. <S> Reducing the voltage in turn provides less voltage to the zero-cross sensing optocoupler, and the crossing point (pulse approaching zero) will be sensed sooner due to the lower source voltage.
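The split-phase arithmetic from Figure 1 can be sketched as below, an illustration of the proposed mechanism using the answer's example numbers (a roughly 10 A load through a 1 Ω neutral):

```python
def split_phase_voltages(v_leg, load_current, r_neutral):
    """Leg-to-neutral voltages when an unbalanced load drags the neutral
    point toward its own leg through a resistive neutral connection."""
    shift = load_current * r_neutral
    return v_leg - shift, v_leg + shift

# Hair dryer drawing 10 A on L1, 1 ohm (very high) neutral resistance
l1_n, l2_n = split_phase_voltages(120.0, 10.0, 1.0)
print(l1_n, l2_n)  # 110.0 130.0 -> the lights on L2 get brighter
```

With a healthy (near-zero) neutral resistance the shift vanishes and both legs stay at 120 V, which is why this points at a bad neutral connection.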
| I would suspect the dimmer is what's causing the lights to get brighter.
|
Can you stack voltage regs for higher voltage? Can I stack these on top of each other http://www.ti.com/product/TPS7A47/datasheet without degrading AC performance? Will the output impedance of the lower ones affect the output impedance of the higher ones, since they are serving as the new ground connection, or does it not matter because it is an active circuit? Like this <Q> Cascaded (stacked) circuits are not unheard of. <S> Here is an example from Texas Instruments. <S> However, to have a stable (elevated) ground for the upper regulator U2, the bottom regulator U1 should be able to source AND sink current, which is not the case with either the TPS7A47 or the LM1084. <S> The good thing is that the ground current for the TPS7A47 is no more than 1 mA, so you might want to load U1 with more than 1 mA of extra load, so you will have some regulated output both ways. <S> Obviously you would need proper load capacitance on U1 and U2. <S> Obviously the output noise will likely go up 40%, because the noise of U1 will be added. <S> Also you might have overall stability issues, due to modifications to the loop transfer function caused by the extra impedance at the ground point. <S> Here is one example where this kind of circuit was not successful. <S> The concept of "output impedance" is not used in LDO regulator technology, so you would need to look at the quality of regulation (ripple rejection, load transient response, etc.). <S> I don't think they will be affected much if the overall circuit is stable. <A> Can I stack these on top of each other without degrading AC performance? <S> Consider the regulator at the top of the stack. <S> It will naturally pass a small current through its GND connection. <S> This can be tens of uA to a few mA (like the LM78xx type). <S> For your regulator the GND current is 0.5 mA to 6 mA typically, so it is significant.
<S> This type of regulator also cannot prevent its output voltage rising if there is a load connected to a higher voltage, and this is the case with the regulator at the bottom of the stack. <S> The upper regulator is pushing 0.5 mA to 6 mA into the lower regulator's output, and this will cause problems unless the lower regulator has a load resistor fitted to ground that can balance this current out. <S> So you need a balancing resistor: <S> Will the output impedance of the lower ones affect the output impedance of the higher ones since they are serving as the new ground connection or does it not matter because it is an active circuit? <S> In my words above I gave the solution for the DC scenario but, for AC scenarios, you have to live with this problem and ensure the output capacitors (and balancing resistor) are all present. <A> I'm trying to regulate the power supply of an audio circuit that is above the voltage rating of the regulator IC I want to use. <S> Be careful about voltages during turn-on and turn-off. <S> For example, this is a 7805 sitting on top of a zener, which turns it into a 7815 that can take 10 more volts at the input. <S> simulate this circuit – Schematic created using CircuitLab <S> However, if the output caps are large enough, at power-on the input voltage will rise faster than the output voltage, and the max rating of the regulator will be exceeded. <S> Also in case of an output short circuit, the max voltage rating will be exceeded, so the regulator's internal protections no longer work. <S> This schematic is better: simulate this circuit <S> Here, a very simple BJT/Zener pre-regulator brings the voltage down. <S> Regulator input/output caps depend on the regulator's datasheet. <S> R1 should have a proper value to give Q1 enough base current (you can also use a darlington or a MOSFET). <S> Filtering the zener reference with a cap isn't necessary, but it'll give you a bit of extra HF PSRR.
<S> Global PSRR will be improved by 20-30 dB relative to the original regulator, as Q1 will filter the input voltage. <S> Output impedance, noise and other characteristics like load response will be the same as your regulator's. <S> Also this splits the dissipation in two. <S> You should try to set the zener voltage for equal dissipation in the transistor and the regulator. <S> The same amount of power dissipated in two components is easier to get rid of than in one component (you get 2x the area of silpad). <S> The regulator's internal protections still work and will protect Q1. <S> But if the voltage allows, you can use an LM317HV; it's the simplest option.
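Sizing the balancing resistor from the second answer can be sketched as follows; the 5 V bottom-rail figure is a made-up example, while the 6 mA worst-case ground current is the datasheet bound quoted in that answer.

```python
def max_balancing_resistor(v_bottom_rail, i_gnd_max):
    """Largest resistor that still sinks the upper regulator's worst-case
    ground current at the lower regulator's output voltage (R = V / I)."""
    return v_bottom_rail / i_gnd_max

# Example: 5 V bottom regulator output, up to 6 mA pushed into it
r_max = max_balancing_resistor(5.0, 6e-3)
print(round(r_max))  # 833 ohms -- use this value or smaller
```

Any resistor at or below this value guarantees the bottom regulator's output cannot be pushed up by the upper regulator's ground current, at the cost of a little extra standing load.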
| Yes, the output voltage at the top of the stack is affected by the stability of output voltages at the bottom of the stack.
|
Eliminating or reducing the red glow from IR LEDs at night tl;dr; my question is simply this: is there a way to hide a bunch of IR LEDs (for a night-camera) so they don't get such a red glow at night? Preferably not at the expense of the light output, but that would be ok too, if that's the only way. Background (not-tl;did-r): I setup a baby cam for my son when he was born with a Raspberry Pi and a Pi cam with no IR filter so I could watch him at night. It has worked flawlessly and frankly I couldn't imagine parenting without it. He's now 1.5 years old, and sometimes wakes in the middle of the night and sits there talking to his stuffed animals. Just a few days ago he seemed to notice the IR light for the first time, and now he'll be fine until his eye catches the light and he seems to get terrified and start crying once he sees it. At least, I assume it's what's scaring him (there's also an LED on the Raspberry Pi itself). The light is more powerful than it needs to be, I believe it's a 48-LED cluster, so if a solution involves dimming the light, it would probably be fine. I was thinking about putting it in the opposite corner of the room, but keeping the camera above him, but I think the slats from the crib would block much of the light and it would be very shadow'y. And I'm not so sure that would keep him from seeing it. I was thinking maybe some sort of plastic (acrylic) plate or filter. I'm not sure. Thanks Edit, here's a pic of the setup. This is an older pic of an older IR light that I got from Ebay. Half the LEDs burnt out within about 3 or 4 months. It's since been replaced by another cheap Ebay one, but same basic setup. I'm definitely open to moving things around so it's not so... "in yo face!" <Q> If you use 940nm IR LEDs instead of the shorter wavelength ones, most people will not be able to see the red glow. <S> Our eyes are slightly sensitive to the shorter wavelength IR resulting in the red glow. 
<S> Most IR sensitive cameras don't have a problem with the 940nm LEDs, although some are slightly less sensitive at that wavelength. <A> The answer about shifting the wavelength of the IR away from 850nm is good; however, check if your camera has an IR cut filter fitted, as 940nm may not work. <S> You could try PWMing the LEDs above 300 Hz, where the human eye can't see them flicker, which would reduce the overall perceived brightness. <A> Actually, polarizers are a good filter for this case. <S> Buy a sheet from Amazon (~$12), cut it, and stack two pieces at 90 degrees. <S> This blocks visible light (the red glow) and allows near infrared to pass. <S> Remember not to block the camera if your LED is right next to it. <S> In my case it works really well, though you still see a tiny bit in a pitch-black room. <S> My baby doesn't stare at it anymore anyway.
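The PWM suggestion in the second answer works because, above the flicker-fusion rate, the eye averages the LED current; here is a minimal sketch of that averaging (the 20 mA peak drive is an assumed figure, not from the actual illuminator):

```python
def average_led_current(peak_current_a, duty_cycle):
    """Average LED current under PWM; above ~300 Hz the eye perceives
    roughly this average rather than the individual pulses."""
    return peak_current_a * duty_cycle

# Assumed 20 mA peak drive at 25% duty cycle
i_avg = average_led_current(0.020, 0.25)
print(i_avg)  # 0.005 A, a quarter of the on-state current
```

The camera's exposure also averages over its frame time, so the IR illumination dims for the camera too; you would trade some night-vision range for a less visible glow.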
| Acrylic will just diffuse the light and soften it, won't really reduce it much.
|
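One answer above suggests PWM dimming above the flicker-fusion rate. A quick back-of-the-envelope check (a sketch; the per-LED current, PWM frequency, and duty cycle below are illustrative assumptions, not values from the question):

```python
# Average drive current of a PWM-dimmed 48-LED IR array.
# All drive values here are assumed for illustration only.
PEAK_CURRENT_MA = 20.0   # assumed per-LED on-current
NUM_LEDS = 48            # LED cluster size from the question
PWM_FREQ_HZ = 1000       # comfortably above the ~300 Hz flicker threshold
DUTY = 0.25              # 25% duty cycle -> roughly quarter brightness

avg_current_ma = PEAK_CURRENT_MA * DUTY         # mean current per LED
period_ms = 1000.0 / PWM_FREQ_HZ                # one PWM period
on_time_ms = period_ms * DUTY                   # on-time per period
total_avg_a = avg_current_ma * NUM_LEDS / 1000  # whole-array average

print(f"per-LED average: {avg_current_ma:.1f} mA")
print(f"period: {period_ms:.2f} ms, on-time: {on_time_ms:.2f} ms")
print(f"array average: {total_avg_a:.2f} A")
```

At 1 kHz each on-pulse lasts only 0.25 ms, far too fast for the eye to resolve, so the array simply looks dimmer.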
Need for Temperature Compensation of Current Mirror I am currently learning about current mirror configurations. I have made two of them so far. Both of them worked as desired but, when heated or cooled, the current through the right side (the side where the output is taken from) decreased or increased significantly with small temperature differences. simulate this circuit – Schematic created using CircuitLab \$R_{load}\$ for both circuits was low or shorted to +10V. Both circuits were set to mirror a current of 500 uA. All transistors were hand-matched (they are all very close to each other as far as beta is concerned). Without emitter degeneration both circuits were significantly affected by temperature, especially Fig. A, where the current through \$R_{load1}\$ changed by 100 uA or more (1 second of heating) as I touched either Q1 or Q2 with a fingertip; but as the transistors Q4 and Q5 were touched with a fingertip, the current through \$R_{load2}\$ changed by 50 uA (1 second of heating also), which is less than in the first example but still too much. With emitter degeneration both circuits greatly improved their temperature stability. For example (the \$R_e\$ added were 1 kOhm), if I refer to Fig. B, the current through \$R_{load2}\$ changed only by 10 uA (when heated for approx. 1 second), while the result with Fig. A was a bit worse. Both circuits are improved as emitter degeneration is added to Q1/Q2 or Q3/Q4. In both examples, the current through Q1 or Q3 was approximately constant at all times, but the current through Q2 or Q5 wasn't even close to that. Is there any way to compensate either of the circuits shown here against varying temperature? I thought that Q5 was going to correct the temperature variation error in current, but obviously it didn't.
<Q> The three main steps are: a) Use as much emitter degeneration as you can; b) Match the temperatures of Q1 and Q2; c) Match the dissipation of Q1 and Q2. <S> For (b), at the very least, glue Q1 and Q2 together. <S> Far better is to use a monolithic transistor array like the CA3046, which contains 5 transistors made on the same substrate. <S> For a really hardcore thermally matched pair, the LM394 'SuperMatch' pair uses thousands of transistor die connected like a chessboard. <S> Q5 not only increases the output impedance, but also controls the dissipation in Q4. <S> Play with series drops on Q5's base or emitter to equalise the Q3/Q4 dissipation match. <S> A slightly more complicated solution with less bandwidth but much more precision is to do away with Q1 and use an op-amp to drive Q2 to equalise the voltage drops on Re1/2. <S> Replacing Q2 with a FET eliminates any beta-variation contribution to the output accuracy as well. <S> Then you only need to be concerned about amplifier Vos drift with temperature, and the tempco of the Re1/2 resistors. <A> This also smooths out some of the other error sources (like Early voltage). <S> Your second schematic doesn't exactly achieve this, as the Vce of one transistor is higher than the other. <S> Here we go: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> This is a full Wilson mirror, and Q3's role is to drop one Vbe to make Q1/Q2's Vce equal. <S> A cheap source of dual matched BJTs is the DMMT3904 and other dual transistors. <S> They are not monolithic, so the matching and temperature tracking isn't as good as the fancy ones, but they're cheap. <S> If you want ultimate precision, though, you would have to use a low-offset op-amp. <A> To achieve matched current sources, use transistor arrays such as the (original) RCA CA3046. <S> It's now sold by Harris or Intersil. <S> Matching is to 5 millivolts emitter-base, which is about 10%.
<S> For better than that, given you have no way to use multiple emitter stripes and inter-digitate them, you'll need emitter degeneration resistors.
| If you want to keep both transistors at the same temperature, they should have the same dissipation (ie, same current and same voltage).
|
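The size of the temperature sensitivity described in the question follows directly from the exponential VBE law. A rough numerical check (a sketch; the 1 degree temperature mismatch and the ~-2 mV/degree VBE tempco are assumed textbook values, not measurements from the question):

```python
import math

VT = 0.02585      # thermal voltage at ~27 C, in volts
I_REF = 500e-6    # programmed mirror current from the question
DVBE = 0.002      # VBE shift for ~1 C mismatch at -2 mV/C (assumed)
RE = 1000.0       # 1 kOhm degeneration resistors from the question

# Bare mirror: a VBE mismatch multiplies the output current exponentially.
ratio = math.exp(DVBE / VT)
err_bare_ua = I_REF * (ratio - 1) * 1e6

# With emitter degeneration, the mismatch mostly drops across Re,
# so the current error is roughly dVbe / Re (first-order estimate).
err_re_ua = (DVBE / RE) * 1e6

print(f"error, bare mirror: ~{err_bare_ua:.0f} uA")
print(f"error, 1 kOhm Re:   ~{err_re_ua:.0f} uA")
```

Roughly 40 uA versus 2 uA per degree of mismatch, which is consistent with the order-of-magnitude improvement the question reports after adding the 1 kOhm resistors.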
Sixty INDIVIDUALLY on/off switchable COB LEDs connected in parallel: how to drive? For an atmospheric custom ambient lighting project, consisting of 60 individually on/off switchable COB LEDs, each rated 2 Watts, 12~14V, max 200mA, all connected in PARALLEL, I'm looking to solve two questions. First I'll sketch the situation. Oftentimes only 1 (out of 60!) LED will be switched on. Other times between 3 and 6 LEDs will be switched on. Rarely will there be more than 18 LEDs switched on, and never all 60 at once. Q1. As a single LED driver, will a 35W, 3A driver suffice for my project? Q2. Will that tiny resistor on the COB PCB protect each LED from high currents of the driver? Or will I have to manually solder resistors in series with each LED COB? If so, what Ohms / Power value am I looking for? <Q> Since 12V supplies are quite cheap this shouldn't be a problem. <S> Note that normally "LED drivers" are constant current; this is not what you need here, since the current will depend on the number of LEDs which are ON. <S> However, manufacturers have discovered that people who want to use "12V LEDs" (which already include a current-limiting resistor and need a constant-voltage supply) will only buy one if the label says "LED driver". <S> Confusion is the result. <S> If it has "LED driver" written on the label, you have to check if it's constant voltage <S> (it will say "12V 3A") or constant current (it will say "3A 8V-18V", indicating the output voltage will adjust inside the range mentioned to keep the current constant). <S> If you're really lucky it will say what it actually is on the label... <S> You can use SMD dual FETs in SO-8 packages for small size. <S> There are also power shift registers. <S> If you want to use PWM at high frequency to avoid blinking, make sure you can clock the shift registers fast enough. <A> Q1. <S> As a single LED driver, will a 35W, 3A driver suffice for my project? <S> Maybe. RdsOn should be < 0.1 Ohm for a low V drop.
<S> A drop of 20mV reduces brightness by 6mA out of 200mA. <S> If I assume the COB is designed the way it is rated (200mA @ 12~14V), then Rs = 3.3 Ohms, and an RdsOn of 0.1 Ohm gives a 3% drop, or 20mV. <S> Your mileage may vary with RdsOn, which will affect conduction losses and exact output current. <S> Hopefully negligible. <S> Q2. <S> Will that tiny resistor on the COB PCB protect each LED from high currents of the driver? <S> YES. <S> If the tiny R is 3.3 Ohms, the current and brightness will be controlled by the 12~14V supply, by the difference between V+ and the low-current Vf of the COB: I = deltaV / Rs. <S> It will be dim at 11.34V, and thus using 12.0V, 0.66V/3.3R = 200mA; but if the COB has high ESR, then more voltage may be needed, hence the loose ~14V end of the spec in order to get 200mA. <S> This means process control of Vf may be good or not, so the 12~14V range is your choice to get max rated brightness or not. <S> The COB should get warm, ~50°C, but not burning hot. <S> Of course you need a 12V supply capable of >12A if all 60 could be on, such as a 500W PC PSU, more or less depending on its 12V rating. <A> I would use a 50W supply, as 18*2 = 36W. <S> But to be on the safe side, I would recommend 120W, just in case all 60 LEDs are lit together by mistake while you are away and you come back three days later. <S> You never know. <S> Depends on your budget. <S> Choose an adjustable power supply and play with it. <S> Constant voltage. <S> R = (Vsupply - Vled) / I, so R = (12-3)/0.2 = 45 Ohms. <S> If I is 0.15, R = 60 Ohms. <S> I would recommend 60 Ohms. <S> Check if the resistances are between these two values. <S> Note that there are two resistors and, as I imagine, two parallel circuits on the COB. <S> Each should be rated 1W. <S> They may be strong enough to sustain 1W, yet barely enough. <S> If these resistors are much less than 45 Ohms, then you'll have to add resistors for each COB. <S> But I don't think so.
<S> You can also increase the existing resistance by adding resistors to make the LEDs more stable and longer-lasting, albeit less powerful. <S> Say, from 45 to 60 Ohms. <A> Q1. <S> As a single LED driver, will a 35W, 3A driver suffice for my project? <S> It would suffice. <S> A 12V, 60 watt, 5A driver (25 LEDs) would do better. <S> 3 Amps will work well for 15 of these LEDs. <S> If the power supply is insufficient, likely the only drawback will be that the LEDs will not be as bright. <S> 200 lumens is a very bright LED. <S> In most cases the human eye would not perceive the dimming associated with insufficient power. <S> My concern would be the amount of time an LED will be on continuously. <S> A 2W LED will get very hot. <S> You may need a heatsink. <S> COBs are usually designed to be mounted directly to a heatsink. <S> Heatsink USA sells a 2.425"-wide heatsink at 50¢ per inch. <S> Q2. <S> Will that tiny resistor on the COB PCB protect each LED from high currents of the driver? <S> Yes, if powered with a 12V power supply. <S> If you want anything other than a 200 lumen LED, do not buy these LEDs; buy some that work for your project. <S> These LEDs are for lighting, not to be viewed directly.
| As the others said, you will need a constant-voltage power supply of 12-14V (your choice) capable of outputting enough current, which depends on how many LEDs you want to light at the same time. Now you will need individual switches to control your LEDs; considering the number, the easiest would be MOSFETs driven by shift registers or I2C IO expanders.
|
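The resistor arithmetic scattered through the answers above can be collected into one worked example (a sketch; the 11.34 V low-current forward drop and the 3.3 Ohm on-board resistor are values assumed in the first answer, not measurements):

```python
# Per-COB current set by the on-board series resistor at 12 V.
V_SUPPLY = 12.0
V_LED = 11.34     # assumed low-current forward drop of the COB
R_ONBOARD = 3.3   # assumed on-board series resistance, ohms

current_a = (V_SUPPLY - V_LED) / R_ONBOARD  # Ohm's law on the drop
p_resistor_w = current_a ** 2 * R_ONBOARD   # dissipation in the resistor
i_18_leds_a = 18 * current_a                # worst realistic case
i_60_leds_a = 60 * current_a                # everything-on fault case

print(f"per-COB current: {current_a * 1000:.0f} mA")
print(f"resistor dissipation: {p_resistor_w:.3f} W")
print(f"18 LEDs on: {i_18_leds_a:.1f} A, all 60 on: {i_60_leds_a:.1f} A")
```

So 18 LEDs draw about 3.6 A, slightly over a 3 A driver's rating, and an all-60 fault would need 12 A, which is why the answers suggest generous supply headroom.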
PNP output low voltage is not 0V First, I'm a newbie to electronics. With the above circuit, why is the output 0.6V instead of 0V? I want to achieve input HIGH at the base giving output HIGH, and input LOW giving output LOW, but the LOW voltage is not 0V. Where is the problem and how do I fix it? <Q> For a PNP emitter follower, like your circuit, the emitter will always be about 0.6 - 0.7 volts more positive than the base. <A> This is because the transistor needs a voltage of about 0.6V between the base and the emitter, VBE, in order to turn on or be in the active region. <S> What you have is completely normal and expected. <S> The transistor needs to consume some energy in order to turn on. <S> (I'm not an expert on level shifters) <A> Also, "LOW" doesn't mean 0V; <S> the devices you intend to use will have their own specifications on "below what voltage to consider something LOW" and "above what voltage to consider something HIGH". <S> Commonly used logic nowadays considers more than 3.3 or 3.7V as high and less than 1.5 or 1.7V as low, but these levels vary between devices and the logic families they belong to. <S> You can find the specifications in the datasheet, but as far as I've seen, 0.6V is low enough to be taken as "LOW" by most things...
| To solve this problem you could use a level shifter, which is usually another transistor (maybe an NPN) which will consume the 0.6V at the output and have its own output at 0V. As others have suggested, what you have is normal and expected.
|
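The behaviour described in the answers can be captured in a two-line model (a sketch; the 0.65 V VBE and the 5 V rail are assumed typical values, and the function name is hypothetical):

```python
VBE = 0.65  # assumed typical silicon base-emitter drop, volts

def pnp_follower_out(v_base: float, v_rail: float = 5.0) -> float:
    """Emitter of a PNP follower sits ~one VBE above the base,
    clamped at the positive rail once the transistor turns off."""
    return min(v_base + VBE, v_rail)

print(pnp_follower_out(0.0))  # input LOW: output is ~0.65 V, not 0 V
print(pnp_follower_out(5.0))  # input HIGH: output reaches the rail
```

This is exactly the offset the question observed: pulling the base all the way to ground still leaves roughly one VBE at the emitter.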
How is it possible to have high voltage that is safe? I am aware that current is very dangerous and only 0.2 amps is enough to stop a heart. However, I always read that high voltage is dangerous. Tasers produce a high voltage, but since there is low current they are considered safe. How is this possible? According to Ohm's law, current is equal to voltage divided by resistance (I = E/R). So if you are being tased by 10,000 volts and your resistance is only 1000 ohms, wouldn't there be 10 amps flowing through you and killing you? (10,000/1000 = 10) <Q> Safety standards are different for a taser than for an ordinary electrical appliance. <S> The whole point of a taser is to have an adverse effect on the human body, and a small fraction of people who get tased do die from it. <S> This risk is considered acceptable (by some people), since the alternative is for the taser user to use a gun or nightstick instead, either of which has a risk of death. <S> However, if you are designing a kitchen appliance or a television set, and it has the same effect on its user as a taser, that would be a gross failure and an unacceptable risk. <A> Let's look at other things which work this same way. <S> A metal-halide light is a type of arc-discharge light. <S> Like most arc-discharge lights, it is practically a dead short once the arc is struck. <S> So why doesn't a metal-halide light basically explode once it ignites? <S> Because it is fed from a current-limiting power supply. <S> But HID ballasts do exist which are electronic, and do the same thing with semiconductors. <S> These are similar to LED driver modules, except with additional features to strike the arc and warm up the bulb. <S> Similarly, taser control modules hit the victim with enough voltage to strike the arc, then limit current to "correct" values. <A> Wouldn't there be 10 amps? <S> Maybe for the first few nanoseconds, but no. <S> Your question is too vague to answer for all conditions. <S> Make what safe? A taser?
<S> Most of the electro-muscular currents bypass the heart due to the external dielectric mass. <S> So an external defibrillator might use 10k× more energy to start a heart in an emergency than is used in open-heart surgery. <S> The source impedance limits the current to the desired levels, while the high initial voltage ionizes the contact to lower the contact impedance. <S> The safety of insulation depends on the medium (3kV/mm for clean air) and on electrode or bushing geometry, which has a 5:1 effect on the E-field gradient stress from a smooth donut to a sharp needle. <S> Distribution capable of 10kA * 600V, which can sustain a human arc flash, is many orders of magnitude more unsafe than 100kV @ 10mA. <S> But a transformer substation with 200kV basic impulse limit (BIL200) protection will protect, yet fail with 60kV at line frequency on a 40kV grid, due to insulation strength raised by source rise time from ionization delays. <S> All line-powered products must be factory tested with safety hipot leakage tests in each country, in the range of 3kV, with <100uA expected, except for line filters, which are allowed up to 250uA per power supply.
| Most HID lights use a magnetic wound-transformer ballast, which is rigged to limit current.
|
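The question's I = E/R calculation misses the source-impedance term, which is exactly what a current-limited high-voltage supply relies on. A quick sketch (the 1 MOhm internal impedance is an illustrative assumption, not a taser specification):

```python
# Ohm's law with and without the source's internal impedance.
V_SOURCE = 10_000.0     # open-circuit voltage from the question
R_BODY = 1_000.0        # body resistance from the question
R_SOURCE = 1_000_000.0  # assumed internal impedance of the supply

i_naive_a = V_SOURCE / R_BODY              # the question's calculation
i_real_a = V_SOURCE / (R_SOURCE + R_BODY)  # full loop resistance

print(f"ignoring source impedance: {i_naive_a:.0f} A")
print(f"including source impedance: {i_real_a * 1000:.2f} mA")
```

The body's 1 kOhm is negligible next to the 1 MOhm source, so the current stays pinned near 10 mA almost regardless of the victim's resistance.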