Can a PCB footprint be extracted from a solid model? I usually just make a PCB footprint if I need it (Solidworks PCB, Altium, Protel99, OrCAD, etc). I'm an electrical engineer. I've seen very nice 3D solid models for some parts and I wonder if the PCB footprint could be extracted from the 3D model? I know very little to nothing about 3D models. I've seen the word "STEP" in lots of them and I think Solidworks can open them, but besides that I'm clueless. Could someone enlighten me pls? Thanks. <Q> Not very effectively. <S> A 3D model will, at best, tell you where the part has pins. <S> A footprint needs to specify the size and shape of the pads and soldermask for those pins, which may be significantly different from the shape of the pins themselves. <S> Determining how different it needs to be is often a matter of trial and error to find what has the best yield in production. <S> (And hand-solderable footprints are another matter entirely.) <S> If it's something entirely novel, though, you're best off looking to the manufacturer for recommendations. <A> Software could, but this would be the worst way to do it. <S> You could find the area around the pins and then generate pads around those areas; you may even be able to get Altium to script it for you. <S> The problem is that footprints are also sized to account for things like pin tolerances (not all packages of the same part are created equal; the pins are in slightly different positions and this needs to be accounted for) and solder wicking/tensioning (pads are sometimes sized slightly larger than strictly needed to hold extra solder paste). <S> For many packages, standards have been developed (by IPC, for one), which means that many libraries already have the footprints for many of the packages available. <S> It is also increasingly unnecessary to even bother with datasheets and footprints at all. <S> For example, Altium has a footprint generator that covers (to my estimation) 90% of the parts out there. 
<S> The latest version of Altium also has a manufacturer search that can pull many footprints and schematic parts directly into a design, so a designer doesn't even need to bother with footprints. <S> Octopart also has footprints available. <S> Yes, you'll run into the odd part that doesn't have a footprint available <S> and you'll need to pull it from the datasheet. <S> You'll want to pull the mechanical information of the footprint from the datasheet because the manufacturer has probably tested the tolerances of the footprint with the part, which will result in the best outcomes for your PCB layout. <A> Sure, assuming the model had precise dimensions and wasn't just an artist's impression. <S> You could extract the dimensions, use some algorithm to try and figure out where the base is, determine if it's SMD or through hole. <S> Then some set of rules could be used to generate pads, etc. <S> Though without some sort of standard the models have to follow, it's going to make mistakes. <S> I'm not really sure what advantage there would be to doing this, though. <S> I think maybe you're looking at it backwards. <S> It's very easy to generate a 3D model from a PCB layout: your 3D components can be placed onto their footprints and you get a complete 3D view of your board. <S> Going back the other way is more difficult, mostly because it's not something that is standardised. <A> Tools like Eagle need both a schematic symbol and the mechanical footprint symbol, tied together as a library part, so that when you are creating the schematic the footprint for the part shows up on the board for routing. <S> Having just the footprint doesn't help much, and you are stuck faking the routing and the connections don't show up on the schematic.
If the part uses a more-or-less standard footprint, it's probably feasible to extrapolate a footprint from the model based on footprints for similarly shaped parts.
Sinusoidal wave output from a GPIO I think the question is not particularly MCU/board specific but I will be trying it on an STM32F4-DISC board. I was thinking of how to output a nice sine wave from a normal GPIO output pin, but I was not able to find a proper methodology. I am asking for a method of thinking, a guide rather than the complete solution. Just give me an overview of the process. If possible of course. Thanks in advance. <Q> If you generate a PWM signal and then filter it (sometimes with nothing more than some series resistance and capacitance to ground) you can end up with a reasonable approximation to a sine wave for many purposes. <S> The greater the difference in frequency between the PWM repetition rate and the desired sine wave frequency, the better the result you can achieve. <S> With either a DAC or PWM, you probably want to implement a Direct Digital Synthesizer to produce the amplitude values, at least if you need to vary the frequency or it is not a nice fraction of an available clock. <S> The DDS algorithm is described in this existing EESE answer (describing an audio application, but the technique is general). <S> Most practical table-based synthesizers use a fixed playback sample rate, and a fractional phase increment and accumulator register. <S> Essentially, calculate the phase increment per sample period for your desired output frequency, and pre-multiply by a large power of two, <S> say 1024 or even higher - with an ARM MCU you might as well <S> just multiply it <S> by 2^16. <S> Each cycle add this phase increment to an accumulator register. <S> The accumulator will be wider (have more precision) than the address <S> input into your wave lookup table, so simply ignore the lower bits and use only as many upper bits as your lookup table has address bits. <S> So you might be calculating time with 32-bit accuracy, but only using the upper 16 bits to look up samples in a 65536-element table. 
<S> The result is that while the index time of a given sample is approximate, the cumulative time has many bits of accuracy. <S> This easily gets you sub-Hz resolution, without the need to alter a timer or DAC clock at all. <S> And that's important, because typically the cleanup circuitry in a DAC and following its output is designed for only a small number of sample rate(s). <S> Note that if your lookup table contains a sine or other waveform with symmetry, you can probably shrink its size - for a sine <S> you <S> really only need to store a quarter of a wave, as you can get the other three quadrants by inverting phase or amplitude. <A> You would: 1) create one cycle of appropriate analog values in a buffer; 2) use the values in the buffer to adjust the duty cycle on a PWM output on a GPIO, stepping through the buffer; 3) filter the PWM signal with an RC circuit. Of course, "nice" can mean a lot of things. <S> For the highest quality, use a fast PWM carrier frequency, and make sure your buffer makes use of the full resolution of the PWM duty cycle control. <S> STM32F4s tend to have digital-to-analog converters, which can do the job directly. <S> You might find http://www.ti.com/lit/an/spraa88a/spraa88a.pdf of use <A> The only pins capable of that on the STM32F4 would be the DAC output pins.
Only dedicated DAC outputs could do this directly but for many purposes you could use a timer output pin in compare mode to generate pulse width modulation (PWM) and vary that in a sinusoidal fashion.
Programming STM32 Black Pill with ST-LINK/V2 dongle So I recently bought the STM32F103C8T6 "black pill" dev board along with the ST-LINK/V2 dongle (more probably a Chinese clone). After much struggling I figured out that I need to hold down the dev board's reset button for the dongle to detect the MCU, but then when I connected the dongle to my STM32F429 Discovery board, it detected the F429 without having to hold down reset. I know the connection process has a "Connect with reset" option where you connect a reset pin to the board and the dongle does the hard reset for you, but I monitored the pin and it doesn't do the reset (probably a Chinese flaw?). It's not the end of the world, it's just weird that the 103 needs to be in reset to connect, but the 429 doesn't. The 103's SWD pins aren't assigned other functions so that's not the issue. Any insight would be greatly appreciated. Why do I need to have the 103 in reset and not the 429? <Q> You can verify this while watching the pin with a storage scope triggered on it. <S> As they say, "you get what you pay for". <S> As a result, they won't work in a situation where you need to actually assert the target's actual reset line automatically. <S> Substitute an actual ST-LINK, or use a Discovery or Nucleo board recent enough to be able to drive the reset (the early ones could not do that either). <A> It shouldn't, <S> but it may have to do with the Boot1 and Boot0 <S> settings, and you may be in a weird mode. <S> Try the JP1 jumpers and make sure that they are in the same configuration as on the F429: Boot0 should be low and Boot1 high. <S> Make sure the clone matches the datasheet, and there is nothing else connected to the SWD pins (which there could be on the clone): https://wiki.stm32duino.com/images/5/52/Black_Pill_Schematic.pdf <A> One reason that pressing reset can be needed is when the application disables the SWD pins, to use them as GPIOs. 
<S> While reset is pressed, SWD is always enabled, so you can always connect with SWD while reset is pressed. <S> I have found that the Arduino_STM32 core by "rogerclarkmelbourne" in fact always disables the SWD pins, see this commit . <S> This would mean that if any sketch compiled with that core is on your board, SWD won't work without reset. <S> I've seen the same happen on a RobotDyn black pill board, which seems to be shipped with the STM32duino bootloader and a "hello, world" sketch. <S> The bootloader does not interfere with SWD (so you can attach the debugger in the second or so that the bootloader runs), but the sketch that follows disables SWD (is probably compiled with the Arduino_STM32 core). <S> For completeness, this does not hold for the Arduino_Core_STM32 <S> core by ST <S> , sketches compiled with that core leave <S> SWD enabled normally (not sure if you can actually use the SWD pins as GPIO with that core, then).
A key driver of your problem is likely that the overwhelming majority of the compact little unofficial "ST-LINK" dongles do not actually drive their labeled reset pin, as the pin is connected to a different GPIO than the one the firmware they are running thinks it is.
Multiple switching LED drivers on single PCB I am working on a high power LED board project. It requires 100 LEDs of different types and colors. There will need to be 17 individual strands (they need to be dimmable separately). Max current of any strand will be around 1A. The plan is to power this from a single 24VDC supply. This can be done using a 2 oz copper board, the back will have a full aluminum heat sink, and active cooling is allowed. The board is pretty large, around 1' x 1'. My first thought was to definitely use switching LED drivers, to avoid having to burn off excess power as heat. But the idea of using 17 separate switchers seems like an EMC nightmare that may not have an easy solution. I thought about using a TPS92512 driver, which allows me to drive all of their clocks together from an external crystal to help with EMC. But 17 switchers on one board still doesn't seem feasible to me. So, should I not even consider using switchers, and just figure out the thermals with linear drivers instead? Or is there any other option I may be missing? Edit: Did some quick math. If they were all driven linearly we would have to dissipate around 150 watts total. Since the most a single linear driver can dissipate is around 5 watts, that approach isn't an option. The only other thing I can think of is to use a FET to drive each strand, and use several high power resistors per strand to help spread out the heat across the board. Edit 2: Some needed information. 
LED strands cannot be combined. (Number of LEDs in each strand, total voltage drop across strand, voltage needed to dissipate assuming a 24VDC supply, current required:)
1. 7 LEDs, 20.2V, 3.8V, 1.2A
2. 7 LEDs, 20.2V, 3.8V, 1.2A
3. 2 LEDs, 5.8V, 18.2V, 1.2A
4. 7 LEDs, 20.2V, 3.8V, 1.2A
5. 7 LEDs, 20.2V, 3.8V, 1.2A
6. 2 LEDs, 5.8V, 18.2V, 1.2A
7. 4 LEDs, 13.6V, 10.4V, 0.3A
8. 8 LEDs, 22.4V, 1.6V, 1.8A
9. 4 LEDs, 12.4V, 11.6V, 0.8A
10. 10 LEDs, 21V, 3V, 1.2A
11. 10 LEDs, 21V, 3V, 1.2A
12. 4 LEDs, 8.4V, 15.6V, 1.2A
13. 8 LEDs, 16.8V, 7.2V, 0.76A
14. 4 LEDs, 8.4V, 15.6V, 0.76A
15. 4 LEDs, 8.4V, 15.6V, 0.76A
16. 8 LEDs, 14.8V, 9.2V, 1.62A
17. 4 LEDs, 12.8V, 11.2V, 0.78A
Another thought I had: We can drive strands 1, 2, 4, 5, 7, 8, 9, 10, 11, 13, and 17 directly using the 24VDC input, using a FET and resistors to limit the current. For the rest of the strands we can use one switching driver each, like I originally thought. We would then only need 6 switchers, and would dissipate only around 60W from the strands directly driven via the 24VDC supply. <Q> 17 different switchers might sound like a lot, but it is "just" 24 dB more emissions than having only one. <S> Of course you need to design them well, but it's still not as difficult a task as you might think. <S> Without doing more math, to me the EMC trouble seems easier to solve than the heat trouble you'll have if using linear drivers. <A> Using DCDC buck converters is definitely the way to go. <S> You can find inexpensive types in SOT-23-6. <S> This one might be just the ticket: <S> https://www.diodes.com/assets/Datasheets/AP63200-AP63201-AP63203-AP63205.pdf <S> - it uses spread-spectrum to reduce peak EMI and controls drive to reduce ringing at the inductor. <S> EMI tips: Use shielded inductors for the DCDCs. <S> Consider a common-mode filter between the regulators and the LED array. <S> You could do something as simple as splitting the power supplies onto a separate board and running its cables to the LED array through a common ferrite core. <A> The 17 switchers design should be fine. 
<S> This is purely for driving LEDs, so you don't have any ultra-sensitive components to worry overly about. <S> Besides just following good design and layout practices, there are two simple things that come to mind for me: <S> Firstly, you can run them on a couple of different clocks. <S> That would spread the noise out instead of having one big spike. <S> Two or three should be enough, I think. <S> Secondly, pay close attention to your returns. <S> Route them properly, avoiding sharing copper as much as possible where appropriate. <S> Basically, don't just via them to a plane blindly because it's convenient. <S> I think this is often the area people screw up; I've certainly done so. <S> As you said it's a large board, so there is plenty of space to lay things out well. <A> You have 24V DC coming in, but you can efficiently convert that to other voltages using stepdown switchers. <S> For instance you have a number of strands at 5.8V and 8.4V - for these <S> you could step down to about 9.5V or so and then use simple linear constant-current drivers per strand - or even have two step-down regs at, say, 9.5V and 6.8V or so. <S> This approach will dramatically reduce both emissions and heat dissipation in the linear sources, with minimal complexity. <S> Obviously there are tradeoffs for the number of step-down regulators and power dissipation. <S> I leave it to you to explore the permutations to arrive at an acceptable solution.
If you use the separate-wires approach, you can use the low side of the LED string for current sensing for your LEDs. A well-designed switcher is easily 30 dB below the EMC limits, so there shouldn't really be a problem.
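The "quick math" in the question can be reproduced with a short sketch. The figures are copied from the strand list in the question, and the program simply sums V_drop × I per strand, which is the power a purely linear solution has to burn off.

```c
/* Per-strand voltage to burn off (V) and current (A), from the question. */
struct strand { double v_drop; double amps; };

static const struct strand strands[17] = {
    {3.8, 1.2},  {3.8, 1.2},  {18.2, 1.2}, {3.8, 1.2},   {3.8, 1.2},
    {18.2, 1.2}, {10.4, 0.3}, {1.6, 1.8},  {11.6, 0.8},  {3.0, 1.2},
    {3.0, 1.2},  {15.6, 1.2}, {7.2, 0.76}, {15.6, 0.76}, {15.6, 0.76},
    {9.2, 1.62}, {11.2, 0.78},
};

/* Total power an all-linear solution must dissipate: P = sum(V_drop * I). */
static double linear_dissipation(void)
{
    double total = 0.0;
    for (int i = 0; i < 17; i++)
        total += strands[i].v_drop * strands[i].amps;
    return total;
}
```

This comes out to roughly 156 W, consistent with the "around 150 watts" estimate in the question. For the FET-plus-resistor idea, each strand's ballast resistance would be sized as R = V_drop / I and rated for that same V_drop × I power.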
Fender Telecaster guitar makes loud humming noise, stops when player touches strings/metal parts I have a Fender Telecaster that hums loudly--much more so than other similar guitars. It's a standard Tele with single-coil pickups. The humming noise stops when the player touches the strings or other metal parts of the guitar. The guitar is plugged into a simple solid-state guitar amplifier. The humming noise sounds like 120Hz hum (see update below). I've tried plugging into a different outlet, turning off all the other electrical devices in the room including the lights, etc. Nothing helps. I would like to understand: What is causing the hum? I know it's "mains current" or something like that. I would like to understand what is actually happening. Why does this guitar hum much louder than a Fender Stratocaster, which also has single-coil pickups? Why does touching the strings cause the humming to stop? The strings and other metal parts of the guitar are all connected to the jack and cable sleeve and all comprise the "ground" of the guitar-amplifier circuit. The cable sleeve is in turn connected to the metal amplifier chassis and ultimately to the mains ground. I'm posting this question here because whenever I search for information about this on the web, I find all sorts of answers/explanations from people who don't know much about electronics that all contradict each other. Some explanations I've heard: "Ground loop." Ok, where's the loop? It's just a guitar plugged into an amplifier. This "explanation" is usually followed by advice to "break the loop," try removing/re-installing wires, use a ground-lifting cable or device, or something like that. How can I diagnose a ground loop like an engineer, maybe with a multimeter? "Loose wire." The person providing this answer recommends checking solder connections etc. 
In the same thread, people have pointed to the fact that touching the strings eliminates the hum as both evidence of there being a wire loose ("your body completes the circuit!") and of there definitely not being a wire loose ("your body is being grounded through the guitar"). "Not enough shielding." Maybe? But why does touching the strings cause the humming to stop (after all, the electronics are still unshielded, right?) and couldn't we just do whatever touching the strings does, electrically, and thereby stop the humming? "Your body is an antenna/capacitor plate." This explanation suggests that there is some potential being generated in the player's body that is being transferred to the pickup and that touching the strings grounds the player. This explanation seems promising but is always presented in a hand-wavy manner. Okay, so my body is an antenna, but why does that cause the guitar to hum, and why don't I cause other electronic devices to hum as I move around the room? "Everyone knows Telecasters hum, just get used to it." I'm having trouble accepting that Fender would continue to produce a guitar that hums like mad when they obviously have the technology to mitigate the problem, as evidenced by the behavior of the Stratocaster in the exact same situation, in the same place, plugged into the same amp. I understand that single-coil pickups hum, but the Stratocaster hums like, well, every other Strat, while the Telecaster hum is obnoxiously loud. The question was answered on Music: Practice & Theory Stack Exchange, but the answer there is typical of what I've found online. The answer is "grounding and shielding" and advises checking wires, changing components, etc. Someone brings up the "your body is an antenna" explanation in the comments. There's no explanation of what is actually going on. I'm hoping that by posting this question in EESE, I can get a more satisfactory/scientific answer than the ones I've found so far. 
Update to this question, 21-Sep-2019: I was able to do more investigation of this issue. I checked the grounding and confirmed that there is continuity all the way from the strings to the cable sleeve to the amp chassis and to the ground in the wall power socket. So whatever the problem is, it's not a missing or floating ground. Also, I took the guitar to a different location, with a different amplifier, and in that location, with that amp, the hum was greatly reduced and was more like typical single-coil pickup hum. I checked ground continuity in the new location, and it was fine. So it's some issue that is at least partially environmental (having to do with either the amplifier or the place) but only affects this guitar, or at least affects it more than it does other single-coil guitars like the Strat. Another update: Actually the noise isn't a 60Hz hum; it sounds like the 120Hz "angry insect" hum that is often associated with ground loops. But I can't identify a ground loop here. The guitar is just plugged into one amplifier, which is plugged into one wall outlet. <Q> Normally severe hum means your strings and pickup coil are not grounded. <S> There MUST be a short ground wire in the guitar that connects the body of the pickup coil and string clamp to signal ground. <S> Normally the outer part of the 1/4" phono plug at the guitar is signal ground. <S> If this wire is missing or has come loose it must be repaired. <S> Any color of stranded small gauge wire will do. <S> I have seen and fixed this problem enough to say it is common with certain types of guitars and old hand-me-down guitars. <S> Take off the electronics cover plate and make sure this ground wire is present and is securely soldered at both ends. <S> Also try other amplifier cables and wiggle the 1/4" plugs at the amp and guitar. <S> If this creates a lot of noise and hum consider new cables, but check <S> and/or fix the guitar ground first. 
<S> EDIT: <S> Based on the diagram you provided, the white wire from the jack <S> is signal ground, but the bridge plate (part #21) should be connected to this white wire to ground the strings. <S> Connecting the coils correctly does NOT ground the strings. <S> Use a short piece of stranded wire to ground the bridge plate. <S> It would also be prudent to check this white wire from the jack to make sure it has solid connections at both ends. <S> If it comes loose the guitar will have no signal ground! <S> NOTE: <S> Based on the diagrams there is no green ground wire from the pickup coil metal case and string bracket to the white signal ground. <S> Add this wire and the hum should go away. <S> You should check for proper ground polarity at the amplifier. <S> Some amps do have a ground polarity switch or ground phase control. <S> Admittedly these controls are on expensive Peavey and other amps. <S> Seriously, your idea to try another amp points to your amp as being defective. <S> Suggest you replace it with a better model if a service tech cannot find an obvious problem. <A> The noise is picked up by circuits that are not heavily shielded. <S> The amp input doesn't load it down, because amp inputs are normally high-impedance to avoid treble loss due to the inductance of the pickups. <S> You become a part of the shielding when you touch the strings. <S> They are internally connected to the signal ground. <S> Test it. <S> Try to replace yourself with a big piece of metal foil (electrically speaking). <S> ADDED due to the comments: <S> It's possible that the strings in your particular Telecaster are NOT connected to the signal ground, so it differs from my Telecaster. <S> There's a wire from the signal GND to the bridge and that way to the strings, too. <S> It's also shown in available articles on Telecaster wiring. 
<S> But the strings are still connected to the other metal parts of the guitar (apart from the signal circuit), and you can be grounded via some other route, for example if you have leather shoes and stand on a concrete floor. <S> Then you bring the needed grounding to the metal parts and the noise level drops radically. <S> I assume that neither the placement nor the position of the guitar needs to be changed; it's only the touching that makes the difference. <S> Guitar-placement-dependent hum is picked up by the pickup coils from surrounding magnetic fields - mains transformers in the equipment spread it. <S> High-impedance signal lines capacitively pick up the electric field of the surrounding mains cables and lights. <S> That is helped radically if the circuits are inside, or even in the near proximity of, a grounded shield. <A> On reading your question, the replies and comments, I do not see where you have definitively identified the source of the hum you are hearing. <S> There are two major ways the hum gets into these circuits: <S> conducted and radiated. <S> The source of conducted hum in electric guitars is often the amplifier. <S> The source of radiated hum in electric guitars is something in the environment generating a fairly powerful magnetic field. <S> Often overhead mains power lines. <S> To test for a radiated source, use a battery-powered portable amp like a Pignose Legendary 7-100. <S> Using an amp without any connection to mains will isolate the guitar from conducted hum. <S> If you still hear hum, go somewhere well away from any power lines. <S> In your case conducted hum could be easier to fix. <S> Next is failed capacitors in the pickup circuit. <S> There are two possible circuits used in the Telecaster; see these articles for good descriptions: Factory Telecaster Wirings, Pt. 1 and Factory Telecaster Wirings, Pt. 2. <S> Note that there is usually a ground wire that connects the body of the tone and volume pots. 
<S> Not every Telecaster seems to have this connection. <S> The least likely source of hum is bad pickups. <S> These are all but impossible to repair, but if you have disassembled your guitar this far you should check the solder connections of the hook-up wire to the magnet wire on the pickups. <S> At this point it is easier to just replace the pickups, <S> but then this is also a tricky process. <S> This can involve putting shims under the bridge or neck pickups.
The usual cause is poor quality soldering of the components in the guitar.
What is the simplest instruction set that has a C++/C compiler to write an emulator for? I'm looking into writing a little software emulator that emulates/runs instructions. The easiest would be to invent my own instruction set, but I thought it would be more fun if I write an emulator for an instruction set that already has a C++/C compiler. What is the easiest instruction set/architecture that has a (hopefully stable) C++ and/or C compiler? By easiest, I mean the least number of instructions. <Q> "Easiest would be to invent my own instruction set" - uh, ok, we might come from very different experiences here… <S> "By easiest I mean the least number of instructions." <S> That's not necessarily the easiest to implement. <S> Often, having more instructions is a good complexity tradeoff compared to having more complex instructions. <S> "So my question is, what is the easiest instruction set/architecture that has a (hopefully stable) C++ <S> and/or C compiler?" <S> This sounds like no job for C++, so let's concentrate on C. <S> (If you don't understand the difference having the C++ RAII paradigm makes, you might not be in the optimum position to design your own ISA.) <S> Puh, some microcontroller instruction set that is early, but not too early (because too early would imply being designed around the limitations of digital logic of that time, like e.g. the 8051). <S> AVR might be a good choice, though I personally don't like it too much. <S> I hear the Zilog Z80 is easy to implement <S> (there are really several Z80 implementations out there), but it's pretty ancient, and not very comfortable (being from the mid-70s). <S> If you really just want a small core to control what your system is doing, why not pick one of the many processor core designs that are out there? <S> For example, RISC-V is a (fairly complex) instruction set architecture, with mature compilers, and many open source implementations. <S> For a minimal FPGA core, picoRV32 would probably be the core of choice. 
<S> And on a computer, you'd just run QEMU. <A> You should take a look at the PIC microcontroller family. <S> The instruction set is limited to 35 different instructions, while the controller is actually still used. <S> Look at the datasheet at page 228: PIC16F datasheet <S> The controller uses 8 bits and is also available with less periphery, but that does not change anything for the instruction set. <A> With a judicious choice for the single instruction and given infinite resources, an OISC is capable of being a universal computer in the same manner as traditional computers that have multiple instructions. <S> OISCs have been recommended as aids in teaching computer architecture and have been used as computational models in structural computing research. <S> Whether a compiler exists, I do not know. <S> But I suspect some unlucky student somewhere has probably been assigned the task of writing one. <A> Donald Knuth's MMIX architecture has a 64-bit RISC instruction set with 256 opcodes and existing C compilers (GCC, actively maintained) and emulators (mixvm, etc.). <A> "I hope for something with like 50 instructions. <S> Also, 32 bit and C++" - The "Beta" architecture used in MIT's 6.004 core track class is a 32-bit RISC design often referred to as a simplification of the DEC Alpha. <S> It's been implemented in many ways - personally in an FPGA - and at one time there was an old version of GCC for it, though that may at this point be challenging to dig up if no one is continuing to work with it. 
<S> One example of the architecture documentation is here; the full link is retained because which years' versions of the course are published online changes from time to time, and it can be worth looking at several, as different information may be included: https://ocw.mit.edu/courses/electrical-engineering-and-computer-science/6-004-computation-structures-spring-2009/labs/ <A> A simple instruction set of only 8 instructions used for teaching is known as the MU0 instruction set. <S> It originated at Manchester University and is used for teaching both compiler writing and hardware design. <S> There are several online documents describing it, including class notes at Manchester University. <S> Bibliography: <S> http://digitalbyte.weebly.com/processor-design/the-mu0-processor-instructions (archived link) <S> http://www.cs.man.ac.uk/~pjj/cs1001/arch/node1.html (archived link) <A> My recommendation would be the LC-3 ("Little Computer 3"), which was specifically designed for ECE students to be able to implement a basic CPU in hardware. <S> It's significantly cleaner and easier to emulate than any "real-world" architecture, such as x86's absolute mess of instructions. <S> A C compiler is available for it, though without floating-point support (since the LC-3 doesn't have an FPU). <S> If you want something that's actively used in the real world, try MIPS-I. <S> MIPS is still widely used in embedded systems, and is best known for being used in the Nintendo 64 and the PlayStation. <S> The standard emulator for it is SPIM. <S> (And of course, as other answers have mentioned, Knuth's MMIX was made famous by The Art of Computer Programming, though unlike the others, to the best of my knowledge, it's never had a true hardware implementation.) <A> This is not a completely serious answer, but it might suit your case if you want to keep the number of instructions to implement in your emulator low. <S> In fact the x86 mov instruction is Turing complete. 
<S> And there is even a C compiler for it.
You need a One Instruction Set Computer (OISC) A one instruction set computer (OISC), sometimes called an ultimate reduced instruction set computer (URISC), is an abstract machine that uses only one instruction – obviating the need for a machine language opcode.
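To make the OISC idea concrete, here is a minimal emulator in C for subleq ("subtract and branch if less than or equal to zero"), the canonical single instruction. The halting convention (a negative branch target stops the machine) is a common one but an assumption here, and the little demo program is purely illustrative:

```c
/* subleq a b c:  mem[b] -= mem[a];  if (mem[b] <= 0) jump to c, else fall
   through to the next instruction.  A negative target halts the machine. */
static void run_subleq(long *mem)
{
    long pc = 0;
    while (pc >= 0) {
        long a = mem[pc], b = mem[pc + 1], c = mem[pc + 2];
        mem[b] -= mem[a];
        pc = (mem[b] <= 0) ? c : pc + 3;
    }
}

/* Demo program: add the value at address 9 into address 11, using address 10
   as a zeroed temporary (the classic "negate into temp, subtract temp" idiom). */
static long demo[12] = {
    9, 10, 3,     /* mem[10] -= mem[9]   -> temp becomes -a      */
    10, 11, 6,    /* mem[11] -= mem[10]  -> b becomes b + a      */
    10, 10, -1,   /* mem[10] -= mem[10]  -> temp = 0, then halt  */
    5, 0, 7,      /* data: a = 5, temp = 0, b = 7                */
};
```

After run_subleq(demo), demo[11] holds 12. Everything a conventional ISA does (moves, adds, jumps, comparisons) compiles down to sequences like this, which is why a subleq back end for a C compiler is possible, if painful.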
Heatsink on underside of PCB I have prototyped some DC-DC converter modules, and they work well. However due to size, I don't have a huge amount of copper pour for them to dissipate their heat. I have used 2oz copper but the bottom layer copper pours get really hot at higher current. Even through the solder mask! I experimented with adding a heatsink to this bottom side of the PCB with some thermal grease and the results were surprisingly good. The heatsinks get hot under heavy load. The buck-boost-inverter went from maxing out at about 1.5A to being stable at 2.5A! This is my current setup: However I can't help but think that I can improve this. I am thinking of removing the bottom solder mask around the heatsink area for better heat transfer. Also I want to use a Sil-pad instead of thermal grease for easier assembly, and because I don't want to risk shorting different copper pours when the solder mask is gone. Like this: So my questions are: Is this a good way to do this? (given my limitations) is there anything that could affect the long term life of my PCBs with this setup? Are there any other suggestions people have? Thanks! <Q> So, if you want to avoid the risk of shorting copper pours, stick with the solder mask. <S> It's probably lower in thermal resistance than the difference between a pad and a thin layer of heat conducting paste. <S> Also, how hot does the top side of your IC package get? <S> Maybe a small, second heatsink stuck to the top of the package is helpful, too. <S> Also, if you mount your board upside down, convection will greatly improve the cooling efficiency of your fins. <A> I think it’s a fine idea. <S> About 2/3 of an IC’s power gets shed in the PCB. <S> The reason you don’t see this approach more often is that many designers want to use the underside of the board for bypass capacitors, and they in general have a mechanical bias for tall components on the top side. 
<S> The challenge you will have with fully exposed copper flood is in assembly: you won’t be able to use simple wave soldering without a blocking plate if you need that, and it’s also an issue for components that are applied by SMT. <S> Maybe split the difference and use a stipple pattern of exposed copper in the mask, and ensure any traces that go from the exposed area to component pads have soldermask dams. <S> More stuff: <S> Phase-change material will give better performance than the silicone pad, at increased cost. <S> Thermal compound also. <S> Using epoxy - that’s the lazy way, assembly people hate it. <S> Work out how to mount the heatsink using push-pins or a spring clip. <A> This link contains useful information, also this video. <S> Typical solder mask has 20-25µm thickness and 0.2 W/m·K thermal conductivity. <S> This means a 1cm² area of solder mask will have a thermal resistance of 1°C/W. <S> This can be a problem... or not, that depends on your application and how much power is dissipated. <S> For a few watts, an extra 1°C/W doesn't matter, just do the calculation. <S> For a larger contact area, thermal resistance drops accordingly. <S> However, soldermask has another very important role. <S> If you use immersion gold, large copper areas without soldermask may result in a thick gold layer, and your PCB fab will be asking who's gonna pay for the extra gold. <S> If you use HASL, solder thickness may not be even, which will require a thicker interface material to even out the bumps, and increase thermal resistance too. <S> And of course, wave soldering would result in a mess too. <S> So... soldermask is nice to have. <S> Anodized aluminium is insulated by the oxide layer, but it can get scratched off. <S> So, a bare heat sink on top of vias with just conductive grease between them would work... in theory... still a bad idea. <S> It's better to tent the vias and protect them with soldermask.
<S> Thermal grease is better than silpads because it is thinner. <S> However, silpads are insulating and thermal grease is not. <S> Why not simply check the datasheet of your silpad and calculate the thermal resistance versus contact area, and check if it works? <S> Another option is an SMD heat sink. <S> Pros: thermal conduction path is 100% metal. <S> Cons: thermal path has to go horizontally through the copper layer, which isn't that efficient. <S> Anyway. <S> If your IC only dissipates a few watts, keep the soldermask or use an SMD heat sink.
There could even be a little drop of solder left over on the edge of a via, and then your heat sink won't be flush, and if you try to remove the bump by hand, it'll make a mess.
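The 1°C/W solder-mask figure quoted above is easy to check with the flat-layer formula R = t/(k·A); a quick Python sketch, using the approximate material values from the answer:

```python
# Thermal resistance of a flat layer: R = thickness / (conductivity * area).
def layer_r_theta(thickness_m, k_w_per_m_k, area_m2):
    return thickness_m / (k_w_per_m_k * area_m2)

# 20 um of solder mask, k ~ 0.2 W/m·K, 1 cm^2 contact area:
r_mask = layer_r_theta(20e-6, 0.2, 1e-4)   # ~1.0 °C/W, as quoted
# Extra temperature rise when dissipating a few watts through it is small:
dt = 3.0 * r_mask                          # ~3 °C at 3 W
```

The same formula shows why doubling the contact area halves the penalty, which is the "for a larger contact area, thermal resistance drops accordingly" point above.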
Reverse polarity protection for battery powered application I'm designing a battery-powered application and need reverse polarity protection. My system specs: 4.5V (3x AA batteries) in and the system running on 3.3V. I'm using an MCP1711T-33I/OT LDO to convert 4.5V to 3.3V. My system will take about 50 milliamps maximum. Therefore I obviously can't use any diodes, since the voltage drop and power waste would be too big. I read some threads and people tend to suggest a P-channel MOSFET. I was looking for low Rds(on) MOSFETs and all their packages seem so huge. Like this one: http://www.farnell.com/datasheets/2049687.pdf Size isn't too big a problem though (a SOT-23 package would be perfect). Can you suggest anything other than a P-channel MOSFET? If not, can you suggest some decent P-channel MOSFETs for my needs? Thanks in advance. <Q> With RDS of say 1 ohm you would have a 50mV drop. <S> This drop reduces the amount of energy you can drain from the battery a little, before the voltage drops too low, but otherwise it doesn't affect your system. <S> Power loss in the RDS doesn't matter because you use a linear regulator so total power loss doesn't change. <A> Even ~500mOhm RDSon is plenty low for this. <S> You are searching for too low an RDSon for your application. <S> I use the IRLML6402 (65mOhm) or IRLML6302 (600mOhm) because that's what I like. <S> It makes little difference at 50mA. <S> You'll notice a 30mV drop as much as you will notice a 3mV drop. <A> I'm using an MCP1711T-33I/OT LDO to convert 4.5V to 3.3V. <S> ... <S> Therefore I can't use any diodes since voltage drop and power waste will be too big. <S> If you are really concerned about voltage drop and power waste, I think you'd better focus on the MCP1711. <S> It is dissipating the difference between input voltage and output voltage, so 1.2 V * 50 mA = 60 mW at most. <S> Any RDS(ON) up to 1 Ω wastes at most (50 mA)² * 1 Ω = 2.5 mW. <A> I have used the Si2305, which works great. <S> Check it out.
<S> It dropped <0.2 V @ 150 mA
I'd suggest replacing the MCP1711 with a high-efficiency step-down converter (next to using the P-MOSFETs suggested by others for polarity protection). So I'd say you can easily find a good enough MOSFET even in a SOT-23 package.
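The dissipation comparison made in the answers above is simple arithmetic; a quick Python sketch using the numbers from the question (4.5V in, 3.3V out, 50mA load):

```python
# Linear regulator dissipation: P = (Vin - Vout) * Iload.
def ldo_dissipation_w(v_in, v_out, i_load):
    return (v_in - v_out) * i_load

# Resistive (pass-MOSFET) loss: P = I^2 * Rds(on).
def fet_loss_w(i_load, rds_on):
    return i_load ** 2 * rds_on

p_ldo = ldo_dissipation_w(4.5, 3.3, 0.050)   # 60 mW burned in the LDO
p_fet = fet_loss_w(0.050, 1.0)               # 2.5 mW even with a 1 ohm FET
```

The protection FET wastes well over an order of magnitude less than the LDO itself, which is why hunting for milliohm-class Rds(on) buys nothing at 50mA.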
Is it intuitively correct to think of voltage as a “propellant” force? From my understanding voltage is an electromagnetic force that creates current by exerting a force on an atom that causes that atom to transfer one of its valence electrons to the neighboring atom. The atom that lost a valence electron becomes a positively charged ion and then “steals” a valence electron from the succeeding atom to restore charge balance. This effect propagates throughout the circuit. Voltage and current are directly related: the larger the voltage/emf, the faster one atom’s valence electron jumps to the next = larger current/electron propagation. Assuming this is in essence all accurate: say we have a simple series circuit with a 5V supply and 2 equal resistors. The voltage drops throughout the circuit from 5 -> 0, but as we know current is the same across a series circuit. So intuitively, speaking about series circuits only, is it valid to think of voltage as a “propellant” force or only as a force that “gets the electrons moving” initially and, depending on the amount of force, sets the speed at which they “move”? As an example of this, if we have a person who slingshots a rock in space (space so that the velocity of the rock is constant), voltage would be the force exerted on the rock initially that determines how fast the rock moves through space, but after the rock left the slingshot’s pouch, that force would have no effect on the rock after that point. Or is voltage more of a “driving force”? An example of what I mean: if we have a person who’s pushing a boulder on a flat plane, voltage would be the force that’s exerted on the boulder. If the person stopped pushing, the boulder/electrons would stop moving/propagating.
I ask this because voltage drops across the circuit from 5V to 0V but current remains the same, so it seems as if voltage’s only responsibility is to start and set the speed of the current in the beginning of the circuit, but after that current acts independently of that voltage. <Q> It would be incorrect to think of voltage as a propellant; propellants accelerate and most of the time diffuse. <S> Propellants are gasses most of the time. <S> Voltage sources are like pumps. <S> Current is like flow. <S> Resistors are like restrictions. <S> That is the best physical analogy of a circuit. <A> Driving force is more accurate; there's no real inertia to the electrons, so to keep them moving there always has to be something down the line that's pushing. <S> Like you said there's the valence electrons shifting which drives the electrons at the end of the circuit to all keep moving onward. <S> The more voltage potential, the bigger the obstacles the little electrons can overcome!! <S> Regarding the relationship between current and voltage, it sounds like you have a chicken-or-the-egg issue with your understanding. <S> Current is a physical property where a circuit will have a defined number of electrons flowing through it per second (Amps = Coulombs/second). <S> Based on the constant current flow through the circuit, each node between resistances will end up at a certain voltage. <S> For example, if you have two different batteries and attach them to an arbitrary circuit, the higher voltage battery will push harder and drive more electrons through the circuit. <S> Using the current as a starting point, we can then calculate the voltage drop over each resistor and determine the node voltages. <S> Circuit analysis starts with looking at the circuit from the perspective of the voltage source, and then working out from there.
<S> In the circuit below the 12V source has no idea how many resistors there are <S> and it doesn't care, all it sees is that it has to push electrons through 12 \$\Omega\$ of stuff to get the electrons to go through. <S> Using Ohm's Law the current will be \$I=\frac{V}{R}\$, so I = 12/12 = 1A. <S> Since we know there will be 1A flowing through each resistor, we can find that a 1A current will cause a 4V drop across each resistor because of Ohm's Law: \$V=I*R\$, so V = 1*4 = 4V. <S> Intuitively this makes sense because each resistor is the same so the voltage will be evenly distributed through the circuit; using those same steps you can analyze any resistor circuit as long as you know how to calculate equivalent resistance for parallel and series combinations of resistors. <A> Electromotive force may be the term you seek: https://en.m.wikipedia.org/wiki/Electromotive_force
It is correct to think of voltage as a pressure and electrons as a fluid.
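The worked series-circuit example above reduces to two one-line formulas; a small Python sketch:

```python
# Series circuit: one current everywhere, voltage divides by resistance.
def series_current(v_source, resistances):
    return v_source / sum(resistances)

def series_drops(v_source, resistances):
    i = series_current(v_source, resistances)
    return [i * r for r in resistances]

series_current(12, [4, 4, 4])   # 1.0 A through every resistor
series_drops(12, [4, 4, 4])     # [4.0, 4.0, 4.0] V, summing back to 12 V
```

Note the drops always sum back to the source voltage (Kirchhoff's voltage law), regardless of how the resistances are distributed.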
Electrical Standards - Earth Wire at Low Voltages? I got a question about safety standards: I'm designing a consumer device and I'm not sure whether I need an earth wire. I have an AC-DC converter plug that outputs 5 V and up to a maximum of 1 A current. This plug goes from NEMA 5-15P to a barrel plug. The main device is made out of (exposed) metal. At this level of voltage/current do safety standards require that I ground the enclosure to earth ground? <Q> One of those parameters is isolation between primary (AC) and secondary (DC). <S> If the secondary is properly isolated, and has acceptable leakage that is below the limits allowed in the applicable standards, then it need not be earth connected. <S> This is a complicated topic and I suggest you consult a compliance engineering specialist to be certain. <A> If you bring AC mains into your product, then yes, you need a ground. <S> If hot or neutral touched the chassis, then it would need to exit through the ground. <S> By the sound of it you are using an AC-DC converter, so this would minimize testing. <S> A good way of doing this is by using an external AC-DC converter. <S> We did this on a product and avoided many of the safety tests during ETL regulatory testing. <S> My advice to you would be to get a regulatory consultant, they will save you money in the end. <S> Another note, if you are going to get your product certified it could cost anywhere from 2k-8k USD depending on the testing you need (another reason to get a consultant). <A>
The AC-DC section needs to meet applicable safety standards. If that 5v power supply with a barrel plug that itself has exposed metal meets electrical safety standards, then your metal-cased appliance will also be safe.
16 A appliance into 30 A outlet I am hoping someone with electrical system knowledge can help clear something up for me. I am considering purchasing an appliance that requires a 230V, 16A outlet. I asked an electrician about wiring on the same male plug as my dryer, which is 220V 30A, so I could simply use the same outlet, and he said I couldn't do so because I would need a specific 220V 15 amp outlet. But from looking online and reading about amps, it seems that as long as I'm not EXCEEDING the amps, then it shouldn't matter. I am wondering if using this outlet will cause any risk of fire or electrocution because the amperage is higher? Also if it could damage the appliance since it's quite expensive. Any input is really appreciated because I know nothing about this type of thing. Thank you EDIT: I am in Canada. The product is Speidel product #22200 (pome fruit grinder). Note that in the manual it says it's 50 Hz but they now have a version for North America that is 60. I couldn't upload a picture of my outlet but will try again. <Q> The problem is that the outlet is capable of delivering 30A without tripping. <S> The wiring in your 16A device may not be equipped to handle a 30A surge, and could therefore sustain damage because the electrical service can deliver more amperage in a failure mode. <A> UPDATED AGAIN <S> That is a serious machine. <S> It is not clear from the available information what type of motor it is, but it is possibly a 3 HP motor. <S> It may work OK on a 20A breaker, but it is perfectly reasonable to put it on a 30A breaker. <S> Your existing outlet will work fine. <S> That means it is a NEMA 14-30 type receptacle. <S> So all you need to do is go to the hardware store, <S> tell them you need a NEMA 14-30 plug (or just find it on the shelf yourself). <S> On your cord coming out of the grinder, one of the three wires will be Ground (probably green insulation or green with yellow stripe).
<S> Make sure you connect ground correctly to the ground prong on the plug. <S> The other two wires will probably be some European colors. <S> Maybe blue and brown. <S> These are both "hot" and should connect to the two hot prongs on the plug. <S> Order doesn't matter. <S> The neutral prong on the plug will not be connected to any wire since you don't need neutral, and the grinder doesn't have a neutral wire. <S> This will work fine as long as you do it like I said. <S> It is just the tiniest bit "bush league" but it is better than replacing the receptacle for your dryer. <S> If you really want, you can replace the dryer receptacle with a NEMA 6-30 receptacle and put a NEMA 6-30 plug on your grinder. <S> But then you won't be able to plug in your dryer anymore. <S> Image credit: <S> Orion Lawlor license: CC BY-SA 3.0 <A> Practically, you can't run a 15A appliance from a 16A outlet. <S> The outlet and mains wire will heat up over time due to heavy current flow. <S> It's best to provide some headroom. <A> I am not familiar with CSA code, but I have some familiarity with NEC. <S> You should first verify that the appliance has a CSA label. <S> If the appliance does not have internal motor overload protection, code probably specifies a maximum branch circuit breaker rating. <S> The circuit breaker for the branch circuit can provide all three protections under the circumstances spelled out by the electrical code. <S> One of those conditions may be that the CSA listing was obtained with that provision. <S> If that is the case, it should be spelled out in the manual and may require some kind of marking on the product. <S> Note also that the lack of a cord may mean that the product is intended to be direct connected. <S> If that is the case, you may be required to have it installed by a licensed electrician. <S> If the electrician that you spoke to said that you need a 15 amp outlet he is mistaken or did not have all of the required information.
<S> To get this right, you probably need to have an electrician look at everything, not just give an opinion based on your description.
Motorized products need three types of protection: short circuit protection, overload protection and ground fault protection. Drawing 15 A of current from a 30 A outlet is totally safe. You need to have an electrician look at the manual for North America and markings on the product and preferably also the actual product. Or worse, if your power cable isn't up to the task of managing the heat of a sustained surge, it could cause a fire hazard. This is what I would do myself. The circuit can probably be legitimately converted to a 20 A circuit by replacing the circuit breaker and receptacle. You mention that your dryer outlet has 4 prongs (please let me know if that is wrong). The Canadian code is similar to NEC, but not identical.
Strange LED behavior: Why is there a voltage over the LED with only one wire connected to it? I am encountering a strange problem when I measure the voltage across an LED. Please see below picture: As you can see, I only connected 1 wire ("-") of adapter and used a multimeter to measure the voltage drop across the LED and I found there is ~-2V on the LED! There is no loop in this circuit, so it should have no volts drop across the LED. I have used other multimeters, but I still measure that negative voltage so it's not a multimeter problem. I'm really sure it's an LED problem, but I have never seen this behavior before. I'm also not familiar with the manufacturing of LEDs, so I don't know what's happening on this LED. This LED correctly lights with a forward voltage and does not light with reverse voltage. However, the important issue is when I use this LED as a test fixture, it causes the reference voltage (GND) to shift so the output voltage is different. My question is: Have you seen this behavior on an LED? What is the possible problem on this simple LED? <Q> A LED is basically a photodiode. <S> If you shine light with the corresponding wavelength onto the LED, it will generate a voltage over the pn junction. <S> Try to cover the LED with one hand and check if the voltage output stays the same. <A> There is no problem with the LED, this is normal behavior. <S> LEDs produce a voltage when struck by incident light, much like a photodiode. <S> For reference I just pulled out a 638nm (red) 3mm LED and measured it with my Fluke 189. <S> It showed 0.3V. <S> Moving the LED to underneath a spotlight <S> and it showed 1.7V. Different LEDs may produce different voltages with the same amount of incident light. <S> Also, a multimeter with higher impedance will allow the LED to build up a higher voltage. <A> It works both ways As a supplemental answer to this and this excellent answer, the reverse process is also possible. 
<S> Direct bandgap photodiodes used in photovoltaic mode (it's the photovoltaic effect you are seeing here) can also luminesce or glow with recombination light when excess e-h pairs are produced. <S> This can be done with an applied electrical current or even an ion beam, but as explained in the excellent answer to Do III-V based photovoltaics “glow” (photo-luminesce) when illuminated but not loaded? <S> the recombination light can be induced by a photocurrent within the junction, which itself is produced by incident sunlight.
The stronger the light, the higher the voltage. A red LED has a bandgap of ~2V; this is probably what you are seeing here.
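As a sanity check on the ~2V figure: the open-circuit voltage of an illuminated LED is bounded by its bandgap, and the photon energy at a given wavelength follows from E = hc/λ. A quick Python sketch:

```python
# Photon energy in eV from wavelength in nm: E = hc/lambda ≈ 1240/lambda(nm).
def photon_energy_ev(wavelength_nm):
    return 1239.84 / wavelength_nm  # hc expressed in eV·nm

photon_energy_ev(638)   # ~1.94 eV for the 638 nm red LED from the answer
```

That ~1.9 eV matches both the red LED's forward voltage and the roughly 1.7-2V readings reported above, which is exactly what you'd expect if the LED is acting as a photovoltaic cell.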
Pass USB 3.0 connection through D-SUB connector I need to pass a USB cable through a vacuum chamber wall, for which we have only D-SUB passthrough flanges available. So I cut a USB cable in half and soldered a D-SUB connector to each half. For USB 2.0 connections, this works without any issues, but I've had trouble getting a USB 3.0 connection to work. Specifically, the computer emits the connect/disconnect sound repeatedly every few seconds when the cable is plugged in. The only workaround is to push the connector in slowly, until the device is recognised, essentially forcing a USB 2.0 connection. I assume this is due to insufficient shielding to get a USB 3.0 link? The individual connections seem to be fine, with <3Ω resistance for each one and no shorts. Below is a diagram of how I routed the cables through the connector: As shown in the figure, the shield is connected to the shell of the connector to connect the shield on both sides together. I tried to keep the amount of destroyed shielding low, with around 3cm on either side removed. What is the most likely cause for this failure, and how to avoid it in the future, if possible? <Q> I assume this is due to insufficient shielding to get a USB 3.0 link? <S> It's more likely that, by separating the SuperSpeed conductor pairs, you simply introduce an impedance break so significant that communication can't properly take place. <S> What is the most likely cause for this failure, and how to avoid it in the future, if possible? <S> You probably won't be able to solve this situation using your current D-SUB connectors at all. <S> You'll need to replace these connectors with ones that at least approximately retain the nominal 90Ω impedance of USB3 SS conductor pairs. <S> It's pretty likely the easiest way to achieve that is through USB3 connectors themselves.
<A> There are two problems here: 1) <S> A USB cable is hot swappable, meaning the power pins are engaged first (they are longer than the data pins). <S> A D-sub connector is not built for hot plugging; the pins are all the same length and some engage first depending on the angle that the D-sub is plugged in, which is potentially causing problems. <S> 2) <S> The impedance of the differential lines (as previously mentioned) needs to be 90Ω, otherwise the fast differential signals will reflect and attenuate if not properly matched. <S> The connector also needs to be impedance controlled to pass fast signals through it. <S> The USB 3.0 spec uses low-frequency signaling (10-50 MHz) to initiate a link with the other side. <S> SFP+ transceivers usually don’t cover this range, at least not in their datasheets (it’s more like 300-2500 MHz or so). <S> So this vital signal may not reach the other side properly, and hence the link establishment may fail. <S> Source: <S> http://billauer.co.il/blog/2015/12/usb-superspeed-parallel/ <S> The problem with your D-sub is it probably has a capacitance/inductance similar to the ones shown below, and is not fast enough to pass the fast 2.5GHz signaling of USB 3.0. Source: https://www.farnell.com/datasheets/66098.pdf <S> So what can you do about it? <S> If you don't have to have USB 3.0, you may want to try using only the GND, Vcc, D+ and D- lines. <S> If hot plugging of the D-sub can be avoided that might be best (plug in the connector that goes to the hub). <S> They also make UHV compatible USB 3.0 port feedthroughs if you want to drill another hole in your plate or find another way to pipe it in. <A> The problem with the DB-9 connector is that it is not "impedance controlled" and has no shield between signal pairs (you need to use differential pairs through the connector and shield them from other diff pairs). <S> USB 3.0 operates at 2.5 GHz signal rate, and "3 cm of loose wire" is a kill for it.
<S> Impedance mismatch creates a multitude of signal reflections (causing so-called "inter-symbol interference"), and significant cross-talk between Rx and Tx pairs will kill signal coherency, causing massive link drops. <S> USB 3.x specifications have very stringent requirements for impedance and near- and far-end crosstalk over the cable. <S> To have a USB 3.x connection with your internal camera, you need either to do a heavy search for vacuum-grade USB connectors (if they exist, they might), or use RF-grade 50-Ω coaxial multi-pin feed-through connectors, space-grade. <S> There are twin-axial Sub-D size connectors; you need at least two twinaxial channels, similar to this one: <S> You also could use connectors used for Ethernet connectivity, if you can find them in a vacuum-grade version, something like this <S> In the worst case you can use four feed-through SMA-type connectors, and make a USB-to-SMA adapter, similar to what USB-IF uses in interconnect testing and cable certification, something like this: Some sources for vacuum-grade feed-through solutions are Pave Technology , MDC Vacuum Products , and likely many others. <S> In any case I see no chance to have a reliable USB SuperSpeed channel using DB-9 connectors, and you will need a serious rework on your chamber.
As said, you can't just separate the conductors of USB3 arbitrarily: the signal is carried as electromagnetic field between the conductors; because the signal frequencies of USB3 are solidly within the microwave range, your splitting of conductor pair essentially means you break the transport of energy.
Is it possible to confine radio frequency signal? I would like to build a confined enclosure (like a 50cm cube box) to limit and perform RFID scanning only within the box, not interfered with by the RFID tags lying outside the box. My intention is to be able to do a quick scan (stock take) on everything that is placed in the box; thus, the scanning distance of the RFID tags should cover the entire interior volumetric space of the box while the RFID reader will be placed in the box. What is the material that I should use for the interior/exterior wall of the box? Any experience or advice in constructing such a box is also greatly appreciated too. <Q> The easiest way to do this is with a conductive box with no holes (also known as apertures). <S> You need box walls several times thicker than the skin depth at your RFID frequency. <S> If using low frequency RFID which runs at 120–150 kHz, for aluminum or copper, you'll need at least 1mm of material for adequate blocking. <S> If the RFID uses one of the faster bands, skin depth becomes less of a concern, as 0.1mm would provide sufficient blocking above 1GHz. <A> The simple answer is to try an experiment. <S> You will need a cheap metal walled box, so steel should be a good choice. <S> See whether it reads the tag. <S> If so, move it outside the box and try again. <A> I suggest the tags be 5cm away from the walls. <S> You must experiment to learn what is reliable.
While the thickness of steel is probably not too important at high frequencies, this may be more problematic with low frequency RFID of tens or hundreds of kHz. Ground the box, put the reader inside along with an RFID tag. Keep the tags and the reader away from the Faraday cage walls, and you should have good results.
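The wall-thickness reasoning above can be made concrete with the standard skin-depth formula δ = 1/√(π·f·µ·σ); a quick Python sketch (the conductivity used is the textbook ~5.8×10⁷ S/m for copper):

```python
import math

# Skin depth of a good conductor: delta = 1 / sqrt(pi * f * mu * sigma).
def skin_depth_m(freq_hz, sigma_s_per_m, mu_r=1.0):
    mu = mu_r * 4e-7 * math.pi        # mu_0 = 4*pi*1e-7 H/m
    return 1.0 / math.sqrt(math.pi * freq_hz * mu * sigma_s_per_m)

d_lf = skin_depth_m(125e3, 5.8e7)     # copper at 125 kHz: ~0.19 mm
d_uhf = skin_depth_m(900e6, 5.8e7)    # copper at 900 MHz: ~2 um
```

At 125 kHz a 1 mm copper wall is roughly five skin depths, which attenuates the field by about e⁻⁵ (~99%); at UHF even foil-thin walls are many skin depths deep, matching the "0.1mm is sufficient above 1GHz" remark.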
Reflect IR beam off reflector instead of emitting straight to TSOP receiver I have a working set-up to detect beam breaks using an IR LED, modulated at 36kHz, and a TSOP receiver connected to a comparator. It outputs the count via Arduino. The range is about the width of a door, say two meters maximum. The transmitter and receiver are on separate breadboards. This has its disadvantages, such as two sets of batteries. Is there a way to reflect the IR LED off a reflector and have the TSOP receiver on the same breadboard as the LED? Perhaps behind it? The German term 'Lichtschranke' (light barrier) seems to be what I vaguely have in mind, but I don't speak German and can't find it in English. Does anyone have suggestions on how to modify my current set-up or can anyone point me to some schematics? I am guessing something will need to be done in order to maintain a straight or angled beam and not have it reflected back all over the place. How can I mount an IR emitter and receiver on the same circuit board to detect a reflection instead of a straight beam? <Q> You need a retroreflector , which is a special type of reflector that sends light back towards its source (unlike a simple mirror). <S> This is what you'd put on the rear of your bicycle to reflect the headlights of incoming drivers back at them <S> so they see you at night. <S> You can get them for cheap in any bicycle store or supermarket. <S> Even better, shown below is one that is designed to be screwed at the back of a trailer. <S> If you want to mount it with a screw, that would be a nice choice, as it already has a hole. <S> Next, put your IR LED and TSOP receiver close to each other, both aiming at the reflector, and perhaps a black plastic or cardboard separation between the LED and TSOP... <S> Adjust LED power down to make sure the signal is detected with the reflector, but not with IR light bouncing off the people you want to detect, and you're all set.
<A> A corner reflector has the advantage that it doesn't need to be aligned perfectly; it will reflect the light back where it came from. <S> The retroreflector that peufeu referenced is composed of many tiny corner reflectors. <S> These should work with near infrared, but it would be safest to find one that specifically says it is good for near IR. <S> You can also use polished aluminum as the reflector. <S> For a recent project I used 3" x 3" x 1/16" aluminum. <S> I sanded it smooth with ultra-fine sandpaper. <S> Then, I polished it with car polishing compound until I could see my reflection clearly. <S> To minimize stray light on the detector, you can put a small tube over the sensor. <A> You can mount your transmitter and receiver next to each other, but you'll need to block the direct light path. <S> Here are two examples of pre-built products. <S> I would model a design after the first example; I'm not sure the isolation of the second example is sufficient. <S> And, yes, you can use a reflector, but make sure it is appropriate to your IR wavelength. <S> For example, a standard bathroom mirror won't work well. <S> These mirrors are often aluminum under glass. <S> The aluminum reflects well, but the glass absorbs IR. <S> Some plastics absorb IR, whereas others reflect it. <S> You may need to test different options. <S> The best IR reflectors are often metal (copper, aluminum, silver, etc). <S> Here is a chart showing the reflectivity of different metals at different wavelengths. <S> It looks like copper, silver, or gold will outperform aluminum at typical near-IR wavelengths. <S> ( source ) <A> Look for line or proximity IR sensors. <S> Some even detect distance. <S> Many are compatible with breadboards or can be wired to them with 0.1" jumper wire or headers. <S> They have the emitter and detector built into the same module as the one shown below.
<S> Source: https://www.digikey.com/product-detail/en/sparkfun-electronics/ROB-09453/1568-1272-ND/5762422?WT.srch=1&gclid=EAIaIQobChMImKyvqPLI4wIVhcpkCh2mpwiNEAQYBSABEgJe5fD_BwE <A> Yes, it is very possible. <S> You just need to arrange the LED and detector appropriately. <S> IR is just light that your eyes can't see. <S> If you were trying to do this with a visible LED, what could you do? <S> For example: You could put the LED inside an opaque box with a hole at one end, so you could make it more of a "pinpoint" type of light. <S> You could put the LED and detector next to each other with a wall between them. <A> Putting a reflector on the other side is the obvious and trivial answer. <S> But it won't work completely reliably as a beam break detector because of reflections from the very objects which are breaking the beam. <S> For exactly this reason a distance-measuring SPAD LiDAR module is a better bet. <S> Since your objective is a doorway of less than 2 m, this is perfect. <S> Simply do fast, continuous distance measurements. <S> If the result shows less than (say) 4ft, then count it as a beam break. <S> $5-10 range. <S> I'll post the part number as soon as I remember it. <S> OK... the VL53L0X is one such.
Yes, you can put the IR LED and IR detector next to each other with a reflector on the other side. There are some excellent cheap modules available now...
Microvolt to millivolt/volt amplifier with common components Suppose one has access to common components and gear only: basic millivolt multimeter, badly regulated supply, linear regulators, 1% resistors, common BJT/FET, voltage references like LM431 or zener diodes, and not-too-bad op-amps with external null offset trimming like LM725, LM301, or even the old 741. Is it possible to build a reasonably simple general-purpose pre-amplifier, in order to measure µ-volt — or tens of µ-volt — DC or near-DC signals, with such components? What precision can reasonably be attained? Is a simple non-inverting configuration in two stages (e.g. 50x + 50x) good enough, or should one use a more (or less) sophisticated design? Can/should discrete components be used for the input stage? What precautions should be taken in practice, in addition to proper shielding? Are there references on the subject? <Q> The ancient LM725A was actually a pretty decent DC amplifier for relatively low (say < 100 \$\Omega\$ ) source resistance (such as thermocouples); null it with a good cermet pot and you'll get < 1µV/°C maximum drift with temperature. <S> Of course you can get a "zero drift" op-amp these days for a very small sum of your favorite fiat currency, with drift in the ±50nV/°C range, so that would be preferable in many cases, especially if the supply is to be relatively low voltage. <S> The "zero drift" amplifiers have a bit of weirdness in that much larger transient periodic currents come out of the inputs compared to even a typical bipolar op-amp <S> but the average is much less than that of a bipolar amplifier such as the LM725A. Sometimes that matters greatly, often it doesn't. <S> There is plenty of information out there on the circuit scheme that @analogsystemsrf mentions: chopping followed by AC-coupled amplification, followed by synchronous demodulation. <S> It's been used since the early days of electronics, using vacuum tubes, and probably before that.
<S> For example the IR detection system that R.V. Jones developed ca. 1938 (top secret at the time, as it was intended to help shoot down German aircraft), which used mechanical chopping. <S> You can find many references in the literature, and if you restrict your search to dates prior to about 1980 there won't be any modern parts involved. <S> Also look at "lock-in amplifiers". <S> There is a method that probably dates back to about the 1940s: using a 400 Hz AC-powered mechanical chopper, which can then be used to drive a center-tapped step-up transformer primary and/or an AC amplifier. <S> Here is a photo of a mechanical chopper from an eBay listing: <A> Philbrick Nexus did this with balanced varactor-diode bridges for the input chopping. <S> Today, you use analog MOSFET muxes. <A> I tried a simple LM725-based non-inverting 100x amplifier with precision resistors and an offset-null trimmer, powered from a clean 2x15V supply, and read out the result with a basic 3-1/2 digit multimeter on the 100 mV range. <S> It was not shielded but was very close to the measured device. <S> It proved to be stable enough (in my non-controlled environment) to measure voltages with µV precision for tens of seconds, which was quite enough for me. <S> But this is really the limit of what can be done with such a simple setup.
If you can put the amplifier in a typical well-controlled office or lab environment, away from air currents and heat-dissipating components, chances are your drift will be limited by other factors such as thermal EMF and EMI, somewhere in the few-µV range. If you also have analog MOSFET multiplexers, then you simply chop the signal, amplify 1,000x, and then use another analog MUX, synchronized with the first, to demodulate.
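The chop → AC-amplify → synchronously-demodulate idea is easy to verify numerically. Below is a toy sketch of my own (not from the answers; every number is invented for illustration): a 10 µV DC input is chopped by ±1, passed through an amplifier with an offset 500 times larger than the signal, and the offset averages out after demodulation.

```python
# Toy model of a chopper amplifier chain: chop -> AC amplify -> demodulate.
# All numbers are invented for illustration.
signal_uV = 10.0          # DC input we want to measure, in microvolts
gain = 1000.0             # AC amplifier gain
amp_offset_uV = 5000.0    # amplifier's own DC offset, 500x bigger than the signal

# Square-wave chop: the input is multiplied by +1, -1, +1, -1, ...
chop = [1 if i % 2 == 0 else -1 for i in range(1000)]

# The amplifier sees the chopped signal plus its own offset.
amplified = [gain * (c * signal_uV + amp_offset_uV) for c in chop]

# Synchronous demodulation: multiply by the same chop phase, then average.
# The signal term gets c*c = 1; the offset term gets c, which averages to 0.
demodulated = [a * c for a, c in zip(amplified, chop)]
recovered_uV = sum(demodulated) / len(demodulated) / gain   # -> 10.0
```

Without the chop, the 5000 µV offset would swamp the 10 µV signal entirely; with it, the recovered average is exactly the input.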
How do I remove this power supply noise? Above is the circuit diagram of my power supply. It converts the 12V DC input into 5V DC and then 3.3V DC with a linear regulator; I have highlighted it in red in the block diagram. The current situation is that whatever noise is on the 12V input shows up on the 5V and 3.3V rails. How do I filter this? How do I improve my design to get clean 5V and 3.3V even with a noisy 12V input? If I use a clean 12V supply I get the best results, but I can't give a clean 12V supply to everyone: customers can pick up any noisy power supply and it should work with my design. Below I am attaching two waveforms; yellow is 12V, blue is 5V and pink is 3.3V. This is a zoomed-out view, and a zoomed-in view. If you look at the waveforms, the noise appearing on 12V is carried onto the 3.3V. I was assuming the LDO would at least attenuate it more, but it is very high. On the PCB the ground is properly stitched with multiple vias from top to bottom. How do I get rid of this? What will I have to change or add in my design to improve it? Note: the 12V DC input supply is a 220V AC to 12V DC adapter. <Q> Good job with posting all your 'scope shots. <S> More useful would be a frequency spectrum capture. <S> That helps identify the source of noise more easily. <S> In the meantime, noise pickup happens most easily through inductive pickup. <S> If your PS wires are long(ish) and loosely strung, they will pick up more noise. <S> Try running PS wires close together, and using twisted pair where relevant. <S> Keep lines as short as possible. <S> Throwing capacitors at it doesn't work - as you have seen. <S> In fact, some of your caps might be increasing the ringing in conjunction with that L! <A> It seems to happen at harmonics of the line rate.
<S> This is showing up as common-mode noise on everything, and because it's an AC potential difference between your nominally-grounded body and its local reference, <S> it's bound to cause issues with your sensor when your body is nearby. <S> Where is it coming from? <S> Leakage (parasitic coupling) between the 12V DC-DC primary and the isolated secondary. <S> All line-powered supplies have it; good ones manage it better. <S> As an experiment, try grounding the board to safety (earth) ground to see if the noise goes away. <S> If it does, congratulations, you have leakage. <S> Another experiment: add a common-mode filter to the line-in on the 12V supply. <S> This would break the AC loop for noise and contain it in the power supply where it belongs. <A> I would recommend you go to FFT mode on your scope to pinpoint the frequency (or frequencies) of the noise. <S> Or you can try using ferrite beads at the output of the buck converter and connect a capacitor of a few microfarads to several microfarads (depending on the frequency) from +5V to ground after the ferrite bead. <S> Hope this also helps. Ferrite bead position
Add a common-mode filter to the 12V or get a better supply.
MOSFET is overheating when running on a 20A load I'm trying to build a controllable switch for a heavy load (a 3D printer), controlled by a microcomputer. I don't want to use a relay because of slowness, sound and durability. So I've built a switch based on the MOSFET IRLB3034 datasheet . And here is my circuit diagram: The main issue is, the transistor gets to more than 150 °C after 20 s of working at full power. Vgs = 4.5V or even 11V (that makes no difference) of constant DC. Vds = 1V and rising. Current is about 15A (the max of the power supply is 20A). What could be my possible mistake? <Q> Assuming you've actually measured Vgs at the MOSFET pins (it gets only about 90% of the drive voltage, and the drive voltage is heavily loaded by the LED), then one would tend to conclude that the MOSFET is not actually a genuine part of the type indicated. <S> The certainty would increase to near 100% if the MOSFET was sourced on a platform that hosts bad actors. <S> The dissipation should be no more than about 0.7W, which will get hot without a heatsink, but not excessively so ( <S> 150°C is excessive). <S> If the traces or wires going to the MOSFET are thin you may also be getting heating from those sources. <S> Tja is 62°C/W so the rise should be on the order of 40-45°C. <S> Rds(on) increases with increasing junction temperature, maybe 50%. <S> But not 125°C of rise worth. <A> the transistor gets to more than 150 °C after 20 s of working at full power If I understood correctly, this is being used to control a 3D printer heated bed. <S> It's not clear from your post what "full power" means, but if this really is a 3D printer application, I suspect it's not on all the time but rather driven with PWM due to PID control of the bed temperature. <S> Considering the gate capacitance of that FET and that the gate resistor is relatively high (1 kOhm), it could be that a lot of the heat you are generating is from turning it on and off (high RdsOn periods).
<S> If that is the case, you can try to lower the gate resistor a bit and/or find a part with lower gate capacitance. <A> As @Kripacharya said, figure 8 in the datasheet shows that I'm operating beyond the transistor's capabilities. <S> So that might be the main reason (as long as I can't test with a suitable MOSFET), so I need to find another one or another way to reach my goal. <S> P.S. <S> Could anyone suggest a transistor searching tool for similar cases? <A> The MOSFET you picked should be able to handle 15A with no heatsink without any issues. <S> The datasheet says that at Vgs = 10V the Rdson should be 1.4 mR. <S> P = I²·R = 15 · 15 · 0.0014 = 0.315W. <S> At 20A that would be 0.56W. <S> The datasheet also says that the junction-to-ambient thermal resistance is 62degC/W. <S> So at 560mW that gives a 35degC temperature rise over ambient. <S> Did you measure the Vds over the MOSFET? <S> At 15A and 1.4mR it should be only 21mV. If you see anything higher, the part is bogus. <S> By the way, in your schematic you are running the load current through a jumper (J5). <S> A standard jumper will have a resistance of 20-50mR, so at 15A that will dissipate at least 4.5W - that should be enough to melt the jumper. <A> I believe the consistent view is that your MOSFET is suspect. <S> If you were doing PWM, then maybe the large C on the input (10.3nF) was causing problems. <S> But because you are testing at DC this excessive heat should not take place.
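The answer's arithmetic can be packaged as a quick sketch (the Rds(on) and thermal-resistance figures below are the ones the answer quotes from the IRLB3034 datasheet; the helper function is mine):

```python
# Conduction loss and temperature rise for the IRLB3034,
# using the datasheet figures quoted in the answer.
rds_on = 0.0014       # ohms at Vgs = 10 V
r_theta_ja = 62.0     # junction-to-ambient thermal resistance, degC/W

def heat(i_amps):
    p = i_amps ** 2 * rds_on     # conduction loss, P = I^2 * R
    return p, p * r_theta_ja     # (watts, temperature rise in degC)

p15, rise15 = heat(15.0)   # ~0.32 W, ~20 degC rise
p20, rise20 = heat(20.0)   # ~0.56 W, ~35 degC rise
```

A rise of tens of degrees is expected; a 125 °C rise is not, which is why the numbers point at a counterfeit part rather than at the circuit.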
Try sourcing the same/similar MOSFET from a more reliable source.
Relay control by using a microcontroller I'm working on an I/O module to control AC and DC motors and lamps by using relays. But I don't know how to control the relays. Some say I should use an optocoupler, some say a transistor array. I found an optocoupler, the TLP280; the schematic is in the figure. Can I use it by supplying 3.3V to control a 24V DC motor and some lamps? <Q> The relay itself electrically isolates the control-signal terminals (operating voltage) from the switch terminals (NO/NC). <S> Or you can avoid a transistor array by using solid-state relays (SSRs). <S> They have a wide operating voltage, longer life, no mechanical noise, and they are optocoupled too. <S> You can directly drive SSRs from microcontrollers. <S> When choosing SSRs, make sure to use the correct type for the load you are going to use (both AC and DC solid-state relays are available). <A> Relay contacts create quite a bit of noise when they switch, particularly if the load has a lot of inductance (such as a motor, or even because of long wires that are not close to each other), so the opto-isolator can be a good idea, because it prevents the noise from being coupled back to the ground of the power supply used for your logic. <S> If you use a transistor array (and a catch diode across the relay coil) you will probably have no trouble at all driving the relay coil, but you may have issues when the loads are connected. <S> For this to be valuable, the relay supply should be isolated from the logic supply, say another 12V supply. <S> You will need a series resistor to control the LED current (your optocoupler has AC input capability, so one of the LEDs will be unused). <S> The CTR is as low as 50% depending on rank, so if you drive it with 5mA the output current might only be 2.5mA (allow perhaps 1mA to account for temperature and aging effects), <S> so you would need some kind of additional driver for most relays.
<S> Suppose you follow the optocoupler with a ULN2003 Darlington array; then you can switch substantial relays, and the catch diodes are included. <A> Using an optocoupler will give you better protection for your <S> I/O pins, because they are electrically completely isolated from the power rail. <S> This might make sense for harsh environments or outputs where you do not know what the user will connect to them. <S> But typically when you design the whole system this won't be necessary and you can just use FETs to switch the relays. <A> If you're not sure about the output side of the relays, especially with motors connected to them, rather be safe and use the optocouplers. <S> The datasheet says the LED has a forward voltage of 1.15V when passing a forward current of 10mA, <S> so a simple LED calculation should tell you the resistor value you need: R = (3.3 - 1.15)/0.01 = 215Ω, <S> so 220Ω should work. <S> Depending on how much current your relay coils draw you may want to add another transistor in between the opto and the relay, because I see the opto's transistor side can only handle 50mA collector current. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> If your relay coil current is less than 50mA, ignore the second transistor and connect the bottom of the coil to the collector of the opto. <A> The usual, simple approach is to use one or more FETs. <S> Your microcontroller can source enough current to switch on a FET, which can handle the higher power needed to switch your relay. <S> The most simple configuration could look like this: simulate this circuit – <S> Schematic created using CircuitLab
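The LED resistor calculation above, plus rounding up to a standard value, as a small sketch of my own (the forward-voltage figure is the TLP280 datasheet value quoted in the answer; the E12 table is just for illustration):

```python
# Series resistor for the optocoupler's LED, driven from a 3.3 V pin.
vcc = 3.3        # drive voltage from the 3.3 V rail
vf  = 1.15       # TLP280 LED forward voltage at 10 mA (datasheet figure)
i_f = 0.010      # target forward current, amps

r_ideal = (vcc - vf) / i_f               # -> 215 ohms

# Pick the next standard E12 value at or above the ideal resistance:
E12 = [100, 120, 150, 180, 220, 270, 330, 390, 470, 560, 680, 820]
r_std = min(r for r in E12 if r >= r_ideal)   # -> 220 ohms
i_actual = (vcc - vf) / r_std                 # -> ~9.8 mA, close enough
```

Rounding up rather than down keeps the current slightly below the 10 mA design point, which is the safe direction for the LED.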
You can simply use a transistor array to control the relays.
Adding a small AC signal to a DC current using inductive coupling Is it possible to add a small AC current onto a DC current in a wire using inductive coupling from an external source? Or is there another way? I want to add AC ripple to a DC current. simulate this circuit – Schematic created using CircuitLab <Q> First, there will be just one current in the circuit you show. <S> The current into the coil must be the same as the current out of the cell. <S> Use a transformer with the secondary winding connected as shown, in series with your DC source. <S> You can use a function generator or other sine-wave source connected to the primary winding of the transformer. <S> You will need to select a transformer designed to work at the ripple frequency of interest. <S> If you want your ripple to be at the mains frequency this should be easy. <A> Yes, you can do this. <S> The transformer must work at the frequencies of interest and you must keep the DC current through the secondary well below the rated peak current or core saturation will cause unwanted distortion of the ripple. <A> Any AC current will also flow through those circuit elements. <S> For example, if the DC current is being driven by a high-quality current source, that source will adapt to cancel the effect of the transformer. <S> A more extreme example: take the case of \$I_{DC} = 0\$ <S> because it's an open circuit. <S> You get no AC current. <S> But \$I_{DC} = 0\$ because it's a short circuit (a loop of wire) would give you the full AC from the transformer. <S> In many cases, you'll get only partial cancellation: there will still be AC induced, but less than you would get if the transformer were just driving a short circuit.
You can do pretty much exactly what you have drawn. The answer is, it depends on what’s determining the DC current.
How do piezoelectric accelerometer sensors generate output voltage? How do piezoelectric accelerometer sensors work, technically? I know that piezoelectric accelerometer sensors have a piezoelectric crystal, a mass and a spring. But where is the voltage at the output coming from? <Q> What makes a crystal a piezoelectric crystal is that it produces a voltage when deformed. <S> This is the defining characteristic of a piezoelectric crystal. <S> Are you sure you're asking about accelerometers and not gyroscopes? <S> Gyros need a piezo because they need a resonator, <S> but I don't know why an accelerometer would need a resonator. <S> Almost every accelerometer from Analog Devices has this paragraph under "Theory of Operation", which does not mention any piezo crystals. <S> From the ADXL1005 datasheet (an accelerometer) <A> A piezoelectric material converts between stress/strain and voltage; that's the defining feature of it. <S> An accelerometer can be made with a piezoelectric spring, anchored to the case, with a test mass on the end. <S> When the case is accelerated, the test mass has to be accelerated to match, this acceleration force being provided through the spring. <S> The strain in the spring produces the output voltage through piezo action. <S> The output from this type of accelerometer is very low. <S> Much more sensitive ones use capacitive sensing to measure the deflection of a silicon spring. <S> If you're concerned about how straining a piezo crystal produces a voltage... <S> Consider that a crystal is built from repeating units that can have some asymmetry of charge on them. <S> When crystals are bent, some types (piezo types) produce a net shift of charges within the structure. <A> Typical piezoelectric accelerometers do not use a resonator. <S> But there is research regarding accelerometers using differential resonators.
<S> This is described in the paper <S> High Frequency FM MEMS Accelerometer Using Piezoresistive Resonators : <S> The accelerometer uses two differential resonators, connected to the accelerometer proof mass by an amplifying leverage mechanism. <S> The mass plates vibrate in-plane and in opposite directions (anti-phase) and they are electrostatically driven at resonance. <S> When the proof mass is subjected to an external acceleration in the sensitive axis (Figure 1a), the springs sustaining the resonators are axially loaded: as the acceleration moves the proof mass, the springs of one resonator are under compression while the other resonator's springs are under tension. <S> The tensile/compressive axial loading shifts the resonance frequency, the tensile force increasing it and the compressive force decreasing it [6]. <S> This shift is proportional to, and a measure of, the external acceleration. <S> The differential design scheme also allows cancelling device thermal mismatches and nonlinearities. <S> [...] <S> The output signal is measured piezoresistively, using the modulation of the DC bias current in the vibrating connection beam. <S> So in this accelerometer they do not use a piezoelectric sensor, but a piezoresistive one. <S> This is the principle behind strain gauge strips. <S> The length of the piezoresistive material is changed, and thereby the resistance changes. <S> If you apply an external voltage, this voltage is modulated with the resonance frequency, but there is no voltage generated inside the sensor. <S> I'm not sure if you really meant that, but that is the only accelerometer with resonators I know of.
The piezoresistive double-mass resonators are electrostatically driven in anti-phase and the output signal is measured piezoresistively by applying a bias current to the connecting microbeam of the double-mass resonators.
Memory capability and powers of 2 Why are computer memory capacities often multiples of a power of two, such as 2^10 = 1024 bytes? I think it is something related to binary logic, but I do not understand the precise reason for it. <Q> Memory addresses are binary numbers. <S> The range of an N-bit (unsigned) binary number is 0 to 2^N - 1, a total of 2^N different values. <S> Since addresses are passed to memory chips as binary numbers, it makes sense to build them in capacities of powers of 2. <S> That way, none of the address space is wasted, and it's easy to combine multiple chips/modules to build larger memory systems with no gaps in the address space. <A> A 1024 x 1 memory chip requires 10 address lines, and you get full utilisation of all addresses. <S> Now, if someone brought out a 600 x 1 memory chip, it would still need 10 address lines. <S> It can't use 9 because that could only uniquely define 512 memory positions. <S> Then think of what would happen if someone wanted to use two of the 600 x 1 memory chips to give a combined memory size of 1200. <S> How would the address lines (plus 1 more) cope with uniquely embracing each address slot? And if there were an MCU incrementing through memory in order to store contiguous data, that MCU would need special knowledge about which binary addresses are unused. <A> With 1 address wire you can access 2 different addresses. <S> With N address bits or wires, you can access 2^N different addresses. <S> It's not much more complex than labeling 10 different items with a single decimal digit or 26 different items with a single letter (depending on how many letters you have in your alphabet, of course). <A> This is true for computers which use the binary system for number representation, and all modern computers do. <S> For example, how many grams are in 5 kg? <S> 5000, right? <S> Now imagine we define a kilogram as a unit of 679 grams. <S> How many grams are 5 kg now?
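The address-line argument can be sketched directly (my own illustration, reusing the hypothetical 600 x 1 chip from the answer):

```python
from math import ceil, log2

def address_lines(capacity):
    """Minimum number of address lines needed to select `capacity` locations."""
    return ceil(log2(capacity))

# A power-of-two chip uses its address lines fully:
lines_1k = address_lines(1024)     # -> 10, and 2**10 == 1024 exactly

# The hypothetical 600 x 1 chip still needs 10 lines,
# because 9 lines only reach 2**9 = 512 locations:
lines_600 = address_lines(600)     # -> 10
wasted = 2 ** lines_600 - 600      # -> 424 unused addresses
```

The 424 unreachable-but-decodable addresses are exactly the "gaps" the answer warns about when combining non-power-of-two chips.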
Using powers of the base means using round numbers, which makes the math a lot easier, and easier math means simpler implementation.
What happens when we connect a battery to a circuit? Example setup: I have a piece of metal and a 12V battery. When the battery is connected to the metal, current flows through the metal. So, my question is: do electrons from the battery move throughout the entire circuit? If yes, what is the velocity of the electrons? What I imagine when there is a current flowing through a circuit: I imagine a bunch of electrons leaving the negative terminal of the battery; these electrons travel through the circuit and reach the positive terminal of the battery, and how fast the electrons travel decides the value of the current. Is this correct? I'm confused. <Q> So, my question is: do electrons from the battery move throughout the entire circuit? <S> If yes, what is the velocity of the electrons? <S> Electron flow is actually pretty slow. <S> What is very fast is the onset of the flow; it's nearly instantaneous in all the metal. <S> Think about it like a water hose filled with (standing) water <S> : Even if the water flows slowly through it, turning on the pump will nearly instantly make water come out of the end. <S> So, no. <S> Current is not when the very same electrons leaving one terminal reach the other. <S> Current is the amount of electrons flowing due to the electric field that the voltage causes. <S> So, while the speed of the electrons is pretty slow, the speed at which a change in the electric field propagates is very high – in fact, it's the speed of light. <A> You can imagine the piece of metal as a resistor. <S> Instead of being measured in ohms, the resistance would be more in the milliohm range. <S> It would be the same as placing a resistor across the terminals of the battery, but much more current would flow. <S> The electric field is what actually drives the current. <S> If you want to find the velocity of the electrons, use the calculator found here <A> There are a lot of electrons in an ordinary wire.
<S> Wikipedia has a worked example of drift velocity showing a 2mm diameter (about AWG 12) copper wire carrying 1A has a drift velocity of \$2.3\times10^{-5}\$ m/s. <S> At room temperature the electrons are whizzing about at around 1570 km/s on average (the Fermi velocity), so if you attempted to observe the electrons' velocity alone you'd have to have a very accurate measurement to even notice the drift due to the 1A current. <S> To put it another way, to move the average electron in an AWG 12 wire a meter in a second you'd need a current of more than 40,000A. As it will fuse at around 240A, <S> that's not going to happen under plausible conditions.
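The Wikipedia drift-velocity example is easy to reproduce from \$I = nAqv\$ (the free-electron density for copper used below, about \$8.5\times10^{28}\$ per cubic meter, is a standard textbook figure, not from the answer itself):

```python
from math import pi

# I = n * A * q * v, so the drift velocity is v = I / (n * A * q).
I = 1.0          # current, amps
d = 2e-3         # wire diameter, meters (about AWG 12)
n = 8.5e28       # free electrons per cubic meter in copper (textbook value)
q = 1.602e-19    # elementary charge, coulombs

A = pi * (d / 2) ** 2
v = I / (n * A * q)      # -> about 2.3e-5 m/s, matching the quoted figure
```

At that rate an individual electron would take over twelve hours to travel a meter, which is why the field propagation, not the electron motion, is what matters.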
Because there is resistance, they are pretty slow. At ordinary currents, say 1A, the flow is 1 coulomb/second, which is about \$6.24\times10^{18}\$ electrons per second, which sounds like a lot, but it's not. The velocity of the electrons is determined by the cross sectional area and the amount of current.
Do all microcontrollers from one family have the same processor core? When we say "family" in the context of a microcontroller, do we mean that all the microcontroller models under the same family name share exactly the same processor core? For example, the ATmega128, ATtiny12 and ATmega16 are all in the same family, namely Atmel's AVR 8-bit family. Can we then say that the thing that makes them the same family is their CPU, and that it is exactly the same? Do the AVR 8-bit and AVR 32-bit families have different processors? <Q> AVR is a bit of a weird example because it's a single-vendor CPU core -- it's only present in certain parts made by Microchip (formerly Atmel). <S> A better example of the distinction between a microcontroller family and a processor core is in ARM microcontrollers. <S> There are many families of ARM microcontrollers (ST STM32, Atmel SAMD, NXP LPC, and many others -- some of which are divided into many subfamilies as well) but a much smaller number of ARM processor cores (Cortex-M0, M4, M7, etc), all of which are licensed from Arm (owned by SoftBank). <S> Do the AVR 8-bit and AVR 32-bit families have different processors? <S> Yes. <S> AVR8 and AVR32 are almost completely unrelated, beyond the fact that they were both designed by Atmel. <A> I wouldn't say "exactly" the same code as much as "the processors are developed with migration between them in mind". <S> "Family" seems to be a loose term, so treat it accordingly. <S> The STM32 microcontrollers are a family. <S> And within this family are different series such as STM32F3, STM32F4, STM32F7, STM32H7. But they certainly do not share the same core, <S> although there are similarities between the cores and some cores are supersets of other cores. <S> Members of the same series (which also appears to be a loose grouping) do share the same core (except for multi-core versions where the asymmetric core may or may not be present) and can more-or-less run each other's code.
<S> Just the peripherals are different (or sometimes an extra processor is present). <A> The word "family" is not a technical term in this context, so it has no fixed, standard meaning. <S> The only way you can be sure of compatibility between processor cores is if there is a published specification for the processor instruction set and architecture. <S> That's one of the big advantages of using one of the ARM processor cores... <S> if you know how to write code for a Cortex-M3 processor you can write code for a Cortex-M3 from TI, NXP, or whoever. <S> ARM maintains specification documents for the Cortex-M3 architecture and anyone who sells a "Cortex-M3" must conform to those specifications.
That being said, I believe you are correct that the parts you mentioned all have very similar processor cores; the differences between those parts are primarily in the peripherals and memory size. Anybody can use the word "family" to mean anything they want.
Copper dimension around a PCB component's hole (pad?) I had my first PCB manufactured (by a Chinese manufacturer). In this first version I only used through-hole components (not SMD). I used the latest version of Eagle to design it, without taking care to set the pad (I hope that is the correct name) dimension and shape. The program "decided" to use only circular pads around the component holes for all the components. Is the size of these pads (the dimension of the copper around the hole) in your opinion correct, or should it be a bit larger (from what it is possible to see in the attached pictures)? I noticed that when soldering some components, the solder did not immediately spread on the pad, maybe because they are a bit thin. I also don't know why sometimes the shape of the copper is oval and not circular in some circuits I have (not designed by me). I attached pictures of the top and bottom of a zone of my PCB. EDIT 1: with your help I would like to understand in which step something went wrong. Eventually I can write an email to the manufacturer to understand better on their side. I opened the gerber file with gerbv and only switched on the files in the order TXT (drill), GBS and GBL. I attached 2 images, with a focus in the second to capture the size of the annular ring (the gerber image corresponds more or less to the first real image, but rotated). Is it in your opinion the same dimension as in the real picture? <Q> The exposed copper around a through hole is called the annular ring . <S> The annular ring should be appropriately large to accommodate solder-to-pin wetting. <S> Your PCB fabricator should specify a minimum annular ring size, so be sure to check that. <S> For example, OSH Park specifies 0.127mm (5 mil) . <S> Otherwise, when you design your PCB, you can generally add 0.25 to 0.30 mm (10 to 12 mil) to the hole diameter. <S> (There are IPC standards such as IPC-7251 which you can follow if you need/want to.)
<S> Conditions where you may want to increase the annular ring size: the component needs physical strength (such as a connector or bulky heatsink); the component requires a large amount of current; or the component will be soldered by hand or by some other less-precise operation. <S> An annular ring size decrease or modification would be due to the pitch or proximity of other pins on the component. <S> Often oval pads are used when extra surface area is needed but other pins are too close to allow a circular shape. <S> There are other considerations such as solder bridges (depending on process). <S> For more information, see: How to determine annular ring width for thru-hole pads? <A> Yes, those pads look usable -- but barely. <S> Yes, they should be bigger. <S> Generally you choose the footprint. <S> On a completely different note -- those pads look much smaller than what is found in the Eagle library. <S> I suspect that something happened between your Gerbers and the fab. <S> Either the fab couldn't read your aperture files, or you misnamed them. <S> Consequently, the fab applied their own defaults to how much copper should go around a hole. <A> When you create a PCB layout you first need to obtain the manufacturer's design rules from the intended manufacturer of the board. <S> Most PCB houses will supply their design rules in Eagle format, so it is a simple matter to import them into Eagle. <S> If you want to have someone else assemble your boards then you also need to obtain the design rules, if any, of the assembler.
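The rule of thumb above can be written down as a quick sketch (the 0.127 mm minimum is the OSH Park figure quoted in the answer; the function names and the 1.0 mm example hole are mine):

```python
MIN_RING_MM = 0.127   # example fab minimum annular ring (the OSH Park figure)

def pad_for_hole(hole_mm, extra_mm=0.30):
    """Pad diameter from the rule of thumb: hole diameter plus 0.25-0.30 mm."""
    return hole_mm + extra_mm

def annular_ring(hole_mm, pad_mm):
    """Copper ring width on one side of the hole."""
    return (pad_mm - hole_mm) / 2

pad = pad_for_hole(1.0)                             # 1.0 mm hole -> 1.3 mm pad
ok = annular_ring(1.0, pad) >= MIN_RING_MM          # 0.15 mm ring: passes
too_small = annular_ring(1.0, 1.2) >= MIN_RING_MM   # 0.10 mm ring: fails
```

Note the 0.25-0.30 mm is added to the diameter, so the ring on each side is only half of it; that is why the fab's minimum must be checked per side.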
Pad size and shape are determined by the footprint that the layout tool uses.
Practical issue aiming non-visible infrared light at a reflector I built a beam-breaker set-up, with the transmitter and receiver next to each other, and a reflector about two meters out. Basically, the transmitter is an IR LED and the receiver a TSOP. Between the two components, I placed cardboard. I am constantly getting readings from the Arduino terminal on the alignment of the beams (HIGH or LOW). I believe the set-up and individual devices are not faulty and all is working as it should, including range, after extensive testing. When I place a retroreflector in front of the devices, up to about a meter, it works. Further, not so much. It really comes down to having the IR LEDs pointing the same way, at the same angle. They don't now, and if I move the reflector further away, there logically comes a point where one beam 'falls off' the reflector. After an entire afternoon of fidgeting, I finally managed to aim, using trial and error, only one IR LED at the reflector two meters away. Methods I tried include plain measuring on flat surfaces against straight walls, using a laser pointer, bending and cutting the legs of the LEDs the exact same way, moving the reflector up, down, left and right and, finally, keeping the reflector in place but moving the IR LEDs up, down, left and right. The problem is that, even if I manage to align a LED, all it takes is a small accidental nudge to have to start all over. Does anyone have practical tips on how to aim IR LEDs? Transmitter An ATtiny drives the LED through a MOSFET, near max capacity despite the limit on the pin.
It uses this code to get 36 kHz and send bursts so as not to overload the TSOP:

// ATMEL ATTINY45 / ARDUINO
//
//                  +-\/-+
// Ain0 (D 5) PB5  1|    |8  VCC
// Ain3 (D 3) PB3  2|    |7  PB2 (D 2) INT0 Ain1
// Ain2 (D 4) PB4  3|    |6  PB1 (D 1) pwm1
//             GND  4|    |5  PB0 (D 0) pwm0
//                  +----+

void setup()
{
  DDRB |= (1 << PB0);       // Set pin PB0 as output
  DDRB |= (1 << PB1);       // Set pin PB1 as output
  TCNT0 = 0;
  TCCR0A = 0;
  TCCR0B = 0;
  TCCR0A |= (1 << COM0A0);  // Timer in toggle mode, Table 11-2 - PB0
  TCCR0A |= (1 << COM0B0);  // PB1
  TCCR0A &= ~(1 << COM0A1);
  TCCR0A &= ~(1 << COM0B1);
  TCCR0A |= (1 << WGM01);   // Start timer in CTC mode, Table 11-5
  TCCR0B |= (1 << CS00);    // Prescaler, Table 11-6
  OCR0A = 12;               // CTC compare value, ~36 kHz
}

void loop()
{
  // cycle = 1/36 kHz = 28 us
  TCCR0A |= (1 << COM0A0);  // burst on for 10+ cycles
  TCCR0A |= (1 << COM0B0);
  delayMicroseconds(500);
  TCCR0A &= ~(1 << COM0A0); // off for 14+ cycles
  TCCR0A &= ~(1 << COM0B0); // (clearing COM0x0, not the already-zero COM0x1, is what actually stops the toggling)
  delayMicroseconds(1000);
}

Receiver The receiver is an active-low TSOP that, via a comparator, is connected to an Arduino calling the attachInterrupt() function. There are two LEDs and two receivers to determine direction. The C code calculates direction and spits out the result. <Q> Pulse the LED to let you run them brighter without overheating, <S> so aim doesn't matter as much. <S> Build a mounting block <S> so they are held rigidly in place relative to each other. <S> This may require the ability and tools to drill accurate holes at small angles. <S> It goes without saying not to rely on breadboards when alignment matters. <S> Consider placing a visible LED with the same FOV above the existing photodiode and IR LED so that you can aim them better. <S> Of course, this would require that those be rigidly attached to the IR devices so they remain in alignment. <A> The obvious answer is a retroreflector. <S> Cheap or expensive, as done in industry or in surveying. <A> The problem is that, even if I manage to align a LED, all it takes is a small accidental nudge to have to start all over.
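The carrier frequency in that sketch can be sanity-checked. In CTC toggle mode the pin toggles once per compare match, so the output frequency is F_CPU / (2·N·(1+OCR0A)). Assuming the ATtiny's default 1 MHz internal clock and the /1 prescaler the code selects (the clock assumption is mine, not stated in the question), OCR0A = 12 actually lands near 38.5 kHz rather than exactly 36 kHz:

```python
# Sanity check of the ATtiny CTC carrier frequency (clock speed is assumed).
F_CPU = 1_000_000   # Hz; assumes the ATtiny's default 1 MHz internal clock
N = 1               # timer prescaler selected by CS00 in the sketch

def toggle_freq(ocr0a):
    """Pin frequency in CTC toggle mode: f = F_CPU / (2 * N * (1 + OCR0A))."""
    return F_CPU / (2 * N * (1 + ocr0a))

f12 = toggle_freq(12)   # -> ~38.5 kHz
f13 = toggle_freq(13)   # -> ~35.7 kHz
```

Either value is typically within a TSOP's passband, but the sensitivity drops off-center, which costs range; worth checking against the specific TSOP part's datasheet.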
<S> The solution for this is to make some kind of mounting hardware or fixture that holds the devices (LEDs and detectors) <S> so they won't be moved by a small accidental nudge. <S> Does anyone have practical tips on how to aim IR LEDs? <S> I'll disagree with what others have said. <S> I'd try to narrow the LED output beam rather than widen it. <S> Just a few degrees beam divergence should be enough to reach your detector if your target is a corner cube at a distance of ~1 m. <S> By using a wider divergence (10, 20, 30 degrees?) <S> you're just allowing your received power to fall off quicker as the target distance increases. <S> For really long distances, you could add a telescope in front of the receiver to increase its effective area.
You can place a collimating lens in front of your LED to focus its output into a narrower beam.
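The divergence trade-off in that last answer can be put into rough numbers with a small-angle model (a sketch only; the 5 mm source size and the angles in the check are assumptions, not values from the question):

```c
#include <assert.h>

/* Approximate beam spot diameter at a given distance, using the
   small-angle model: spot ~= source_diameter + distance * divergence.
   div_rad is the full beam angle in radians. */
double spot_diameter_m(double src_d_m, double dist_m, double div_rad)
{
    return src_d_m + dist_m * div_rad;
}
```

At 2 m, a typical wide IR LED (~20 degrees, ~0.35 rad) paints a spot roughly 0.7 m across, so almost no power lands on a few-centimeter retroreflector, while a ~2 degree collimated beam stays under 10 cm.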
Activating Relay Coil (NPN and MCU) Circuit primitive in CircuitJS First, I'm having a hard time with transistors, and only recently managed to pass current through an NPN as planned (70-ish mA from 5 V with a 325 hFE 2N3904 with a 10K Ohm resistor to the base). Second, I'm using an ESP8266 devboard, which can only pass 40-ish mA through its GPIO pins, and a Songle relay that needs 70 mA at the coil according to the specs. I have 3.3 V from my breadboard power supply going to the rail, then to the MCU Vin, and again from the rail to the relay's coil, and the other side of the coil to the NPN's collector; the emitter is connected to the ground rail, as is the MCU. The base is connected to a GPIO, via a 10K resistor; the hFE is 325. Everything is powered from the same battery, so is at the same potential (I used to try while the MCU was connected to USB, but I wasn't sure if this was a problem or not). I cannot get the relay to click, or even an LED to light. If someone could walk me through the math, and tell me what I'm doing wrong, I'd greatly appreciate it. Transistors were pushed to me as "simple", and "like a switch", but it's neither to me, and the simple fact there's also PNP transistors kinda makes me want to cry. Again, any help would be dearly appreciated. I would post a picture of the setup, but there's a lot of stuff on the breadboard, and I'm not sure it'd be clear, but do let me know if it would be necessary. [Picture added] <Q> I had the same problem too. <S> First, assume your coil needs 100 mA to click. <S> Then your hFE is 325, which is the current amplification factor in common-emitter mode. <S> That is, the current into your base will be 325 times smaller than the current into your collector. <S> You said you put a 10k resistor in between them. <S> Assuming that the ESP is running exactly at 3.3 V and the voltage drop from base to emitter is 0.7 V, <S> we get a potential drop of 2.6 V across the resistor.
<S> This leads to a base current of 2.6/10000 = <S> 0.26 mA. This will result in a collector current of 0.26 <S> × 325 <S> ≈ 85 mA, <S> which is barely enough for the required specs. <S> You can either use <S> a transistor with a higher hFE or lower the resistor down to 6.4k or 5.6k, and then it will work. <A> After a while, and no more answers, I carefully inventoried all my transistors, and picked a transistor whose tolerances were closer to what was required of it, and it worked... <S> It managed to pass all the current the coil needed, and the relay clicked. <S> I was very happy, but I still couldn't understand why this transistor worked, and why the other one didn't... <S> What was really bugging me was that, besides the "amplification", there were no settings in my circuit simulator, which is otherwise pretty complete, and it would've surprised me if there had been other significant parameters for the transistor... <S> I decided to replace the new transistor in the circuit with the old one, just to see if it'd work, and indeed, the relay clicked. <S> The problem was that, besides the base being saturated, my 2N3904 transistor wasn't letting more than 40-ish mA through, which wasn't enough to supply the 70-ish <S> mA the coil needed (as per spec), <S> but if I just connected the coil to the 3.3 V line, it clicked just fine, and enough current could flow. <S> That was the problem. <S> But now the problem was gone... <S> The relay was clicking just fine... <S> I'm still not sure what the problem was; maybe there's a difference I'm not seeing between what's in the picture and my re-wired circuit; maybe there was a bad connection; maybe one of the terminals was oxidized, and the resistance wouldn't allow more than 40 mA at 3.3 V; maybe my understanding just wasn't good enough to make it work, and it still isn't good enough to be able to tell what was wrong, ...
<S> Whatever the case may be, it works now, and I understand transistors a bit better than I did, and I'm going to keep in mind that maybe it was a connection problem, and see if maybe I don't need a new breadboard or something... <S> I kinda wish I had measured the resistance now, but it's too late for that. <S> At any rate, thanks again to those who tried to help, and I am still glad to be part of the community. <S> Cheers, <A> It would be a good idea to add a flyback diode around the relay coil. <S> For example, if you need 100 mA CE current with an hFE of 325, that means you need a minimum of 307 µA in the base, so I would shoot for 400 or 500 µA. <S> Some people just look at the curve for the transistor and dump a load of current into the base to force the transistor into saturation no matter what.
Also you should probably have some margin in your base current.
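That margin rule can be captured in a quick calculation (a sketch; the 3x overdrive factor is one common choice, not a hard rule):

```c
#include <assert.h>

/* Base resistor for a saturated NPN switch with forced-beta margin.
   vcc: GPIO drive voltage, vbe: base-emitter drop (~0.7 V),
   ic: required collector current, hfe: datasheet gain,
   margin: overdrive factor (2-3x is typical). */
double base_resistor_ohms(double vcc, double vbe, double ic,
                          double hfe, double margin)
{
    double ib = (ic / hfe) * margin;  /* overdriven base current */
    return (vcc - vbe) / ib;
}
```

For the question's numbers (3.3 V GPIO, 100 mA coil, hFE 325) a 3x margin calls for roughly 2.8 kΩ rather than 10 kΩ.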
Why transformer is heating without load I'm working as a technician in a computer store and I have a lot of scrap transformers. Most of them are small and have 6-8 pins. I wonder, can I use them for my projects? If I remember correctly these transformers only work at high frequency, and if you feed them low frequency they burn or explode. But I don't know why they heat up without a load on the secondary coil. Also, I think this is the same reason you can't get 5000 V from a 9 V transformer by connecting it in reverse. <Q> The transformer is relying on the magnetic field in the transformer core to limit the flow of current. <S> If you took away the core, and connected the winding directly to your 220V supply, then it would rapidly burn out, as the inductance is not enough to limit the current flow. <S> 50Hz (and 60Hz) <S> transformers normally use a laminated iron core. <S> These are made of sheets of steel, with an insulating lacquer between them to stop eddy currents from flowing. <S> Adding the iron core creates a strong magnetic field, which opposes the current flow through the transformer primary. <S> These cannot be magnetized as strongly as iron. <S> So when connected to a 50Hz supply, the core "saturates" - it becomes as strongly magnetized as it can be. <S> The magnetic field isn't enough to limit the current through the primary, and the transformer overheats. <A> Remember the transformer equation that relates volts, frequency, turns, core area and magnetic flux density. You are dishing up 50Hz at <S> 14 V. The surplus switchmode transformers could be designed for 50kHz, <S> so if you must test at 50Hz you would want 250mV or less to avoid saturation of the ferrite core. Why <S> not use a signal generator set at, say, 100kHz? <A> Even without a load on the secondary of a transformer, even with the secondary open-circuited, there's current in the primary. <S> Why? <S> Because the primary coil is an inductor.
<S> That current is determined by the drive voltage, the frequency and the inductance: $$I = \frac{V}{\omega L}$$ <S> A transformer built for high frequencies, like yours, keeps the current reasonable with a pretty small inductance because the frequency is high. <S> If you run it at a lower frequency, like 50Hz, this inductive current goes way up. <S> Then even the small resistance of the coil combined with that large current generates heat. <S> You wouldn't expect a transformer to work right at DC, right? <S> By driving it at much lower than the designed frequency, you're getting too close to that point. <S> You might be concerned that this inductive current violates the usual <S> \$I_s/I_p=n_p/n_s\$ rule. <S> That rule is associated with in-phase current, the kind that carries power. <S> The inductive current in a proper transformer is out-of-phase, and doesn't convey net power <S> (as you've seen, that's just an approximation, and the windings' resistance can vary that phase enough to heat the windings).
High frequency transformers use a different core material, often ferrite (a ceramic that contains a lot of iron).
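The last answer's point can be sketched numerically. Treating the primary as an ideal inductor (the 1 mH inductance in the check is an assumed, illustrative value, not measured from any particular transformer):

```c
#include <assert.h>

/* No-load (magnetizing) primary current of an ideal inductor:
   I = V / (2*pi*f*L). */
double magnetizing_current_a(double v_rms, double f_hz, double l_h)
{
    const double two_pi = 6.283185307179586;
    return v_rms / (two_pi * f_hz * l_h);
}
```

The same 14 V that draws tens of milliamps at 50 kHz draws a thousand times more at 50 Hz, which is where the no-load heating comes from.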
Rotary switch to turn LEDs on cumulatively Problem: I have a 12 position rotary switch and 11 LEDs that I want to switch on one after the other until they are all on (on pos 12). Limitations: It would be easy to use a μC, but I wanted to keep this simple and the part count as low as possible. Things I tried: In my head I planned on using a diode for each pin of the switch to connect it to the previous one (works for about 2 LEDs) but forgot about the diode voltage drop that adds up. And using a diode for every connection (about 66) is just too much of a mess… @jonk explains this approach perfectly in his answer below. Question: Any elegant ideas what I could do to achieve that? I'm a beginner and probably miss something :) Other ideas: I found cumulative rotary switches but they seem to be quite rare and I did not find one with 12 positions. Using an incremental rotary encoder to feed two shift registers instead (would this work? / might be easier to cave and use a μC) <Q> This answer was written before the OP commented that s/he is using a 3.2 V supply. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> The simplest option if a high enough voltage supply is available. <S> With SW8 closed D1 to D7 will light. <S> simulate this circuit Figure 2. <S> For lower supply voltages the chain can be split. <S> In this case SW8 being closed lights D7 but also provides a ground to light all the LEDs in the upper chain. <S> You can further refine Figure 2 for lower supply voltages but it would require more and more diodes. <S> Figure 3. <S> A single-pole 12-way switch will suffice. <S> Constant current sources (by me): AL5809 constant current driver. <S> Simple constant current driver. <S> (This is in the negative line.) <A> From this datasheet I find the following diagram for your rotary switch: <S> It's very simple. <S> Rotation simply moves the line from one position at the bottom of the above diagram to another.
<S> So you can connect A to any one of the bottom, numbered terminals. <S> (But no more than one, of course.) <S> You mentioned the idea of diodes and because of that I'll take that approach and run with it. <S> I think you understand it, already. <S> So I'll capitalize on that fact. <S> Transistor in his answer already gave a nod in the following direction <S> and I think I'd like to elaborate it out a bit so you can see why he wrote, "...would require more and more diodes." <S> Here's the schematic diagram. <S> Notice the pattern? <S> Notice a whole lot of diodes? <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The wire-OR diodes are numbered from #13 to #78, so you'll need 66 of them. <S> These are likely just 1N4148 diodes (cheap and available.) <S> Note that switch position #2 has only one diode going away from it and towards LED #2. <S> But switch position #3 has two diodes going away from it, one towards LED #2 and one towards LED #3. <S> Etc., until you reach switch position #12 where there are 11 diodes, one going to each of the LEDs. <S> You can work out the resistor value based upon your supply voltage rail's value ( \$+V\$ ), the estimated voltage drop across the LED ( \$V_\text{LED}\$ ), the estimated voltage drop across a 1N4148 diode ( \$V_\text{D}\$ ), and the desired LED current ( \$I_\text{LED}\$ ) as: \$R\approx \frac{+V-V_\text{LED}-V_\text{D}}{I_\text{LED}}\$ . <S> (Select a nearby standard value.) <S> This is probably why it would be cheaper/better to just get an MCU to do this for you. <S> All those diodes are then just some internally computed logic expression. <S> And you can even handle subtle things like whether or not your rotary switch is a make-before-break or break-before-make type and any appropriate debouncing issues that may "clean up" any noticeable issues you find. <A> For example OR gates and AND gates would work.
<S> These standard chip packages contain 4 gates each, so it would require 3 chips to drive 11 lines. <S> Example part numbers of the 3.3v logic equivalents are: MC74VHCT32A and 74LVT08. <S> Note that if you use 3.3v logic parts and later want to move up to a 5v system, some 3.3v logic parts are not compatible with a 5v Vcc. <S> Alternately, a standard CMOS non-inverting buffer might work too; some of these are rated for a 3v Vcc minimum. <S> The CD4050 type has 6 gates per chip so only 2 chips would be needed. <S> The 3 circuits below show only 3 lines of each type. <S> Note that for the 2-input gates the last gate (line 11) would have both inputs shorted together. <S> simulate this circuit – <S> Schematic created using CircuitLab
Instead of a sack full of diodes you could use logic gates from the 3.3v family. Two types that should work would be the 7432 (quad OR gates) or 7408 (quad AND gates). In the above diagram, I've numbered your LEDs and their current-limit resistors from #2 to #12 (holding #1 in reserve as that LED you are not implementing.) You would need 11 total lines for the 11 active switch positions and LEDs. If you were to step up to a 5v system you could use the standard 5v TTL versions that are fairly low cost.
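The resistor formula from jonk's answer, \$R\approx\frac{+V-V_\text{LED}-V_\text{D}}{I_\text{LED}}\$, as a quick helper (the 3.2 V supply, 2.0 V red-LED drop and 10 mA in the check are illustrative values, not from the question):

```c
#include <assert.h>

/* Series resistor for one LED fed through a wire-OR 1N4148:
   R = (Vsupply - Vled - Vdiode) / Iled. */
double led_resistor_ohms(double vsupply, double vled,
                         double vdiode, double iled)
{
    return (vsupply - vled - vdiode) / iled;
}
```

With a 3.2 V supply, a 2.0 V red LED, a 0.7 V diode drop and 10 mA, that gives about 50 Ω; pick the nearest standard value.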
Why/when is AC-DC-AC conversion superior to direct AC-AC conversion? I am currently studying wind power and the power electronics used for it. In wind power a generator is driven by wind, thus the resulting power is of widely varying frequency and amplitude. The power grid, in turn, has strict requirements for the input power in terms of frequency, phase shift and sinusoidal form. For this reason, power converters are today used routinely in wind power. The predominant way to get the power into the grid is to use an AC-DC converter followed by a DC-DC converter and a DC-AC converter. This seems rather complicated compared to using a single direct AC-AC converter. Why is the indirect conversion via the DC "in-between" route preferable? (This is actually a repost from Engineering , since I only found out later that there is a more active, thematically fitting, non-beta Electrical Engineering.) <Q> There is a type of converter which can do this: the matrix converter. <S> In theory it can take many phases in and produce many phases out at quite a wide range of frequencies. <S> It also has the additional benefit of not needing any power passives (in theory): no large capacitor, no large inductors. <S> However, there are two golden rules with matrix converters: <S> Thou shalt not short-circuit the supply. Thou shalt <S> not open-circuit the load. <S> It is point #2 that makes the topology impractical, as a simple loss of power will cause the inverter to blow up. <S> There is a variant of the matrix converter called the cycloconverter which uses thyristors and does not suffer the same issues as a full matrix converter. <S> It, however, has a limitation of only being able to synthesise an output frequency around 1/10th of the input frequency. <S> This limitation is fine for marine applications, which typically use 400Hz electrical supplies, so generating 40Hz isn't too limiting for propulsion. <S> So why AC-DC-AC instead of direct AC-AC ... <S> The complications and limitations.
<S> A six switch inverter is extremely versatile. <A> When two routes are possible, there is rarely a good answer for why one particular one was chosen. <S> It's often accidents of history, or advantages to one or the other depending on local industries, or common components. <S> There is an all-electronic route directly from 3-phase AC at one frequency to another; it's called a Matrix Converter. <S> It contains 9 switches in a 3x3 matrix, to connect any phase to any other. <S> With suitable timing of the switch instants, and suitable input and output filters, it can create a similar output voltage to the input. <S> They are becoming increasingly used for motor drives. <S> However, I can think of many advantages to using an intermediate DC link. <S> AC-DC and DC-AC converters are being made in large numbers, in large sizes, for DC links where long distance transmission is a factor. <S> This will lead to economies of scale. <S> They are more mature than matrix converters, so with the long planning involved in electrical infrastructure they are more likely to have been chosen. <S> Wind turbines tend to be connected in short hops to hubs before being connected to a single long distance transmission line (very long in the case of offshore). <S> It's easier to pool power at a nominal DC intermediate voltage, simplifying control. <S> It's easier to stay DC for the long transmission. <A> The reason for direct AC-AC conversion is the size and mass of the DC choke coil (or capacitor array). <S> You don't want to have that e.g. in a rubber-wheeled subway car or aircraft. <S> In iron-wheeled trains it depends, because more mass means better friction. <S> That doesn't apply to buildings. <S> You cannot save on valves (transistors or thyristors). <S> On the contrary, AC-AC converters tend to have more valves (though smaller ones) than AC-DC-AC converters. <S> The control concept is also much more complicated.
<A> AC-DC-AC conversion wins when you have several different AC sources to combine into a single AC output (or when you have the opposite). <S> Each asynchronous generator produces an AC supply that is rectified and boosted to a DC bus voltage; the bus voltage then feeds a grid-tied inverter. <A> There is also military 400 Hz power, which can result in a considerable size reduction. <S> In my particular case, I needed access to motors that worked within a vacuum chamber. <S> Equipment for NASA and military use was available that met our requirements, so we opted to use 400 Hz power. <S> I recognize that it is rather specialized, so it is probably not applicable to you.
One advantage of the AC-DC-AC conversion is that you can convert the frequency of the AC.
Relaxation oscillator, how to change the duty cycle I have a relaxation oscillator here. What I'm trying to do is to change the duty cycle from ca. 50% to 10%. What I thought of and tried was to change the R1 and C1 values so that the time the capacitor charges and discharges changes. But it stays at ca. 50%. What I am trying to find is a way to change the duty cycle to 10%. <Q> Charge the capacitor faster (or slower) than you discharge it. <S> For example, replace R1 with this: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Now, when discharging (Vout low), D1 conducts, putting R1 and R2 in parallel. <S> This will reduce the period, but also increase the duty cycle (by decreasing the low period). <S> If you wanted to reduce the duty cycle, reverse the diode. <A> You can use diodes to separate the current directions. <S> Adjustable duty cycle without changing the frequency very much can be achieved by replacing R1 with this: <S> A 47 kOhm potentiometer has been chosen because it's generally available. <S> Use a linear-taper version. <S> You have only ±5V operating voltages. <S> The voltage drop in a 1N4148 is a little unpredictable and its variance can be too large if you expect high-precision results. <S> In that case you should use MOSFET switches to charge and discharge C1 through resistors. <A> Instead of grounding R2, return it to a large negative voltage. <S> If you trust your -5V DC supply, use that. <S> You can split R2 into two resistors for this purpose: their ratio affects duty cycle. <S> Perhaps not the best way, because this method includes the op-amp saturation voltage.
Instead of a single resistor R1 use different resistance values for charging and discharging the capacitor.
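Because the comparator thresholds are symmetric, the duty cycle with diode-steered charge/discharge resistors reduces to a simple ratio (a sketch that ignores the diode drops the answers warn about):

```c
#include <assert.h>

/* High-time fraction when the capacitor charges through r_charge and
   discharges through r_discharge (symmetric thresholds assumed). */
double duty_cycle(double r_charge, double r_discharge)
{
    return r_charge / (r_charge + r_discharge);
}
```

A 1k charge path against a 9k discharge path lands near the requested 10%, while equal resistors recover the original 50%.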
Convert 24V to 12V @ 30mA I have a device, Neptune Apex, that has programmatically controlled 24VDC outputs. Using the 24V I would like to trigger a relay that is built into another component, that has the specs of: 3-12VDC, 3-30mA. Since the current draw is very minimal and will be constant, what would be the cheapest and easiest way to reduce the 24VDC to 12V to trigger the relay? I don't really want to have to purchase and introduce another component like a buck converter, so could I do this with a voltage divider or some other simple components and not burn anything up? Update: The 3-12VDC device is a Power Switch Tail II ( http://powerswitchtail.com ) and it does appear to use an opto-isolated relay. Does this help to nail down the best solution? <Q> Using the 24 V <S> I would like to trigger a relay that is built into another component, that has the specs of: 3 - 12 VDC, 3 - 30 mA. <S> This sounds awfully like a solid state relay input which is, basically, an infrared LED and series resistor. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> SSR with external current limiting resistor. <S> An infrared LED will drop about 1.4 V. <S> If the SSR draws 30 mA at 12 V then the internal resistor is about \$ \frac{V}{I} = \frac{12 - 1.4}{30m} = 353\ \Omega \$ <S> (where 'm' is shorthand for milli). <S> Addition of an external 1.8 kΩ resistor would limit the current to about 10 mA at 24 V. <S> This is well above the 3 mA minimum on-current. <A> The resistor should have <S> \$R = \frac{12 V}{30 mA} = 400\ \Omega\$ and be able to dissipate more than <S> \$P = 30 mA \times 12 V = 360\ mW\$; I would suggest a \$\frac{1}{2} W\$ one, for a safety margin. <S> You could also make a resistive divider such as this one. <S> It is still not well regulated, but it works for your application. <S> Beware: you will need resistor R1 to be 400 Ohms with 1 W dissipation and resistor R2 to be 400 Ohms with 1/2 W dissipation.
<S> simulate this circuit – <S> Schematic created using CircuitLab <S> Watch out: both resistive solutions are not well regulated and can lead to problems; prefer the Zener method. <A> Since the current draw is very minimal and will be constant, what would be the cheapest and easiest way to reduce the 24VDC to 12V to trigger the relay? <S> A resistor would be best: 400 Ω ought to get you 12 V at 30 mA from 24 V. The downside would be the heat burned up in the resistor (360 mW, so use a 0.5 W resistor). <S> Other than that, another awesome circuit might be an adjustable current source: Source: https://diyaudioprojects.com/Technical/Current-Regulator/
You can use a Zener and a transistor, such as in Explaination of high current zener transistor regulator circuit or use a resistor in series with your load, the relay. When the relay is not drawing current it outputs at most 12V, when it draws current it reduces the voltage but it remains within your specifications.
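The series-resistor arithmetic used in these answers, as a small helper (values from the question: drop 12 of the 24 V at 30 mA):

```c
#include <assert.h>

/* Series dropper: resistance needed to drop v_drop at current i_a,
   and the power that resistor must dissipate. */
double dropper_ohms(double v_drop, double i_a)  { return v_drop / i_a; }
double dropper_watts(double v_drop, double i_a) { return v_drop * i_a; }
```

Dropping 12 V at 30 mA needs 400 Ω dissipating 360 mW, hence the 1/2 W recommendation.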
Size a power supply for a DC motor? I'm wondering how to size a power supply for DC motors. I have a 70W 24V brushed DC motor [Linix 63zy24-70-A I picked up from a surplus store]. I intend to control this using a PWM/MOSFET. If 10 bucks is 10 bucks, 70 watts is 70 watts. However, thinking of the inrush current, I measured a resistance of 6Ω, so with 24V/6Ω = 4A, and therefore I might need a 96W supply. I haven't worked with DC motors enough to know if that makes sense (seems low, but maybe not). My bench supply has a current limiter that tops out at 600mA, so I can't test it. I am more familiar with sizing AC motors to the NEC, which involve an inrush current of 6 (or more) times the full load current, and any motors which can't be run at a locked rotor current are usually given an overload protection of 115-125% of the FLC. I would imagine that using an obscenely large DC power supply would work, but I'm hoping more experienced folk can point me in a more reasoned direction. As an example, I'm looking into something like this: https://www.digikey.com/product-detail/en/LRS-150-24/1866-3321-ND/7705015/?itemSeq=298448846 Edit: Thanks for all your answers. I wasn't realizing how much effect the controller can have on the power supply requirements. Below is what I am planning on making. The PWM signal will be from a 555, ATtiny or Arduino. The BJT is there to (hopefully) make that choice irrelevant. More application details: The motor is magnetically coupled to a pump, so high-torque is not reasonably possible. Flow rate is currently controlled manually using an ball valve on the output of the pump attached to a single phase shaded pole motor. I'm looking to change that. <Q> It depends what you want the motor to do. <S> If you are happy for a slow start, then you only need a supply that will limit without foldback at the running current. <S> You'll get full running torque, just not stall torque. <S> You choose. 
<A> For the sake of the power supply, the commutator and brushes, and the PWM switching devices, you should design the controller to limit the current to about 150% of the rated motor current. <S> The controller should measure the motor current and use the measured value as a feedback signal to keep the current continuously under control. <S> The speed command can be an armature voltage command. <S> The current command is the error signal obtained by comparing the speed command with the armature voltage. <A> The motor needs enough current to meet the inrush current spec. <S> If the only thing in the circuit is the power supply, then it needs to source enough current to meet the inrush current spec. <S> Inrush current can also be limited by series resistance or an NTC to limit the effect the motor has on the power supply. <A> If you measure 6Ω, then the stall and starting current is, as you calculated, 4A. <S> If you have a PWM controller, turning it down to 72% will reduce the current to 2.88 A, which will work fine with your 70W power supply.
If you want a rapid start, as from a low impedance supply, then you want a supply that will give you the full stall current.
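The stall-current and PWM reasoning above can be sketched as follows (a crude resistive model that ignores winding inductance and back-EMF):

```c
#include <assert.h>

/* Stall current from winding resistance, and the PWM duty that keeps
   the average current within a supply limit under this crude model. */
double stall_current_a(double v, double r_ohm) { return v / r_ohm; }
double max_duty(double i_limit_a, double i_stall_a)
{
    return i_limit_a / i_stall_a;
}
```

That reproduces the answer's numbers: 24 V into the measured 6 Ω stalls at 4 A, and a 72% duty holds the average to 2.88 A.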
How/can I plug a 3 wire 4-20mA current sink probe into a 2 wire 4-20mA loop powered controller? I have a 3 wire 4-20mA output probe simulated here , where during normal operation the 20v and 250 ohm resistor are variable chosen by the user. One of my users has a 2 wire 4-20mA meter, and I am unsure what the normal approach to 2 wire 4-20 is. Can i plug my probe into the user's controller? If so what wires go where? <Q> Figure 1. <S> (a) 2-wire and (b) <S> 3-wire. <S> Source: copied from my answer to Several Questions About Analog Input . <S> ... <S> but if I can have non isolated power provided externally can I tie my 3 wire signal line to the + terminal on the 2 wire controller and tie ground to the - terminal? <S> Yes. <S> Consider Figure 1b. <S> The 24 V supply is feeding the 3-wire transmitter while the grounds are connected. <S> The 24 V could be supplied from the receiver panel or could be supplied locally at the transmitter. <S> As an aside, the 250 Ω resistor shown in the receiver is the typical way to convert the mA signal to a 1 to 5 V signal for the internal ADC. <S> Figure 2. <S> OP's schematic. <S> You have placed the 250 Ω current sensing resistor in the high side of your device. <S> That means they can't share a common ground. <S> One of my users has a 2-wire 4 - 20 mA meter, and I am unsure what the normal approach to 2-wire 4 - 20 is. <S> Can i plug my probe into the user's controller? <S> If so what wires go where? <S> You can provided the user's meter is not grounded. <S> If it is you will have a short across the transistor and lower resistor - always give designations to components on schematics - resulting in 20 V DC applied across the 250 Ω resistor which will pass 80 mA. <S> Your configuration is unusual and this could give rise to problems on installation. <A> This TI chip is how a 2 wire input to 2 wire 4~20mA current loop is used so that ground noise does not degrade the result , yet is not galvanically isolated. 
<S> A 5V regulator inside the XTR117 is like the LM117/317 and needs a Vcc = 7.5 to 24V to drive the IC. <S> The chip is basically a current amplifier: <S> the series input R converts the input voltage to a small input current, then the chip amplifies this current with an output offset of 4mA and a gain of 100, <S> so Io = 100 × Iin, <S> up to 20mA full scale. <S> The input may also be offset with a bias resistor to +5Vreg or some other source. <S> There are also examples of how to protect it from reverse and over-voltage and RF interference. <S> This means you can choose Rin to have just about any scale of Vin {min,max} equate to {4:20}mA output. <S> e.g. Vin = 0:1V, or 0 to 3.3V or 0 to 5V or 2.8V to 4.2V ...etc. <S> This IC can work in most applications unless there is a real need for high voltage or galvanic CMRR optical isolation. <S> But don't reinvent the wheel; this IC is 0.3% accurate over temp and supply range.
Yes. Longer answer: the configuration will change depending on both the sender and receiver.
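Given the transfer function quoted in the XTR117 answer (Io = 4 mA offset plus a current gain of 100), choosing Rin is one division; the full-scale input voltages in the checks are examples, not requirements:

```c
#include <assert.h>

/* Input resistor so that vin_full_scale maps to 20 mA out, given
   Io = 4 mA + 100 * Iin: the input current must span 160 uA. */
double rin_ohms(double vin_full_scale)
{
    const double iin_full_scale = 0.016 / 100.0;  /* (20-4) mA / gain */
    return vin_full_scale / iin_full_scale;
}
```

A 0-5 V input wants about 31.25 kΩ and a 0-1 V input about 6.25 kΩ; this sketch assumes 0 V maps to 4 mA, and a bias resistor shifts that, as the answer notes.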
Full-bridge or half-bridge for a DC motor So I have a DC motor which draws around 16A at 12V. I want to have some room to play with in the top values. I need to control the spinning direction as well as the speed. I understand how half-bridges and full-bridges work. From that it seems to me that a half-bridge is not an option for a DC motor, as it can be used only for creating AC and not DC. On the other hand a full-bridge can be used for both AC and DC. Am I right? I am asking because many sites mention the differences between the two, but I have not found a site that would mention the difference when using it to control a DC motor. Thank you for your replies. <Q> With a half bridge you can't change the polarity of the motor and thus you can't change the direction. <S> If you don't need to change the direction a half bridge is fine, otherwise you need an H bridge. <A> I understand how half-bridges and full-bridges work. <S> From that it seems to me that half-bridge is not an option for a DC motor as it can be used only for creating AC and not DC. <S> [...] Am I right? <S> The bold marked statement is incorrect. <S> Considering a half bridge being supplied by a grounded DC voltage: <S> If the upper transistor in the half bridge is continuously conducting and the lower is never conducting, the half bridge voltage will be equal to the upper rail voltage, so, equal to the DC supply voltage. <S> If the lower transistor in the half bridge is continuously conducting and the upper is never conducting, the half bridge voltage will be equal to the lower rail voltage, which is ground, which is DC as well. <S> If the upper and lower transistors are conducting in turns, the half bridge voltage will be equal to a PWM signal and can be considered as an AC wave with a DC offset. <S> So, if the supply voltage is DC, the half bridge voltage <S> always has a DC component. <A> For bidirectional control of a DC motor you typically need a full bridge made up of four FETs or transistors.
<S> By setting either side high and the opposite low, you can make the motor spin in either direction. <S> For unidirectional control, you really only need a single FET or transistor. <S> You can leave one side of the motor permanently connected to the supply, and use the transistor to connect or disconnect the other. <S> Half bridges that are not packaged in pairs as full bridges are somewhat associated with AC motors, particularly because of the situation of 3-phase motors where you end up needing 3 half bridges. <S> In that case you have three wires coming out of the motor, each of which gets driven by its own half bridge so it can be connected to either side of a DC supply. <S> By rapidly pulse-width-modulating the half bridges, something approximating an AC sine wave can be produced, and if the three bridges do this in an appropriate phase relationship (0, 120, 240 degrees) <S> then you can make the motor spin just as if it were connected to a 3-phase AC dynamo - but with the added benefit that you can vary the frequency of the synthesized AC to control speed. <S> Finally, most bipolar stepper motors have two independent coils and require two full bridges.
Each lead of the motor is connected to a half bridge where the upper transistor can take it to the positive supply or the lower transistor can take it to the negative.
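The "AC wave with a DC offset" argument above can be put in numbers. Below is a small illustrative sketch (my own, not from the answers); `half_bridge_avg` and its values are assumptions for illustration only.

```python
def half_bridge_avg(v_supply, duty):
    """Average (DC) output of a half bridge that switches between
    v_supply (upper transistor on) and 0 V (lower transistor on).
    duty = fraction of the period the upper transistor conducts."""
    return v_supply * duty

# 12 V supply, as in the question
print(half_bridge_avg(12.0, 1.0))  # upper always on -> 12 V DC
print(half_bridge_avg(12.0, 0.0))  # lower always on -> 0 V (also DC)
print(half_bridge_avg(12.0, 0.5))  # 50 % PWM -> 6 V average plus AC ripple
```

Note that the average never changes sign: this is exactly why a single half bridge cannot reverse a DC motor, while it can still deliver a DC (average) voltage.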
Will a coil of wire acting as an antenna improve reception/transmittance? I thought that a coil of wire might be good as an antenna. If this is valid would using said antenna improve reception and transmittance if used? Would an antenna of this kind be stronger with more turns in its winding? <Q> I thought that a coil of wire might be good as an antenna. <S> There are a lot of antennas that could be described as "a coil of wire", and some of them are pretty good, but there's a lot more to antenna design than just randomly coiling wires. <S> If this is valid would using said antenna improve reception and transmittance if used? <S> As opposed to no antenna at all -- maybe. <S> If the size of the coil is closer to the wavelength of your signal than the antenna terminals on your radio, then probably. <S> Would an antenna of this kind be stronger with more turns in its winding? <S> Only if it didn't have enough turns to begin with. <S> If it already had enough, or too many, then no. <A> Short answer, probably not, no. <S> Radio frequency (RF) is a highly complex field of study, separate from traditional electronics; it would take a book to explain even the basics of RF. <S> In essence, the antenna must be tuned to the frequency of interest. <S> "Tuning" is both a mechanical and electrical property. <S> "Tuning" is only good for a narrow band of frequencies centered around the target frequency (unless multiple antennas are used, but that imposes other issues.) <S> The biggest effect on tuning is the length of the antenna - shorter antennas pick up higher frequencies. <S> At higher frequencies, all sorts of physical and electrical properties change, which greatly compounds their design. <S> So if interested in a specific frequency, it is far easier to just buy an antenna ready-made for that frequency. <S> If you want to research further, here is an RF Basics document from Maxim to get you started.
<A> There are several kinds of antennas which could be described as "coils of wire", with very different principles. <S> Small loop antennas are actual coils (often with ferrite rods), though they are wound differently from power inductors or actuator coils, as self-capacitance must be reduced. <S> These are tuned by a variable capacitor which is used to complete the LC-circuit with a given resonant frequency. <S> Unlike most other antennas, they are coupled to the magnetic component of the EM-field, and are only good for reception. <S> Full loop antennas look like coils, but operate as groups of folded dipoles. <S> They are tuned to a particular frequency by the perimeter of the loop, which must match one wavelength. <S> Helix antennas also look like coils or springs, although they are nothing more than quarter-wave whip monopoles folded into a helical shape to reduce the size. <S> They are (roughly) tuned to a frequency by their unfolded length, which must be between one quarter and one half of a wavelength.
As opposed to an optimal antenna -- no, pretty much by the definition of "optimal".
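To illustrate the "tuned by length" point made in the answers above, here is a rough sketch (my own, idealized): it uses the free-space wavelength only, with no velocity factor or matching considered, so treat the numbers as ballpark.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s

def quarter_wave_length_m(freq_hz):
    """Idealized quarter-wavelength for a whip/monopole at freq_hz."""
    return C / freq_hz / 4.0

# e.g. around 100 MHz (FM broadcast band) -> roughly 0.75 m of wire
print(round(quarter_wave_length_m(100e6), 2))
```

Coiling that same length into a helix (as in a "rubber duck" antenna) shrinks the physical size while keeping the unfolded length near this figure, which is why such coils work at all.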
Light load efficiency of DC-DC Switching converters I have been working with DC-DC converters theoretically. What I don't understand is why the efficiency is so low at light loads. How do the switching and the output get affected? Does light load mean that the IC still has a stable output voltage but the load current is low? Can someone help me understand the light load behaviour of DC-DC converters in simple terms? <Q> What I don't understand is why the efficiency is so low at light loads. <S> It's like a car sat at traffic lights <S> - you are not moving hence the fuel consumption in miles per gallon is zero or, power out over power in is zero or, zero efficiency. <S> A DC/DC converter always wastes a few milliwatts of power in just doing nothing except providing a fixed voltage at the output. <S> If there is no current supplied to a load then the power output is zero and the power efficiency has to be zero. <S> How do the switching and the output get affected? <S> To minimize power losses, some switching converters enter a special mode known as burst mode and the output switching is very infrequent, leading to reduced power losses. <S> The (near) equivalent for a car sat at traffic lights is turning the engine off then restarting the engine as the lights turn green. <A> The lower efficiency at low output currents is easily explained if you consider that the DCDC converter also consumes some quiescent current. <S> This is the current which the converter needs to operate. <S> Let's look at an example. <S> Suppose I have a DCDC converter which consumes 100uA when operating.
<S> So even when \$I_{load}\$ = 0 A, \$I_{in}\$ = 100 uA. <S> Now suppose we apply \$V_{in}\$ = 10 V and configure the converter to output \$V_{out}\$ = 5 V. <S> Now we load the converter with \$I_{load}\$ = 100 mA; this then means that \$I_{in}\$ = 50 mA + 100 uA = 50.1 mA. <S> That's an efficiency of \$P_{out} / P_{in}\$ = (5 V * 100 mA) / (10 V * 50.1 mA) = 99.8 %. <S> That's a very high efficiency! <S> I'm assuming that the quiescent current of 100 uA does not change for small or large load currents; that isn't a realistic assumption but it is easier for this explanation. <S> Now let's do the same calculation for \$I_{load}\$ = 100 uA, so a very light load: \$I_{in}\$ will now be 50 uA + 100 uA = 150 uA. <S> The efficiency will then be \$P_{out} / P_{in}\$ = (5 V * 100 uA) / (10 V * 150 uA) = 33.3 %. <S> That's a lot worse! <S> This is caused by the fact that the load current is not significantly larger than the quiescent current. <S> If the load current was significantly larger than the quiescent current then the quiescent current sort of becomes irrelevant (too small to make a difference). <A> Light-load efficiency is a hot topic these days. <S> It impacts ac-dc but also dc-dc converters. <S> There are several techniques available to improve efficiency in light-load operation: <S> Burst mode or skip cycle: it is the easiest and simplest method to implement. <S> This is a hysteretic behavior and can suffer audible noise issues if the burst occurs at a high peak current: the inductor or the transformer can chime, and sometimes passive components too. <S> There are known techniques to limit these effects. <S> Skip cycle usually implies uncontrolled output ripple. <S> Frequency foldback: rather than switching at a continuous 100-kHz frequency (or above) in all conditions, an internal variable signals that output power is getting lower.
<S> A voltage-controlled oscillator then regulates the converter by decreasing the operating frequency down to 20-30 kHz so as to remain outside of the audible range. <S> It can happen at a fixed or variable peak current setpoint. <S> Then, if the power still reduces further, the controller enters skip cycle operation. <S> Frequency foldback is nice and efficient. <S> It does not suffer from output ripple as with classical burst mode. <S> It is present on many ac-dc and dc-dc converters. <S> Constant on-time: in this mode, the controller drives the off-time duration while the on-time is fixed. <S> At high power, the part switches at high frequencies (small off-time) and then increases the off-time (the switching period expands) when the output power goes down. <S> As a result, it naturally ensures a low switching frequency in light load and efficiency is excellent. <S> Audible noise problems can appear and a minimum frequency limit has to be set. <S> Cool thing, you don't need slope compensation in a true on-time current-mode controller. <S> Pure hysteretic: if you use a hysteretic controller like the old and venerable MC34063 - or the µA78S40 from Signetics - but who remembers? :-) - then in light-load conditions the switching recurrence is extremely long and efficiency benefits from this mode. <S> The 34063 was known for having audible noise problems but more modern approaches use techniques to compensate the frequency spread. <S> All switching losses in semiconductors (turn-on and turn-off events, \$Q_{rr}\$ losses in diodes) and magnetics losses scale down with frequency. <S> So reducing it naturally lowers the loss budget in the overall efficiency calculation.
When the load gets lighter, the feedback voltage passes below a certain threshold and the continuous switching pattern is interrupted until the feedback voltage goes back above it.
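The worked example above (10 V in, 5 V out, 100 uA quiescent current) can be reproduced with a short sketch. This keeps the answer's simplification that the converter is otherwise lossless and its quiescent current is constant.

```python
def efficiency(v_in, v_out, i_load, i_q):
    """Efficiency of an otherwise-lossless converter whose only loss
    is a fixed quiescent current i_q drawn from the input."""
    i_in = (v_out * i_load) / v_in + i_q  # ideal input current + quiescent
    return (v_out * i_load) / (v_in * i_in)

print(efficiency(10, 5, 100e-3, 100e-6))  # heavy load: ~0.998
print(efficiency(10, 5, 100e-6, 100e-6))  # light load: ~0.333
```

The fixed 100 uA overhead is negligible next to a 100 mA load but dominates a 100 uA load, which is the whole light-load efficiency story in miniature.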
Embedded C - Most elegant way to insert a delay I'm working on a project involving a Cortex-M4 MCU (LPC4370). And I need to insert a delay while turning on the compiler's optimization. So far my workaround was to move up and down a digital output inside a for-loop: for (int i = 0; i < 50000; i++){ LPC_GPIO_PORT->B[DEBUGPIN_PORT][DEBUG_PIN1] = TRUE; LPC_GPIO_PORT->B[DEBUGPIN_PORT][DEBUG_PIN1] = FALSE;} But I wonder if there's a better way to fool GCC. <Q> The context of this inline no-dependency delay is missing here. <S> But I'm assuming you need a short delay during initialization or another part of the code where it is allowed to be blocking. <S> Your question shouldn't be how to fool GCC. <S> You should tell GCC what you want. <S> #pragma GCC push_options #pragma GCC optimize ("O0") for(uint32_t i=0; i<T; i++){ __NOP(); } #pragma GCC pop_options <S> From the top of my head, this loop will be approximately 5*T clocks. <S> ( source ) <S> Fair comment by Colin on another answer. <S> A NOP is not guaranteed to take cycles on an M4. <S> If you want to slow things down, perhaps ISB (flush pipeline) is a better option. <S> See the Generic User Guide. <A> Use a timer if you have one available. <S> The SysTick is very simple to configure, with documentation in the Cortex-M4 User Guide (or M0 if you're on the M0 part). <S> Increment a number in its interrupt, and in your delay function you can block until the number has incremented a certain number of steps. <S> Your part contains many timers if the SysTick is already in use, and the principle remains the same. <S> If using a different timer you could configure it as a counter, and just look at its count register to avoid having an interrupt. <S> nop doesn't have to take time, the processor can remove them from its pipeline without executing them, but the compiler should still generate the loop. <A> Not to detract from other answers here, but exactly what length delay do you need?
<S> Some datasheets mention nanoseconds; others microseconds; and still others milliseconds. <S> Nanosecond delays are usually best served by adding "time-wasting" instructions. <S> Indeed, sometimes the very speed of the microcontroller means that the delay has been satisfied between the "set the pin high, then set the pin low" instructions that you show. <S> Otherwise, one or more NOP , JMP -to-next-instruction, or other time-wasting instructions are sufficient. <S> Short microsecond delays could be done by a for loop (depending on CPU rate), but longer ones may warrant waiting on an actual timer; millisecond delays are usually best served by doing something else completely while waiting for the process to complete, then going back to ensure that it has actually been completed before continuing. <S> In short, it all depends on the peripheral. <A> The best way is to use on-chip timers. <S> SysTick, RTC or peripheral timers. <S> These have the advantage that the timing is precise, deterministic and can be easily adapted if the CPU clock speed is changed. <S> Optionally, you can even let the CPU sleep and use a wake-up interrupt. <S> Dirty "busy-delay" loops, on the other hand, are rarely accurate and come with various problems such as "tight coupling" to a specific CPU instruction set and clock. <S> Some things of note: <S> Toggling a GPIO pin repeatedly is a bad idea since this will draw current needlessly, and potentially also cause EMC issues if the pin is connected to traces. <S> Using NOP instructions might not work. <S> Many architectures (like Cortex-M, iirc) are free to skip NOP at the CPU level and actually not execute them. <S> If you insist on generating a dirty busy-loop, then it is sufficient to just volatile-qualify the loop iterator. <S> For example: void dirty_delay (void){ for(volatile uint32_t i=0; i<50000u; i++) ;} This is guaranteed to generate various crap code.
<S> For example ARM gcc -O3 -ffreestanding gives: dirty_delay: mov r3, #0 sub sp, sp, #8 str r3, [sp, #4] ldr r3, [sp, #4] ldr r2, .L7 cmp r3, r2 bhi .L1 .L3: ldr r3, [sp, #4] add r3, r3, #1 str r3, [sp, #4] ldr r3, [sp, #4] cmp r3, r2 bls .L3 .L1: add sp, sp, #8 bx lr .L7: .word 49999 <S> From there on you can in theory calculate how many ticks each instruction takes and change the magic number 50000 accordingly. <S> Pipelining, branch prediction etc. will mean that the code might execute faster than just the sum of the clock cycles though. <S> Since the compiler decided to involve the stack, data caching could also play a part. <S> My whole point here is that accurately calculating how much time this code will actually take is difficult. <S> Trial & error benchmarking with a scope is probably a more sensible idea than attempting theoretical calculations.
If you really want to do it in software, then you can put asm("nop"); inside your loop.
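To put numbers on "calculate how many ticks each instruction takes and change the magic number accordingly" from the answer above, here is a back-of-the-envelope sketch. The 5-cycles-per-iteration figure is an assumption (the first answer's estimate); as the answers note, pipelining and caching will shift it, so the result is a starting point to calibrate with a scope, not a guarantee.

```python
def loop_iterations(delay_us, cpu_mhz, cycles_per_iter=5):
    """Rough busy-loop iteration count for a desired delay.
    cycles_per_iter is a guess and must be verified on real hardware."""
    total_cycles = delay_us * cpu_mhz  # cpu_mhz = cycles per microsecond
    return total_cycles // cycles_per_iter

# LPC4370 at 204 MHz, 1 ms delay -> 40800 iterations before calibration
print(loop_iterations(1000, 204))
```

This also makes the "tight coupling" complaint concrete: change the CPU clock and the magic number is wrong, which is exactly why the timer-based answers are preferred.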
PCB design using code instead of clicking a mouse? I'm just getting into PCB design and I generally have trouble learning how to use GUIs with menus and mouse clicking. Do some engineers use something more like code to generate PCBs? <Q> Altium has a scripting language, several different languages, in fact. <S> In some cases it may make sense to use algorithms, for example to create repetitive layouts or parts placed precisely in certain positions. <S> For example, I've seen it used to place parts (LEDs) in a circular pattern, though with their introduction of polar coordinate snap grids that's much less likely to be worth the hassles. <S> In general, code is poorly suited for PCB layout purposes, particularly so for the routing task. <A> In EAGLE CAD, everything you can do with a mouse you can do from the command line. <S> It also has a user language that you can essentially program in. <A> I use code to generate the netlist for my PCBs. <S> (It's far, far quicker to write a for-loop to build an N-bit multiplexer than to laboriously draw all that stuff in a schematic.) <S> Fortunately, KiCAD netlist files are just text, in a format that's undocumented but reasonably easy to reverse-engineer. <S> I wrote a small C# library that lets me type in what I want connected to what (in terms of reusable parametric blocks of circuitry), and it automatically spits out a netlist. <S> Now I can just import that straight into KiCAD and start building the PCB, without having to waste an hour or so uselessly drawing a schematic for it. <S> (The library even does some very basic checks to ensure my instructions aren't completely bogus, although I suspect KiCAD itself would do that job better.) <S> KiCAD PCB files, on the other hand, are also text, but appear to be far too complex to generate programmatically.
<S> Which is a shame, because KiCAD defaults to dumping all the components directly on top of each other, requiring me to spend 20+ minutes tediously separating them out again so I can see what the hell I'm doing. <S> (KiCAD has a nasty habit of trying to move the text rather than the component it's attached to, presumably just to make PCBs harder to design?) <S> Perhaps some day I'll manage to also automate the initial component layout; I suspect, as others have said, that the actual "PCB design" bit will always involve the GUI though. <A> The code looks like this: G90* 1 G70* 2 G54D10* 3 G01X0Y0D02* 4 X450Y330D01* 5 X455Y300D03* 6 G54D11* 7 Y250D03* 8 Y200D03* 9 Y150D03* 10 X0Y0D02* 11 M02* 12 <S> The line numbers at far right are not part of the file. <S> Examining this file without any prior knowledge of Gerber one would correctly deduce that each line represents a particular machine command and that the asterisk (*) is the end-of-command character. <S> There seem to be different kinds of commands: instructions beginning with G, D, M and x,y coordinate data. <S> Source: https://www.artwork.com/gerber/appl2.htm <S> Here is a spec that shows the different commands <A> If you are using Kicad, see some of the videos from the talks at the first KiCon (2019) on Youtube. <S> A couple of the talks were explicitly about the presenters writing tools to generate the parts, one also about the connections. <S> At least one used Python, which is baked in as a scripting language.
We've used algorithms to create shapes (think antennas and that kind of thing) directly in .dxf format which can then be imported into a copper layer. I wouldn't recommend it, but if you really desire you can write PCB artwork yourself, without any PCB program.
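Building on the Gerber command breakdown above, here is a minimal sketch of emitting the same legacy-style draw/flash commands from code. The coordinate formatting is deliberately simplified (real Gerber files declare units and a coordinate format first), so this mirrors only the example listing.

```python
def gerber_move(x, y):
    """D02: move with the 'light' off (reposition without drawing)."""
    return f"X{x}Y{y}D02*"

def gerber_draw(x, y):
    """D01: draw with the 'light' on, from the current position to x, y."""
    return f"X{x}Y{y}D01*"

def gerber_flash(x, y):
    """D03: flash the currently selected aperture at x, y (e.g. a pad)."""
    return f"X{x}Y{y}D03*"

# Reproduce a few lines of the example file
print(gerber_move(0, 0))      # X0Y0D02*
print(gerber_draw(450, 330))  # X450Y330D01*
print(gerber_flash(455, 300)) # X455Y300D03*
```

Generating pad flashes in a loop like this is essentially what the antenna/DXF approach in the last answer amounts to: algorithmic geometry, serialized into a format the PCB tool can import.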
ULQ2003 not driving a Relay properly I am driving a 9V DC (5 PINS) Relay from a darlington array ULQ2003. The ULQ2003 COM pin is connected to the 9V power supply while the Relay coil is connected to the +12V supply. The grounds of both are connected together. When 'OUT1' is LOW then the Relay is operated and its 'NO' contact is connected with the 'C' pin. When 'OUT1' is open-circuit (floating) the Relay remains latched and its 'NO' contact remains connected with the 'C' pin. If I check the voltage at the 'AB' points then I get a positive voltage there. V(AB) = 2.3V --> that is, VA is 2.3V higher than VB. My understanding is that after the Relay power is turned off a reverse-polarity voltage will appear across its coil, and for that we put a reverse diode in parallel with the Relay coil. I cannot understand why the Relay is not able to de-energize completely when its circuit is broken. <Q> The issue here is that you have COM connected to 9V. <S> Because of this, when you try to turn the output of the ULQ2003 off, effectively you will still have 3V going through the coil of the relay, which is enough to keep it energised <A> Figure 1. <S> ULN2003A internal schematic. <S> The ULN2003 has internal snubber diodes. <S> These are connecting your relay from +12 V to +9 V when the outputs are off. <S> If the hold-on voltage is < 3 V then, once the relay is energised, it will never turn off. <S> You can omit the diodes on your relays then. <A> Current flows from 12V to 9V through the COM pin. <S> Hence, the relay is always in an energised state. <S> To prevent this, don't apply a voltage to the COM pin, or provide 12V to the COM pin instead. <S> This will work.
Instead connect pin 9, COMMON, to your +12 V supply. If you look at the internals of the ULQ2003, you will see that between its output and COM there is a diode, the same diode that you have added in parallel to the coil of the relay.
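The failure mechanism both answers describe can be put in numbers. A sketch (illustrative; the internal diode's forward drop is ignored here for simplicity, and the 3 V hold figure is the one quoted in the answers, not a datasheet value):

```python
def coil_voltage_when_off(v_relay_supply, v_com):
    """Voltage left across the relay coil when the driver output is off:
    the internal clamp diode ties the coil's low side to the COM rail."""
    return v_relay_supply - v_com

v_residual = coil_voltage_when_off(12.0, 9.0)
print(v_residual)           # 3.0 V left across the coil
print(v_residual >= 3.0)    # True -> at or above the quoted hold-on voltage
```

With COM moved to the same +12 V rail as the coil, `coil_voltage_when_off(12.0, 12.0)` is 0 V and the relay can drop out.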
Real thing or just misinterpretation? Rotating batteries to make TV remote work When a TV remote isn't working, I rotate the batteries in the remote in their place and it begins to work. I don't see any kind of deposit between the batteries and the contacts. Rotation of a battery doesn't seem to be a meaningful factor by itself that can change the energy output. So, how? <Q> The contact areas (of the battery and battery holder) are probably bad. <S> By rotating them, you'll grind the contacts a bit and the contact points will change all the time. <S> You probably turn the batteries (repeat the described process) until the contact areas are connected well enough so the remote is working again. <S> Bad contacts may not be always clearly visible. <S> Maybe cleaning the contacts with a glass fiber pen/brush will improve it. <A> When the batteries are almost too low to operate the device anything that improves conductivity will help; shaking the remote, twisting the batteries, warming them, and even pressing the buttons extra hard can get the remote to function when the electricity supply is marginal. <S> The nickel plating on the battery contacts forms a thin invisible oxide layer, but it's not usually thick enough to stop the remote from working; twisting the batteries breaks that layer and slightly improves the connection. <S> Warming the batteries enhances the strength of the chemical reaction inside the batteries, and that helps too. <S> What you should do is replace the batteries; that will work for longer. <A> I do this all the time; it doesn't matter what order the batteries are seated in, as long as they are disconnected and then reconnected. <S> It also works in devices with one battery. <S> This is my guess, but I do know that batteries develop polarization as they are discharged. <S> Even with the small current, it polarizes the battery and reduces the voltage.
<S> You can see how the voltage changes with the amount of current drawn from a battery; if there is no current, then the battery goes back to its nominal voltage, and the voltage increases. <S> Source: https://www.researchgate.net/publication/284154682_Modeling_Li-ion_battery_capacity_depletion_in_a_particle_filtering_framework/figures?lo=1111 <S> It's possible that the polarization effect increases with depth of discharge, but I couldn't find any experimental evidence that would suggest this; it might be fun to run some experiments on polarization over time.
If the battery is removed, the voltage recovers and resets the battery somewhat (but this is only temporary because the battery is mostly discharged and at the end of its life). In a device like a remote, there is a small leakage current (most likely below the uA range).
Wire capacity - Total power or just current? So I've gotten myself into a healthy, educational Facebook discussion. Link is here if you're a member of that group. When I thought about it, everyone defines wire capacity by current only. The voltage only comes in when talking about wire insulation. However, I've always thought about a wire's capacity by total power transmitted, not just the current. Assuming arcs and corona discharges aren't a problem, could I transmit say 10 kW via a #40 AWG wire at 10mA, 1MV DC? The ampacity isn't violated, but that doesn't look right for a wire that's around the width of a hair. I'm aware of the math behind it (5th year electronics engineering student) and I can judge what voltage/current I need for X watts versus safety/other constraints, but wires being rated by current only hasn't bothered me up until this point. <Q> The wire will heat up because of its resistance and the current passing through it, and this also causes a voltage drop over the wire. <S> The potential of the wire itself is irrelevant, assuming the insulation does not break down. <S> 10mA flowing in a wire still heats it up the same amount whether it comes from a 1.5V battery or a 1MV generator. <A> However, I've always thought about a wire's capacity by total power transmitted, not just the current. <S> That is indeed your prerogative. <S> You have every right as an independent human being to think how you like. <S> However, it's not useful to the rest of the engineering community, because the wire thermal limitation in amps is independent of the system voltage, and multiplication is easier to do than division <S> Of course, people want to shift power. <S> However, voltage for any given system tends to be standardised, at 120, or 240, or 48v, or 33kV. Working in a particular distribution system, if you want twice as much power, you need twice the current rating. <S> Easy peasy. <S> Once you've bought your reel of wire, there's nothing you can do about any of those terms.
<S> A wire tends to have a minimum insulation thickness given by robustness, regardless of how low the rated voltage is. <S> Wires have a maximum current determined by their heating and cooling per unit length. <S> If you string a single wire, or run a bundle of wires in an insulated conduit, the thermal resistance per unit length will be radically different. <S> That's why the tables give different entries for single or bundled wires, and often different ratings depending on ambient temperature. <S> Imagine how it would be if, as well as multiple columns for single versus bundled wires, the tables were of power, and had different columns for 12v, 120v, 240v, and then didn't have a column for your voltage. <A> Sure, with a higher voltage every wire can transmit more power at the rated current than with a lower voltage. <S> But how does this matter? <S> If you want to design a system and need some wiring there are probably two cases: You know your voltage and current already, because they are given parameters: Just pick a wire that can handle the necessary current (or to be more precise: a wire that stays within the limits for voltage drop / power dissipation). <S> You want to transmit a specific maximal energy: <S> You would want to first choose a proper voltage for your design, one that minimizes design costs (taking into account costs for voltage rating and cost for cable diameter). <S> You still would not want to pick a cable based on some calculated max. power, but on the current you calculated with your design voltage. <S> Imagine having a power rating for a cable. <S> It would give you no information about the current handling capabilities. <S> You would have to also give the voltage at which that power was calculated. <S> It is just easier to rate a cable by maximum current and also give a voltage rating for the insulation. <S> That way, everybody can determine by themselves what power they are able to transmit via this wire.
<A> The amount of heat dissipated when you pass a current through a wire is given by I²R, where R is the resistance of the wire. <S> Notice that the heating goes up with the square of the current, and is completely unrelated to the supply voltage. <S> Suppose I have a piece of wire that the manufacturer has rated to 20A, with insulation good for 600V. <S> 600x20 = 12kW, so the manufacturer could claim that it is 12kW wire. <S> Now suppose I use that wire on a 220V supply, and try to use it to run a 12kW industrial heater. <S> The current is now 12000/220 = 54.5A. <S> That's about 2.7 times the current that the wire was rated for, and so about 7.4 times the heat dissipated in the wire. <S> It's quite likely that the wire will overheat, and the insulation will melt. <S> It's much better to say that it is 20A wire, insulated to 600V, rather than give a wattage figure that is highly misleading. <A> However, I've always thought about a wire's capacity by total power transmitted, not just the current. <S> You should think the other way around: A wire should not be selected by total power transmitted, but by the total power NOT transmitted. <S> The total power NOT transmitted is defined by the (undesired) power dissipation in the wire ( \$I^2R\$ ). <A> Assuming arcs and corona discharges aren't a problem, could I transmit say 10 kW via a #40 AWG wire at 10mA, 1MV DC? <S> The ampacity isn't violated, but that doesn't look right for wire that's around the width of a hair. <S> That's correct, you could transfer a high power through a thin wire and not violate the current rating of that wire. <S> However, the caveat you make " Assuming arcs and corona discharges aren't a problem " is putting the finger on the weak spot. <S> In order to make that happen, sufficient isolation is needed. <S> One part of that isolation is spacing such that arcing etc. does not happen.
<S> So to transfer high power through a thin wire you'd need very thick and large isolation between the two thin wires.
Wires have a maximum voltage determined by their breakdown, which is a function of insulation thickness and quality, and wire diameter.
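The 220 V heater example above can be checked with a short sketch: because heating is I²R, the relative heating depends only on the ratio of actual to rated current, squared.

```python
def relative_heating(i_actual, i_rated):
    """Heat dissipated in the wire relative to the rated-current case
    (I^2 R heating, so the ratio of currents squared)."""
    return (i_actual / i_rated) ** 2

i = 12000 / 220              # 12 kW heater on a 220 V supply
print(round(i, 1))           # 54.5 A through "20 A" wire
print(round(relative_heating(i, 20), 1))  # about 7.4x the rated heating
```

Run the same "12 kW" load at 600 V instead and the current is 20 A, so `relative_heating` returns 1.0: same power, totally different outcome for the wire, which is the whole argument for amp ratings.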
Why have both: BJT and FET transistors on IC output? This is the structure of the FAN3100 gate driver IC: (taken from its datasheet ) As you can see - there are two output switches: CMOS and BJT. Why did they put in both? <Q> Paragraph 2 of the description says: FAN3100 drivers incorporate MillerDrive TM architecture for the final output stage. <S> This bipolar-MOSFET combination provides high peak current during the Miller plateau stage of the MOSFET turn-on / turn-off process to minimize switching loss, while providing rail-to-rail voltage swing and reverse current capability. <S> At the bottom of page 14 in the section "MillerDrive Gate Drive Technology" it goes on to explain: <S> The purpose of the MillerDrive architecture is to speed up switching by providing the highest current during the Miller plateau region when the gate-drain capacitance of the MOSFET is being charged or discharged as part of the turn-on / turn-off process. <S> For applications that have zero voltage switching during the MOSFET turn-on or turn-off interval the driver supplies high peak current for fast switching even though the Miller plateau is not present. <S> This situation often occurs in synchronous rectifier applications because the body diode is generally conducting before the MOSFET is switched on. <S> The answer to " Who can tell me about Miller Plateau? " explains it thus: <S> When you look at the datasheet for a MOSFET, in the gate charge characteristic you will see a flat, horizontal portion. <S> That is the so-called Miller plateau. <S> When the device switches, the gate voltage is actually clamped to the plateau voltage and stays there until sufficient charge has been added/removed for the device to switch. <S> It is useful in estimating the driving requirements, because it tells you the voltage of the plateau and the required charge to switch the device. <S> Thus, you can calculate the actual gate drive resistor, for a given switching time.
<S> The BJTs are able to get the output moving while the MOSFETs are ramping up. <S> The MOSFETs can then provide the rail-to-rail voltage swing. <A> The CMOS and BJT output stages are combined to form one stage; the manufacturer calls this a "MillerDrive(tm)". <S> Why they do this is explained in the datasheet: <S> My guess is that they want to achieve a certain (output drive) performance that cannot be achieved by only using CMOS transistors or only using the NPNs with the manufacturing process that they're using for this chip. <S> The CMOS part helps pulling the output to GND and VDD; the NPNs cannot do that so well as there will always be a \$V_{CE,sat}\$ at the GND side and a \$V_{BE}\$ at the VDD side. <S> The NPNs are very likely able to deliver more current and will switch faster. <S> Such a process might be more expensive though. <A> Notice how the top NPN can only make the output reach VDD-0.7 V, I assume it is the job of the MOSFET to take care of the last 0.7 V. <S> It looks as if the BJTs are doing most of the grunt work and the MOSFETs are taking care of making the output reach VDD and a strong GND. <S> I could be wrong though.
This might be a consequence of the manufacturing process they're using as it is possible that in a different process the MOSFETs are so much better that similar performance could be achieved using CMOS only.
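The closing remark above, "you can calculate the actual gate drive resistor, for a given switching time", can be sketched as follows. All the numbers here are made-up illustrative values; the plateau voltage and gate-drain charge \$Q_{gd}\$ would come from the MOSFET's datasheet gate charge characteristic.

```python
def gate_resistor(v_drive, v_plateau, q_gd, t_switch):
    """During the Miller plateau the gate sits near v_plateau, so the
    available gate current is (v_drive - v_plateau) / Rg.  Choose Rg so
    that the gate-drain charge q_gd is moved within t_switch."""
    i_gate = q_gd / t_switch            # required average gate current
    return (v_drive - v_plateau) / i_gate

# e.g. 10 V drive, 4.5 V plateau, 20 nC of Qgd, 50 ns target switch time
print(gate_resistor(10.0, 4.5, 20e-9, 50e-9))  # 13.75 ohms
```

The required gate current (0.4 A here) is exactly the kind of Miller-plateau peak current the BJT/MOSFET combination in the FAN3100 is designed to deliver.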
Calculating Junction Temp of Switching Regulator I was looking at TI WebBench to find a regulator; my question is whether my calculation is correct, and if it is, why we have a different value on WebBench. The IC is TPS565208 P = (Vin - Vout)*Iout => (12-5)*4 = 28 For a switching regulator P = (Vin - Vout)*Iout*(1 - efficiency) => 28*(1 - 0.90) = 2.8 W Thermal junction = 95.9C/W x 2.8W = 268.52C But let's take a look at WebBench, which says 80.42C Link to WebBench (I don't know if it works) <Q> Your loss estimate \$(V_{in}-V_{out})*I_{out}\$ is for a linear regulator, not for a switcher, and even then it doesn't account for \$I_{q}\$ , the regulator internal current to power its own stuff. <S> The correct gross loss calculation is \$I_{in} V_{in} - I_{out} V_{out}\$ . <S> This is true for either a linear or a switcher, as it will capture \$I_{q}\$ in the gross figure. <S> Working from there, the losses break down between the chip itself and components external to the chip. <S> A good portion of the loss is IR drop in the inductor and ESR loss in the filter capacitor. <S> Conveniently, WebBench has tabs with separate power dissipation figures for each major component. <S> They use the IC's figure to calculate \$T_{j}\$ , based on their estimate of overall thermal resistance to ambient ( \$θ_{ja}\$ ) based on their reference layout. <S> Ultimately, the sum of all the power dissipation figures is the gross loss, which should match the figure given by \$I_{in} V_{in} - I_{out} V_{out}\$ . <A> No, that's not right. <S> For a switching regulator, for a rough cut number, only the power in multiplied by (1 - efficiency) should be used: \$P_{dissipated} = V_{in}*I_{in}*(1-{efficiency})\$ <S> I think the confusion above is because you're using the equation for a regular non-switching regulator. <S> The other problem is the efficiency is for the TPS565208 package and the inductor, so you'll need to find the power dissipated in both.
<S> There should be a number telling you the power dissipation in the part, or I'd just believe the 80.4C given to you by WEBENCH (they don't define the ambient temperature, which is important for knowing the junction temperature). <S> The problem is TI doesn't define all of their numbers very well, so it can be confusing to know how they came up with them. <S> TI is cheap, and you get what you pay for. <S> This would tell you if the numbers were close. <S> If we back-calculate from the junction temperature, 80.4C / (60C/W) gives 1.34W for the part from the WEBENCH numbers. <S> It makes me wonder what conditions they used for testing, or what ambient. <S> The problem is not only do you need to get the heat out of the part, you need to get it out of the board and into the air. <S> EDIT: Apparently TI thinks this package can handle 7W: if you take the max operating temp, 125C, and divide it by the junction-to-board characterization parameter (16.4C/W), you get about 7W for the part. <S> This seems like way too much power; maybe this part always runs very hot when you run current through it. <S> Be sure your PCB thermal design is very good! <S> To really verify this, I'd use TI's spice model or simulate it in a spice package of your choice. <A> The 95.9 C/W is the junction-to-ambient value. <S> The majority of the heat will be dissipated through the leads to the copper on the board, so the 16.4 C/W plus whatever the effective thermal resistance of the board is will determine the temperature rise. <S> You and I don't know what that value is for the layout TI is using, but it's bound to produce a far lower junction temperature than assuming the dissipation is all directly to ambient.
The problem that I have with this datasheet is that the max power for the package is not specified.
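The gross-loss arithmetic above can be run numerically as a sanity check. The 50 % split of the loss between the IC and external components is an illustrative assumption (WEBENCH reports the actual per-component breakdown):

```python
# Rough junction-temperature estimate for the TPS565208 example.
v_in, v_out, i_out = 12.0, 5.0, 4.0    # volts, volts, amps (from the question)
efficiency = 0.90                      # WEBENCH efficiency estimate

p_out = v_out * i_out                  # 20 W delivered to the load
p_in = p_out / efficiency              # input power implied by the efficiency
p_loss_gross = p_in - p_out            # ~2.22 W total: chip + inductor + caps

# The naive (V_in - V_out) * I_out figure is a *linear* regulator loss:
p_linear = (v_in - v_out) * i_out      # 28 W -- clearly not what a buck dissipates

# Only the fraction of the loss inside the IC heats the junction.
# Assume (hypothetically) half of the gross loss is in the IC itself.
p_ic = 0.5 * p_loss_gross
theta_ja = 95.9                        # deg C/W, junction-to-ambient (datasheet)
t_ambient = 25.0
t_junction = t_ambient + theta_ja * p_ic
print(f"gross loss {p_loss_gross:.2f} W, Tj ~ {t_junction:.1f} C")
```

Even with the whole 2.2 W pessimistically assigned to the IC, the rise is far below the 268 °C figure obtained by multiplying θja by the 2.8 W linear-style estimate.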
Why wasn't interlaced CRT scanning done back and forth? I was studying the scanning of old CRT screens and the interlacing strategy for video, and I started wondering something. The raster scan process went top to bottom on odd lines, then back to the top to raster the even lines. There is therefore a vertical blanking interval to send the electron beam back to the top position. Why wasn't the initial design of CRT vertical scan made so that the vertical scan happened top to bottom on odd lines, and bottom to top on even lines, thus eliminating the need for vertical blanking? It would of course require the signal of the even lines to be reversed. <Q> CRT interlacing was done to get the best balance between phosphor decay rate and refresh rate. <S> Each phosphor dot has, in effect, an intensity half-life which determines its decay rate. <S> Without interlacing the half-life would have to be on the order of 1/25 second (Europe), and this would have noticeable flicker, as this is on the edge of human flicker detection. <S> In addition, the longer decay rate required would cause blur on picture motion. <S> By interlacing the way we do, each zone of the screen is updated every 1/50 second. <S> This reduces the flicker and allows a shorter-decay phosphor to be used, which in turn reduces the motion blur. <S> To do as you suggest would result in a picture washing up and down the screen, alternating high- and low-intensity imaging at the top and bottom with reasonably even intensity in the middle. <S> Non-interlaced would probably be better and less trouble. <S> Wikipedia's Interlaced Video states: <S> Interlaced video (also known as interlaced scan) is a technique for doubling the perceived frame rate of a video display without consuming extra bandwidth. <S> The interlaced signal contains two fields of a video frame captured at two different times. <S> This enhances motion perception to the viewer, and reduces flicker by taking advantage of the phi phenomenon.
<S> The guys got it right when they interlaced it as they did. <S> Bonus: <S> See How a TV Works in Slow Motion by the Slow Mo Guys for some super analysis. <A> It's worse than Transistor suggests ... the scanning waveform was generated by simple analog circuitry, and was a segment of an exponential waveform, not a perfectly linear sawtooth. <S> So it would sag in the middle. <S> On a good TV it was reasonably linear, good enough for the errors not to be obvious. <S> However, if the retrace also carried picture information, you would see double images, because the sag would place the central line below the centre while scanning down, but above it while scanning up. <S> It would be rather obvious that the two copies weren't in the same place; you would see double images in the central part of the screen. <S> TV had to work with imperfect circuitry. <S> When colour came along, even under the ideal conditions of identical scanning circuits in the same direction, it was a big enough headache getting all the colours to line up correctly. <S> Just mention a "convergence panel" to an old timer and watch him shudder. <S> It was a circuit board packed full of interacting adjustments... <A> Interesting, but it would complicate the electronics on both the camera and TV side, and only lines at the center of the screen would be refreshed at an even period, with lines near the top and bottom refreshed unevenly. <S> It just is simpler and looks better this way. <A> Motion pictures used 24 frames/second but did not have the decay issues: instead a mechanism moved to the next frame. <S> Even then, 24Hz would have been a bit flickery, so the projectors interrupted the light not just when switching frames but one additional time in the middle, making the flicker frequency 48Hz. <S> TV mimicked the motion picture in transferring full image data at a rate of 24Hz (rounded up to half the frequency of the AC power network) while "flickering" at double the rate.
<S> TV sets did not have any kind of storage (there is a delay line in colour TVs, but those came much later), so a set could not just repeat the same image without it getting broadcast again (like 100Hz TV sets do now). <S> Instead the data needed to be sent a second time, and it made better sense to use that bandwidth to actually send image lines interlaced, for a better match of horizontal and vertical resolution. <S> It's actually a trick of timing for vertical and horizontal blanking that creates the interlaced display: the TV set electronics are not particularly catering to it (and could equally well display non-interlaced); it's a consequence of how the vertical and horizontal blanking pulses are interspersed. <A> You would have ended up with significant flicker, for one, since you wouldn't fill the frame at the same rate over the entire screen. <S> There was PAL, SECAM, and variants of NTSC and PAL. <S> None of these would go top to bottom, then bottom to top. <S> If you did this, you'd draw the entire bottom and top of the screen in quick succession, and then it would be nearly 1/30th of a second before they were refreshed again. <S> The center of the screen would be refreshed every 1/60th of a second. <S> You'd expect to see the worst flicker at the top and bottom of the frame as a result, and the least in the center. <S> Fields in the display didn't only contain location information, but time information as well. <S> Interlace was basically a hack to fit in more information without excessive bandwidth. <S> You have to remember this standard was done in the mid 1950s. <S> Pretty impressive for its time, and they did a remarkable job, which is now all absolutely obsolete. <A> The rapid retrace (from the yoke current reversal) produced the high voltage (L di/dt) needed for the CRT. <S> L is the horizontal deflection coil inductance.
CRTs have phosphors that decay in intensity comparatively fast in order to support the display of moving images (oscilloscope tubes and text terminals tended to use considerably slower phosphors).
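The 625-line / 50 Hz figures of the European system make the bandwidth argument above concrete; a minimal sketch:

```python
# Field/frame timing for the 625-line / 50 Hz European interlaced system.
lines_total = 625
frame_rate = 25.0                     # complete frames per second
field_rate = 2 * frame_rate           # odd + even fields -> 50 Hz area refresh

lines_per_field = lines_total / 2     # 312.5 -- the half line that offsets fields
line_rate = lines_total * frame_rate  # 15625 lines/s horizontal scan rate

# Interlacing doubles the area refresh rate (25 -> 50 Hz, past the flicker
# threshold) while the line rate, and hence video bandwidth, stays unchanged.
print(f"field rate {field_rate:.0f} Hz, {lines_per_field} lines/field, "
      f"line rate {line_rate/1e3:.3f} kHz")
```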
Power planes in PCB I'm designing my first PCB (two layer); the project is for an amplifier based on the OPA2134. The design/amplifier requires +12V, GND and -12V. The power for the amplifier comes from a USB-C -> DC-DC converter. I'm having trouble defining the plane(s). Should there only be a ground plane in the amplifier zone, with +12V and -12V routed with wider tracks? Or should there be a +12V power plane and a -12V power plane, with the ground connections routed with wider tracks? <Q> I'm also a newbie at designing PCBs, but the trace width depends functionally on the amount of current going through the traces. <S> So in principle you should calculate for the +12V, -12V and GND traces what the minimum track width needs to be. <S> Since your amplifier is powered from USB (which is 0.5A maximum, I believe), I don't think you will need thick trace widths. <S> Some people use wider track widths for GND, VCC and other power traces just for clarity, to see at a glance which are data traces and which are not. <S> About planes: from what I know, most people advise using a ground plane. <S> But if you can get a clearer layout with a +12V and a -12V power plane, I don't see any problem. <S> But there are experienced people here; maybe they have a better answer. <A> As the output current is limited to 40mA for a total supply current of about 44mA (when Iq is added) at 25C (which ultimately comes from the supplies), you could probably use reasonably wide tracks for power, provided they are decoupled as close as possible to the power pins. <S> A 10mm wide track at 50mm length would incur a voltage drop of about 50 uV (I used the Saturn PCB toolkit) with negligible temperature rise. <A> Assuming a 4-layer board, I would have a ground plane and then split the power plane into +12 and -12 sections. <S> Bypass the planes to ground near the chips. <S> This answer has some screen captures of how it is done in Altium.
<S> But for an audio amplifier you can probably do it fine with just one or two layers.
I would personally use a ground plane under the device for noise reasons and to ensure the supplies can be decoupled as close to the device as possible.
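The quoted ~50 µV drop can be sanity-checked from copper resistivity. The 1 oz (~35 µm) copper weight is an assumption, so the exact figure will differ with the actual stack-up:

```python
# Rough DC voltage drop in a PCB power trace at room temperature.
rho_cu = 1.68e-8      # ohm*m, copper resistivity
thickness = 35e-6     # m, 1 oz/ft^2 copper (assumption)
width = 10e-3         # m, 10 mm trace (from the answer)
length = 50e-3        # m, 50 mm run (from the answer)
current = 0.044       # A, ~44 mA total supply current (from the answer)

resistance = rho_cu * length / (width * thickness)   # ~2.4 mOhm
v_drop = current * resistance                        # ~0.1 mV
print(f"R = {resistance*1e3:.2f} mOhm, drop = {v_drop*1e6:.0f} uV")
```

The result is the same order of magnitude as the Saturn toolkit figure; either way the drop is negligible at these currents.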
Astable 555 circuit not oscillating I'm a complete beginner in electronics, but I'm trying to follow Ben Eater's video series "Building an 8-bit computer". I tried to do the first part, an astable 555 timer, but the LED does not oscillate, and on top of that the timer draws A LOT of current and heats up pretty fast. Does anyone have an idea what I did wrong? I'm using an NE555P, a 1uF capacitor, and 5V from a rigged phone charger. <Q> This answer is a summary of existing good answers plus various comments. <S> The OP supplied a good image and schematic. <S> Several issues stand out or were a potential past problem. <S> Breadboards are known for odd behavior; however, this circuit should be stable with just a 4.7 µF capacitor across the 555 power and ground pins. <S> Inputs should NEVER be left floating. <S> As Marcus mentioned in his answer, the active-low reset pin should be tied to Vcc for stable operation. <S> As Sunnyskyguy mentioned in his answer, it is very possible the LED was inserted backward. <S> If so, it may or may not have been damaged; consider it toast and replace it with a new one when possible. <S> Reverse polarity can damage most any IC and cause it to get very hot even with no load connected. <S> While it is not mandatory, inserting a 10 nF cap from the control pin to ground helps the 555 reject noise on the Vcc line. <S> It is good practice to route ALL ground connections first, then power, then inputs, then outputs. <S> Much better chance of getting connections right the first time, and having even complex boards work right the first time. <S> Plug in your ICs last, after testing your power feeds with a DVM. <S> Do NOT bend LED or other component leads close to the body of the part, as this can cause internal stress and damage. <S> Use needle-nose pliers to create a 1/16th inch minimum gap before the bend. <S> I would replace the LED and make sure the cathode goes to ground.
<S> Use a new 555 timer and please pay attention to component orientation. <S> Add the extra capacitors mentioned for stability. <S> This is a simple 555 timer IC. <S> Pay attention to details and it should work just fine. <A> As has been mentioned, pin 4 should be connected to +V, and pin 5 should be decoupled to ground with about 10nF. <S> The 555 should not get hot at all! <S> This is the big clue. <S> I've played with this circuit and found that you could blow the 555 easily by accidentally reversing the power supply. <S> Did you do this, or plug in the 555 the wrong way round at some point? <S> It's in the right way now. <A> The LED's flat-edge cathode is not towards ground, so it is backwards. <S> The leads are also stressed beyond what the spec recommends. <A> You didn't connect the inverted RESET pin. <S> To cite TI's NE555 datasheet: To prevent false triggering, when RESET is not used, it should be connected to VCC.
Peter Jennings mentioned that you may have inserted the 555 IC backwards initially or had Vcc and gnd reversed at the power connector.
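Once the circuit is wired correctly, the expected blink rate follows from the standard 555 astable formulas; the resistor values below are illustrative, not taken from the question:

```python
# Standard NE555 astable timing (datasheet formulas).
import math

r1, r2 = 1e3, 100e3     # ohms (illustrative example values)
c = 1e-6                # farads, the 1 uF from the question

t_high = math.log(2) * (r1 + r2) * c      # capacitor charging through R1 + R2
t_low = math.log(2) * r2 * c              # capacitor discharging through R2
freq = 1.0 / (t_high + t_low)             # ~= 1.44 / ((R1 + 2*R2) * C)
duty = t_high / (t_high + t_low)          # always > 50 % in the basic astable

print(f"f = {freq:.2f} Hz, duty = {duty*100:.1f} %")
```

With these example values the LED should visibly blink at a few hertz; if it doesn't, the wiring (reset pin, LED polarity) is the place to look.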
Need help in cable temperature calculations I am trying to calculate how hot a 0.6m long 28AWG wire carrying 0.2A at 12VDC will get. I understand that there are other factors such as the environment's cooling rate, the thermal resistance between air and the cable, etc. Ampacity values are not really relevant to my scenario, as the cable is in contact with the human body; I am more concerned about whether the user can detect the change in the wire's temperature. I also do not have the resources nor the proper apparatus to conduct an accurate measurement. An aluminium-core 28AWG wire has a resistance of 0.32716 Ω/m. Power dissipation: $$P=I^2R$$ $$P=(0.2\,\text{A})^2\times0.32716\,Ω/\text{m} \times 0.6\,\text{m}$$ $$P=7.851\,\text{mW}$$ Found this equation here, although it is only meant for radiative heat loss: $$ \dot{Q}_{12} = \epsilon A\left ( \sigma T_1^4 - \sigma T_2^4\right )$$ Based on the above, I got a value of \$309K\$, which means the temperature increase is about \$4°C\$ from an ambient temp of \$305K\$. Is this an accurate reference? Basically, I want to know if a 28AWG wire will stay cool during operation or if I need to select a lower-gauge wire. <Q> 7 milliwatts spread over 60 centimeters will not be detected. <S> The heat will be dumped into the chest, and cooled by the blood. <S> Sunlight is 1,000 watts per square meter, or 1,000 watts per 10,000 square cm. <S> We easily sense sunlight, which is 0.1 watts per square cm. <S> Your heat density is 0.1 milliwatts per cm of length, or about 0.1mW per 1cm*0.2cm (assuming the insulation spreads out the heat), or about 0.5mW per square cm. <S> Thus your heat flux into the skin, through the wire's insulation, is roughly 200X smaller than the sun's flux. <A> What do you mean by a wire? <S> Is it an electrical wire, with insulation? <S> As you have already said, the temperature will depend on the environment (including air flow) and the surface area of the wire and insulation.
<S> Additionally, it will also depend on biological factors like blood flow, electrolyte status, perspiration etc. <S> For biological experiments, keep in mind that human pain perception for aluminum conductors starts just above 42C at the chest (most sensitive) for a healthy person. <S> Pain perception is measured at the epidermis/dermis interface. <S> Make sure your subject is not hyper-sensitive to pain, e.g. due to impaired hormone status, or you might end up with an emergency situation. <S> As you can see, there is no simple answer, especially at the extreme limits of accuracy where you are calculating. <S> (Additionally, your reference is not room temperature at 305K; it will be skin temperature prior to measurement.) <A> Radiation is a negligible component of heat loss for a wire close to room temperature in air. <S> It becomes a factor for long wires in a vacuum, and at high temperature differences, but that's not the case here. <S> If it's in contact with skin, conduction will be a big factor. <S> In air, convection is the primary factor. <S> Convection calculations involve fluid dynamics, so they're not as straightforward as you might hope. <S> But to get a rough estimate, consider this graph from this website: <S> Very roughly, the temperature rise in still air is about \$3.3 \times I^2\$, so for a current of 200mA in an AWG 28 wire you could expect about a 0.15K rise. <S> If it's in contact with a 'bag of mostly water', the conduction to the skin (and therefore the skin temperature) will have a large effect.
Convection is the primary source of heat loss for a long wire in still (or moving) air.
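The question's own figures can be run through the I²R arithmetic, including the answer's rough flux comparison (the 1 cm × 0.2 cm contact area per cm of wire is the answer's assumption):

```python
# Dissipation in 0.6 m of 28 AWG wire at 200 mA (figures from the question).
r_per_m = 0.32716          # ohm/m (aluminium-core figure from the question)
length = 0.6               # m
current = 0.2              # A

power = current**2 * r_per_m * length      # ~7.85 mW total
power_per_cm = power / (length * 100)      # ~0.13 mW per cm of wire

# Compare the flux against full sunlight (~100 mW/cm^2), assuming the
# insulation spreads heat over ~1 cm x 0.2 cm per cm of length:
area_per_cm = 1.0 * 0.2                    # cm^2 (assumption from the answer)
flux = power_per_cm * 1e3 / area_per_cm    # mW/cm^2
print(f"P = {power*1e3:.2f} mW, flux ~ {flux:.2f} mW/cm^2 (sun ~ 100 mW/cm^2)")
```

The flux works out to well under 1 mW/cm², two orders of magnitude below sunlight, which is the basis for the "will not be detected" conclusion.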
Why does capacitance not depend on the material of the plates? As a student, learning about the capacitor after understanding what a resistor is, it was quite surprising to note that the capacitance does not depend on the nature of the plates used, at least in any type of capacitor I have known. I am told, "it makes no difference as long as the plates are conducting." Is that true? <Q> Yes, that is true. Capacitance is: <S> \$C = \frac{q}{V}\$ <S> where \$q\$ is the charge and \$V\$ the voltage between the plates. <S> As long as the charge \$q\$ can be "held in place", this relation applies. <S> I mean, there is no need to have a "good" conductor, as the charge is static; it does not move. <S> So as long as, for a certain applied voltage \$V\$, a certain charge \$q\$ results on the capacitor's plates, then \$C\$ can be determined. <S> It does not matter if the plates are bad conductors (high resistance), as it will then simply take longer for all the charge to reach its final location. <S> Only if you look at the dynamic behavior of a capacitor (how it responds to quick voltage changes) would you see an influence of the conductivity of the plates. <S> To first order, the capacitor would exhibit additional series resistance. <A> The active part of a capacitor is the dielectric. <S> That's where the energy is stored; that's what the voltage is developed across. <S> The plates just transport current to the right places. <S> A high resistance here could make the capacitor lossy, but will not change the capacitance. <S> In much the same way, the resistance of a resistor depends on the material and geometry of the resistive part, not the leads. <S> The active part of an inductor is the iron, ferrite or air space within the coils, because that's where the energy is stored. <S> High-resistance wires will make the inductor lossy, but won't change the inductance.
<A> Consider that (very roughly) \$N_A = 6×10^{23}\$, while \$1\,C = 6×10^{18}e\$, so 1 mol of metal has enough charge carriers for 100000 C, assuming one mobile electron per atom. <S> In a capacitor of 1000μF at 100V with aluminium plates, only 27μg of aluminium atoms have to donate/accept a single electron to hold the charge; the rest of the atoms stay neutral. <S> Assuming the plates weigh 5g, that's 99.9995% neutral atoms plus 0.0005% atoms missing one electron. <S> Clearly, a typical capacitor will fail due to breakdown long before the lack of charge carriers in the plates becomes apparent. <S> Things change in semiconductors, where the number of free carriers is much smaller and depends on the doping. <S> Even then, it's often easier to calculate the capacitance as a static approximation, assuming that the plates stay perfectly conductive and only the distance between them changes as the depletion region grows. <S> It's not always possible though: in fast dynamic processes, junction capacitance can only be adequately described using equations for charge flow (e.g. this one), and the solutions indeed depend on the material of the plates. <A> To the best of my knowledge, the choice of material DOES matter, even for the static case. <S> If not, it would imply that most insulators could be used as electrodes as well, due to the residual chance of charge carriers existing within them. <S> Some reasoning and scientific works on why the choice of electrode material matters: DOI: 10.1109/16.753713 and doi.org/10.1063/1.1713297, to name just a few. <S> The thing is that the models you learn are a good approximation. <S> Not more, not less. <S> The main reason the electrode material matters is that the EM field reaches into conductors as well, even in the static case. <S> TL;DR know your model's limits: it does matter but can often be neglected. <A> It's the same for an inductor: the value of inductance remains constant irrespective of the wire's conductivity.
<S> Take it to extremes and consider the speed of radio waves and how they propagate through space. <S> The impedance of free space is determined by the permeability and the permittivity of free space, and these are measured in henries per metre and farads per metre respectively. <S> Yet there are no conductors in free space. <A> In a typical capacitor, charges will be concentrated in thin layers on the portions of each electrode that are nearest the oppositely-charged electrode. <S> Although this layer essentially always has non-zero thickness, and the distance between each charged particle and the surface will affect the potential difference resulting from that charge, in practice the effect is almost always small enough to be dwarfed by measurement uncertainties or other confounding effects. <A> Many practical capacitors have very weak dependence on the conductor material. <S> Capacitor Equivalent Series Resistance (ESR) will be affected by plate material and thickness/routing, and is a significant limiting factor in power applications. <S> This also affects peak discharge currents for pulsed applications. <S> On a practical level, many power film capacitors have fusible links in the metallization so that failed portions of the capacitor are removed from the circuit (and capacitance drops). <S> This is a major practical consideration tied to the capacitor plate.
Typical capacitor plates are made of conductors (metals) which have a huge number of charge carriers. In the final state there will be no difference compared to a capacitor with well conducting plates as the amount of charge will be the same.
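The 27 µg back-of-envelope from the answer above checks out numerically (the 5 g plate mass is the answer's assumption):

```python
# How much plate metal actually donates charge in a 1000 uF / 100 V
# aluminium-plate capacitor.
C = 1000e-6                # farads
V = 100.0                  # volts
q = C * V                  # 0.1 coulomb stored

e_charge = 1.602e-19       # C per electron
n_electrons = q / e_charge # ~6.2e17 electrons moved

avogadro = 6.022e23
molar_mass_al = 27.0       # g/mol
mols = n_electrons / avogadro     # assuming one electron per atom
mass_g = mols * molar_mass_al     # ~28 ug of aluminium atoms involved

plate_mass_g = 5.0                # assumption from the answer
fraction = mass_g / plate_mass_g  # ~0.0006 % of the atoms
print(f"q = {q} C, ~{mass_g*1e6:.0f} ug of Al ionised "
      f"({fraction*100:.5f} % of the plates)")
```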
Does a silkscreen text or line over pads/mounting holes give problems? I intend to create a PCB and have it manufactured (just a few, for a hobby project). I added some text on the silkscreen layer very close to mounting holes/holes for pins/vias. I also used lines over such holes. Could this cause a problem for a PCB manufacturer? I expect the text/lines will just not be printed, or will my PCB be rejected because of this? I'm using KiCad for the design. Below is the picture... meanwhile I fixed the texts, but the diagonal lines at the bottom through some pads and holes I would rather keep. <Q> Usually they just mask/remove these problematic parts (at least at Eurocircuits), but you should clear it with your PCB supplier or simply fix it. <A> KiCad has no check for silkscreen overlapping exposed copper. <S> But you can select "exclude pads from silkscreen" (formerly known as "remove mask from silkscreen") during Gerber export to ensure no silk is where it does not belong. <A> Most full-service PCB fabricators will have a CAM department. <S> These engineers normally clip off silkscreen which falls on holes or solderable surfaces. <S> If they don't do that, you might encounter: 1) ink in holes, which might create issues while fitting a part/pin; 2) ink on a solderable surface, which might result in bad solder joints. <S> If it's just silkscreen outlines and unnecessary stuff, most CAM engineers clip them without putting the project on hold, but if it's text or polarity markings, they might put it on hold for verification. <S> If they do, you'll be fine.
For your board, just check with whoever is going to fabricate it whether they clip silkscreen with respect to soldermask/holes.
What is initial contact resistance in relays? I am reading the datasheet of a relay. I see a term, initial contact resistance. What is the definition of this parameter? Thanks. <Q> Initial contact resistance is the resistance of the contact when the relay is new. <S> As the relay ages, switching significant current, the contact resistance can be expected to increase. <A> I see a term, initial contact resistance in relays <S> I read "Performance (at initial value)" <S> I reckon it means at the beginning of its life cycle. <S> In other words, it may deteriorate over time. <S> Given the state of the datasheet and the unclear definition from the original manufacturer, I would be looking for a more reputably sourced and documented part. <A> However, it is somewhat inaccurate to say that the contact resistance will increase with time. <S> Depending on the applied contact pressure and the current, the welding of localised asperity patches of true contact will cause the area of true metal-to-metal contact to grow, resulting in a reduction of electrical contact resistance, as shown by the resistance relaxation curves here. <S> Higher contact pressure and current will also translate to lower ECR, as shown here. <S> It should be noted that over extended periods of time, mechanical attrition and chemical corrosion of the contacts may reverse the trend of decreasing electrical contact resistance.
As indicated by answers above, the initial contact resistance refers to the contact resistance at time zero .
Is DC heating faster than AC heating? If I have a 120V @ 50Hz AC heater rated at 750W, and I run it on 120V DC, will the DC heat it faster, and why? (Assume I could get a clean 120V DC power source.) <Q> When you say 120V @ 50Hz AC, you are implicitly saying 120Vrms. <S> The RMS voltage is qualitatively defined as the voltage which will give the same resistive heating (averaged out over time) as a DC voltage of the same number. <S> Therefore, by the definition of RMS, the heating will be the same because the RMS voltages are the same. <S> If you said 120Vpeak or something different, then things would be different. <S> This is in reference to a heater modeled only as a resistor. <S> No extra real-world components like motors for fans. <A> Theoretically there should be no difference at all. <A> For heating, the only thing that matters is active power. <S> If the load is a resistor, active power is \$V^2/R\$ with V being the RMS voltage. <S> 120V DC is 120V RMS, which will give the same active power in a resistor as 120V AC RMS. <S> AC power will be pulsed, but a 750W resistor will have enough thermal mass to smooth it out, so there is no difference. <A> No... <S> On a purely theoretical level, the resistor should get to working temperature a (really) tiny bit faster with DC power. <S> The reason is the Stefan–Boltzmann law, which states that the radiative output is proportional to the fourth power of the temperature. <S> During the beginning of the heating phase, at the power peak of the AC cycle, the resistor will get hotter than its DC counterpart at the same time. <S> But it will lose much more energy during the "low power" part of the AC cycle. <S> Fourth power is a really steep function.
<S> The effect will be really small, because the ripple in the temperature will be microscopic: 1/100th of a second is infinitesimally small compared to the thermal time constant of the resistor (typically tenths of seconds), but that doesn't mean it doesn't exist. <S> But... this is only about the temperature of the resistor itself. <S> There are two reasons for this: the first is that, as stated earlier, instantaneous power dissipation will be higher on average. <S> The second is that real resistors are not perfect and generally act as negative-temperature-coefficient resistors. <S> So, by being slightly cooler, the AC one will have lower resistance and draw more current, so more power. <S> But keep in mind that this calculation is purely theoretical, and that in real life, even with perfectly equivalent power sources, the effect will certainly not be measurable.
In fact, the thermal energy dissipated in the room during the heating time will be higher. No, because 120V RMS is the AC voltage that produces the same heat as 120 VDC.
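The RMS equivalence is easy to verify numerically for the 750 W / 120 V heater modeled as a plain resistor:

```python
# Numeric check that 120 V RMS AC heats a resistor exactly like 120 V DC.
import math

v_rms = 120.0
v_peak = v_rms * math.sqrt(2)      # ~169.7 V peak for a sine wave
R = v_rms**2 / 750.0               # 750 W / 120 V heater -> 19.2 ohm

# Average AC power over one full cycle, integrated numerically:
n = 100000
p_ac = sum((v_peak * math.sin(2 * math.pi * i / n))**2 / R
           for i in range(n)) / n
p_dc = v_rms**2 / R                # 750 W

print(f"AC average {p_ac:.1f} W, DC {p_dc:.1f} W")
```

Both come out at 750 W: the sin² term averages to exactly 1/2 over a cycle, which is precisely what the √2 peak-to-RMS factor compensates for.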
12V lead acid charger with LM317 not charging I have this circuit built and tested. I think it should charge the battery at 0.4A to about 13.5V. The problem: it doesn't charge, and between U1 and U3 (the battery) there is a current of only about 0.03A. Why? The LM2596 is a buck-converter breakout board and works flawlessly. R1 is a 4W resistor, so that won't be the problem. Why doesn't it charge? simulate this circuit – Schematic created using CircuitLab <Q> Read the datasheet of the LM317, on page 9 it states: <S> So when you feed the LM317 14 V it can regulate to 11 V and lower, not 13.5 V. <S> Also, there will be 1.25 V across R1, so for 13.5 V you will need to put at least 13.5 + 1.25 + 3 = 17.75 V into the LM317. <S> The ~15 V you're feeding the LM2596 board isn't even enough, and there's no need to have that LM2596 converter in place, so remove it. <S> You will need a power source with a higher voltage than ~15 V. <S> As the LM317 will drop 3 V or more at a significant current, it will get hot, so use a heatsink! <S> If the LM317 gets too hot, it lowers the current to reduce its power dissipation (and allow itself to cool down). <S> Note that your circuit does not have a well-defined "stop charging" voltage; current will keep flowing and your battery might overcharge! <S> I have built an LM317-based battery charger for my 12 V car battery. <S> I use a 19 V laptop power supply I had lying around to power it. <S> In that design I do not use the LM317 as a current source; instead I use it as a voltage regulator set to 13.5 V. <S> Then, when the battery has a lower voltage, the LM317 will hit its built-in current limit (< 2.2 A). <S> For a car battery, 2.2 A or less is fine. <S> As the battery charges and the voltage reaches 13.5 V, the current gets smaller and smaller until only a leakage current is left.
<S> If that 2.2 A is too much for your battery, use this circuit instead: <A> You need more than 14V at the input if you want to charge a battery to 13.5V. About 18V should be enough. <S> The LM317 needs about 3V between Vin and Vout to work. <S> And there will be 1.25V between Vout and Adj. <S> And then the battery will have 13.5V. 13.5 + 1.25V + 3V is 17.75V. <A> The LM317 requires a difference of 3V between the input and output voltage. <S> You have 15V from your power supply (which you drop to 14V with the LM2596). <S> That's only 1.5V above the output voltage (13.5V) that you need for the battery. <S> You really only have two choices: use a higher input voltage, or use a different current regulator that can work with the available voltage. <S> In any case, the LM2596 is only making things worse. <A> Also, if the power supply is switched off and the 12V battery is still attached, there's a negative voltage difference (Vin-Vout) on the LM317, a situation not mentioned in the datasheet as far as I can see. <S> If that causes a leakage current, it would drain the battery.
The problem you have is caused by the LM317 and the voltage you have available to it. Use a higher input voltage (at least 17V)
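The headroom arithmetic from the answers can be summarized in a few lines; the 19 V input is the answer's laptop-supply example:

```python
# Headroom arithmetic for the LM317 current-source charger.
v_battery_full = 13.5      # target charge voltage
v_ref = 1.25               # LM317 reference across the program resistor R1
v_dropout = 3.0            # worst-case input-output differential (datasheet)

v_in_min = v_battery_full + v_ref + v_dropout   # 17.75 V minimum input
print(f"minimum input: {v_in_min} V -> a 14 V rail cannot work")

# Power in the LM317 with a 19 V supply (the answer's example) at 0.4 A:
v_in = 19.0
i_charge = 0.4
p_lm317 = (v_in - v_battery_full - v_ref) * i_charge   # ~1.7 W -> heatsink it
print(f"LM317 dissipation ~ {p_lm317:.2f} W near end of charge")
```

Dissipation is even higher early in the charge, when the battery voltage is well below 13.5 V, so size the heatsink for that case.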
Can I add 2200µF capacitor to regulator 5V? Can I add an extra ceramic/electrolytic 2200 µF capacitor (currently working perfectly from a laptop USB port) to the test circuit for the L7805 SMD? I need a 2A peak current for a SIM800C module. I don't know why they recommend 0.33 µF and 0.1 µF; maybe somebody has a short answer. I didn't include the complete schematic, but the system needs to work at 5V, and the SIM800C is fed by a 3.8V regulator. <Q> You can add the capacitor if you don't exceed the max power dissipation of the L7805. <S> The L7805 is current limited, which means the low impedance of the capacitor won't kill it initially, if the temperature doesn't get too high in the part. <S> This will depend on the input voltage. <S> Typically, though, a big capacitor can cause problems for the load. <S> The L7805 can't source current instantly; the max short-circuit peak current is 2.2A, so this is a source resistance of roughly 2.2Ω. <S> This means the time constant with a 2200uF cap is about 5 milliseconds, which may be a problem for some applications if it takes that long to reach ~63% of the nominal voltage on startup. <S> It also means that the regulator needs to source ~2.2A (decreasing exponentially) while the capacitor charges; this may be too much power dissipation if there is a large voltage drop across the L7805. <A> Yes, you can add the capacitor. <S> If you have a large transient (like a momentary 2A draw), the capacitor will help even out the dip as the regulator corrects. <S> Since you are probably operating beyond the maximum for the 7805, I would say "designer beware", especially if you have a long transient draw. <S> The regulator is probably going to get pretty hot. <S> The input capacitor is also not strictly required, but it is recommended if the input side is a long distance from the input filter, or if you have a large output capacitance. <S> The minimum suggested is 0.33uF, but in your case you may want to increase this.
<S> It should also be a tantalum or mylar cap with low impedance at high frequencies (as per the data sheet). <A> I need a 2A peak current for a SIM800C module. <S> Your 7805 seems to be rated for less than 2A. <S> I want to use two lithium 18650s. <S> You're battery powered, so power efficiency should be of importance to you: <S> the 7805, as a linear regulator, is undesirable because it converts the complete voltage drop from 2·3.7V = 7.4V to 5V at 2A into heat <S> – that's a waste of 2.4V · 2A = 4.8W, or 40% of what you use to power the device in that situation. <S> Thus, any linear regulator like the 7805 will get hot. <S> Use a switch-mode power supply instead. <S> 2200µF capacitors aren't free, so for the price difference you could buy a cheap switching regulator that wastes a lot less power and could sustain the 2A. <A> Some LDOs will overheat as they try to charge up a large capacitor, and then SHUT DOWN for a period of time <S> so the silicon die can cool down.
The datasheet specifically says that the output capacitor is not required, but it does improve transient response. So, pick a voltage regulator designed for the current you need.
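As a back-of-envelope check of the numbers quoted above (the ~2.2Ω effective source resistance at the current limit and the 2200µF cap are from the answer; the 12V input is an assumed example, not from the question):

```python
R_SRC = 2.2      # ohms -- effective source resistance at the 2.2 A current limit
C_OUT = 2200e-6  # farads -- the proposed output capacitor

# One RC time constant: time for the cap to reach ~63% of its final voltage
tau = R_SRC * C_OUT            # ~4.8 ms, matching the "roughly 5 ms" above

# Worst-case instantaneous dissipation in the regulator at power-up,
# with the output near 0 V (12 V input is an assumed example):
v_in, v_out = 12.0, 0.0
p_peak = (v_in - v_out) * 2.2  # ~26 W peak, decaying as the cap charges
```

The peak decays quickly, but it illustrates why a big output cap plus a large input-output differential can stress the regulator thermally at startup.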
Multiple rails I2C bus device from PMIC I want to use a PMIC chip in my project. As any PMIC, it has a couple of outputs: a buck Vout, and an LDO out. The PMIC is operated over the I2C bus. The MCU acts as the host device and runs on Vout, so I have the I2C lines from my MCU pulled up to Vout. The LDO output is controllable; in the interest of saving power and not losing quiescent current, I would like to use the LDO output to power another slave device. Here is a block diagram of how I would like to connect the MCU and slave device: This slave device also operates over I2C. I only have one I2C bus, so I can only connect the device on that bus. Assuming the voltage levels are the same on Vout and VLDO, I am not sure the configuration I have will completely turn off the device, since the lines still remain pulled up to Vout. Is this the right way to implement it? If not, how can I correctly configure it? In case my LDO and Buck outputs are not the same, how can this be connected? <Q> The letter of the I2C spec says that the powered-down device is not supposed to drag down the I2C lines. <S> In practice, this may not be the case, depending on how the chip designer implemented their I2C I/O pads. <S> I2C is supposed to be open-drain. <S> However, sometimes designers will use a 'pseudo-open-drain', that is, a regular I/O pad that's wired to use output enable to make low / high-Z to form the I2C signal. <S> The problem is, regular I/O pads have protection diodes that will kill the bus when the device is powered off. <S> Here's a clue: if the part has a Vi(h) spec on the I2C pads of, say, VccIO+0.5V, or there is no special statement about I2C vs. power-off, chances are there are protection diodes on the I2C pads that will be forward-biased by I2C on power-down. <S> The safest thing to do is to put an isolation switch between the powered-down I2C section and the main section, and disconnect the I2C domains from each other when the LDO is powered off. 
<S> You have options for the disconnect. <S> You can use a level shifter with an enable, a signal switch (e.g., a USB 2:1 mux like this one: http://www.ti.com/lit/ds/symlink/ts3usb221a.pdf ) or even a pair of N-channel FETs for the cheapest solution. <S> If you can have a separate I2C bus for that peripheral and the PMIC, even better. <A> You have valid concerns. <S> You may or may not have a problem of back-powering the slave device when Vout is active but Vldo is turned off. <S> Most likely the slave device was designed to work properly in a situation like this and not allow the presence of voltage on the I2C lines to create problems when the device is not powered. <S> However, this may not be the case. <S> Check the slave device's documentation. <S> If you have different voltages for Vout and Vldo, then the best thing to do is to add an I2C level shifter in between. <S> I have some good news for you: the circuit is extremely simple and it should also solve your back-powering problem when Vldo is turned off. <S> You can find information on how to implement the level shifter in this Sparkfun page or in this Philips application note . <A> Well, I think there are two main options here: one is to bit-bang one of the I2C interfaces, the other is to split the bus in some way with a part that can isolate the slave device you want to switch off.
This could probably be done with a simple CMOS bus switch or an I2C level shifter.
Reasons for strain gauge drift I'm testing a load cell where the result drifts over time. By weighing it down with a known weight overnight, the result crept up by 10% in 18 hours. It's a completely linear increase, with consistent variations due to noise. I'm using a HBM QuantumX MX840B and two pairs of strain gauges that form a full bridge. The cable has been perfectly still during the test. The strain gauges are 120 ohm and the excitation voltage is set at 5V. The load cell is of stainless steel and is quite massive, so I would assume heat dissipation shouldn't be too big of an issue, but I don't know. Temperature is reasonably stable (it's indoors) and the full Wheatstone bridge should compensate for change, something we also tested with a heat gun. I'll re-mount the strain gauges on the load cell and see if it gets better, but I'm not sure as to why it drifts, and especially in such a linear way; advice would be helpful. Edit: This is what I'm going to do in order to re-mount it properly, please give me input as to where your experience differs from this. Physical installation of strain gauge: Remove all strain gauges Clean with isopropanol Prepare the area with a rough grinder if need be Sand down to 320 grit with an electric sander or grinder flap discs Slightly roughen the texture with a coarser paper to increase adhesion Clean with isopropanol again Put on a small amount of HBM Z70 cyanoacrylate glue Push the strain gauge down with my thumb Cover with suitable material (silicone or similar) Results from load cell overnight testing: This is from testing the first cell overnight, it dips considerably. This is from the second cell, it has a much smaller variation. It makes a huge dip in the morning, something we have seen before. This is because the test lab has eight huge halogen lamps in the ceiling that draw a big current when starting; I'm certain those affect the results. What is concerning is that it does not recover back to its previous state. 
<Q> Mounting strain gauges is an art. <S> Your long-term variation suggests to me that the gauges are mounted in a way that the adhesive is not keeping them in the same position over an extended period of time. <S> At least this is what I would consider after making sure to rule out temperature and voltage variations. <A> Could be creep. <S> https://en.wikipedia.org/wiki/Creep_(deformation) . <S> Could be drift of your excitation voltage. <A> I'd suspect the type of glue. <S> We use M-Bond 610, which is a special two-component heat-cured adhesive. <S> This also needs conditioning and neutralizing with the proper chemicals, after sanding just the right way. <S> Applying strain gauges really is an art and requires careful process design and loads of know-how. <S> If you need professional help, contact me. <S> The company I work at specialises in one-off and custom designed load cells for new installations and/or replacement of unobtainable parts. <S> We would be happy to help! <S> Edit: We can laser-weld for IP68 rating as well.
Might be heating due to current, though the full bridge should mostly null that out.
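To put the drift in perspective, an ideal full-bridge output is only millivolts. The gauge factor and strain below are assumed illustrative values, not measurements from the question; only the 5V excitation comes from the question:

```python
GF = 2.0          # typical foil gauge factor (assumed)
V_EX = 5.0        # excitation voltage from the question
strain = 500e-6   # 500 microstrain under load (assumed)

def full_bridge_output(v_ex, gf, eps):
    """Ideal full bridge: two gauges in tension, two in compression."""
    return v_ex * gf * eps

v_out = full_bridge_output(V_EX, GF, strain)   # 5 mV full-scale signal
drift = 0.10 * v_out                           # the observed 10% creep ~0.5 mV
```

Half a millivolt over 18 hours is tiny in absolute terms, which is why adhesive creep, incomplete cure, and thermal EMFs dominate at this scale.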
Please can someone give their opinion on my cat water dispenser circuit? I'm trying to create an automatic water pump for my dear old pussy cat (I'm using her as an excuse, really I wanted a project of some sort). I have the following circuit: R1 is the water itself. S1 is the water being conductive and completing the circuit through the water. When power is applied to the circuit all the op-amps are LOW, the MOSFET is allowing current to flow from source to drain and the motor/pump spins. When a connection is made, i.e. the water from the pump fills the bowl to a certain level (R1/S1), 7mV goes to the non-inverting A input of the op-amp. This saturates the output (5V) due to open-loop gain, charges the cap at C1 and also saturates the B output of the op-amp, shutting off the MOSFET. When the connection is lost, i.e. the cat has a drink and the water level drops, the A op-amp goes low and C1 is allowed to drain through the 15k resistor (R5). After the cap is drained (~seconds) the motor spins and the bowl fills up. Eventually I am going to have an RFID tag on her collar which will turn on a relay to provide power to all this, but that's phase 2. I know I would be better off using proper comparators, but this op-amp suffices. Like I said, this circuit is working as I expect, but are there any recommendations from anyone at all? <Q> When there is no water (the switch is open) both inputs of the first op amp will be very close to ground. <S> What makes you think the op amp output will be low in this case? <A> The LM358 will work from 5V as long as neither input exceeds about 3.5V - <S> so that's OK. <S> The FET gate should be pulled up to 5V as the opamp does not pull well to the positive rail. <S> (It's a vaguely rail-to-rail output amplifier with a little help). <S> The immensely overspecified IRF9540 MOSFET <S> (grunty is fine if it works and you have one) <S> has a nominal turn-on voltage of 4V, and more in practice usually - so it should be marginal. 
<S> If it works it may have lower than average Vgsth. <S> Neither opamp is biased to act as a comparator reliably - you may have been lucky with input offset voltages. <S> So - the following will make the circuit "look correct". <S> If it works now it's uncertain whether correcting it will make it better or worse, but ... <S> Add a small bias voltage to the (now) non-inverting inputs - say a 1k to ground and 220k to V+ (5V) - connect the midpoint to both non-inverting inputs, and add a small capacitor to ground <S> (0.1 uF say, or larger). <S> MAYBE make R2 larger - tbd. <S> A lower-Vgsth FET is recommended but as a one-off, if it works it works. <S> C2 probably not needed. <S> Connect R9 from opamp B <S> output to V+ (5V) and increase it to 10k-ish. <S> A similar circuit based around a hex-inverter CMOS Schmitt trigger such as CD40106 / 74C14 / ... could be designed to have a quiescent current of microamps - 4 x AA alkaline cells would last the shelf life of the battery <S> - maybe 5+ years if the cat is not too thirsty. <S> Water pulls the input up as now. <S> Pull-down in the megohm range. <S> One inverter section follows to get the polarity right and drive the FET. <S> 4 spare inverters to make a siren or flashing light or whatever :-) e.g. an alarm that flashes a light if the water pump does not restore the water level within X seconds. <A> A few things to augment existing suggestions: 1) test what happens if there's a radio transmitter such as a phone or wifi network device ON and active somewhere near; 2) <S> some security measures to stop the system and alarm if the pump starts to consume excessive current, the water cup isn't changed to a clean one in 48 hours, or <S> the water level doesn't rise fast enough (already suggested)
The LM358 is not a rail-to-rail op amp so it won't work in a low voltage circuit like this.
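The "~seconds" refill delay in the question comes from C1 discharging through R5. The question doesn't give C1's value, so the 100 µF below is an assumption for illustration, as is the 1 V threshold:

```python
import math

R5 = 15e3     # ohms, from the question
C1 = 100e-6   # farads -- assumed; not given in the question

tau = R5 * C1                      # 1.5 s per time constant

def time_to_fall(v0, v_thresh, tau):
    """Time for an RC discharge to fall from v0 to v_thresh."""
    return tau * math.log(v0 / v_thresh)

t = time_to_fall(5.0, 1.0, tau)    # ~2.4 s to drop from 5 V below 1 V
```

Scaling C1 or R5 scales the delay proportionally, which is a handy knob for tuning how long the pump waits after the cat drinks.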
Do I need to limit the current on the inputs of a parallel load shift register? I have a parallel load shift register ( SN74HC165 ). When connecting the parallel inputs, do I need to add a resistor to limit the current? And if so, can I use one resistor for all 8 inputs in total? This is my intended circuit: simulate this circuit – Schematic created using CircuitLab <Q> Otherwise your input is floating when the switch is open. <S> May I know why it's unnecessary? <S> Does the HC165 have an internal resistor or are there other factors <S> I'm missing? <S> The HC inputs are very high impedance <S> (which means they have internally a very high resistance). <S> They take hardly any current at all. <S> In effect, the TI data sheet specifies the input current when connected to VCC as a few uA. <S> That is also the reason why you should not leave an input unconnected. <S> It needs only very, very little current (read: energy) to switch. <S> It could already switch on the 50/60 Hz electric field from a nearby power rail. <S> As to pull-up and pull-down and switches, have a look at this post which shows you how to use a switch with either. <A> Best practice would be to have a pullup resistor on the '165 input pin, and let the relay connect the input pin to Gnd when closed. <S> That way there is no chance of shorting the power supply to Gnd when the relay is energized. <S> Each pin needs its own resistor. <S> The HC165 has very little current load, so 10K would be sufficient (it's what I use for pullup resistors). <A> Per the datasheet, all unused inputs require being pulled up to Vcc or pulled down to GND (it's a CMOS device). <S> Use a 100K-Ohm. <S> You want to pull the voltage level, not blow through a lot of current. <S> For any other signal, a pull-up or pull-down is only used to ensure a pin is at a given state when not being influenced by another signal (other than noise). 
<S> Its purpose is to be strong enough to overcome noise, but not so strong it overrides a valid signal. <S> A valid signal should always override the pull-up/down as necessary. <S> And that is how you calculate the size of your pull-up/down resistors. <S> Not just a random 'Gee, someone said a 10K or a 4.7K'. <S> I'm only mentioning that because noobs rarely get told how to choose a component, and that is part of the engineering process. <S> Beyond that, the only reason you use a resistor is to limit current. <S> Use only what you need based on a) signal purpose, b) noise/interference, and c) consideration of junction temperatures. <S> Speaking of such limits - pay attention to the maximum current allowed through your preferred shift register.
No, but you should add a pull-down resistor. Using only what current is necessary (always in your designs) is a good habit because it allows you to do more in some cases when current limits exist.
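A quick sketch of why a weak pull-up (or pull-down) is enough for an HC-family input. The ~1 µA leakage figure is an assumed worst-case in the spirit of the "few uA" quoted above, not a number from a specific datasheet:

```python
VCC = 5.0
R_PULLUP = 10e3    # the 10k suggested above
I_LEAK = 1e-6      # ~1 uA worst-case CMOS input leakage (assumed figure)

# Pin voltage with the switch open: essentially VCC, since the leakage
# current drops almost nothing across the pull-up.
v_pin_open = VCC - I_LEAK * R_PULLUP    # 4.99 V -- solidly a logic high

# Current wasted while the switch holds the pin low:
i_closed = VCC / R_PULLUP               # 0.5 mA per closed switch
```

This is the trade-off behind resistor choice: a larger pull-up wastes less current when the switch is closed but is more easily disturbed by noise and leakage.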
Confusion about TVS wiring with RS-485 I'm currently looking into securing my RS-485 connections against ESD. My current understanding is that a TVS diode is put between the data line and GND, so that any discharge will be shorted to ground immediately, without going through other devices (sometimes also against +5V [>USB], not yet sure how that is supposed to work out or where to look it up). Now I've found some sources that also put a TVS diode between the two data lines ( Protecting RS-485 by TI ), some that don't, and one even omits the ground TVS completely ( TI again ). So apparently I don't completely understand how that's supposed to work and what the TVS between the two data lines tries to accomplish, and I seek enlightenment so that I may not fry my circuit by accident. The first document explains it in a way I don't completely understand and the second one doesn't explain the reason at all (or I'm too blind to find it). <Q> The TVS between each line and ground limits how far each line can get away from ground <S> (it clamps the common-mode voltage). <S> In a way, the TVS from each line to ground does indirectly prevent each line from getting too far away from the other. <S> But if the common-mode limit of the lines is larger than the differential limit, then TVS diodes which are sized to limit the line-to-ground difference won't protect against exceeding the differential limit. <A> If an ESD event happens on RS485 cables, the best thing to do would be to shunt it to earth, which in most designs is through the PCB ground, then chassis ground. <S> Because RS485 is differential, if there are TVS diodes to ground on both lines then any common-mode noise will be subtracted out. <S> The diagram below shows a configuration for TVS diodes to ground. <S> Source: <S> https://www.onsemi.com/pub/Collateral/AND8229-D.PDF <S> In noisy environments, it may be best to use shielded RS485. 
<A> I don't completely understand how that's supposed to work and what the TVS between the two data lines tries to accomplish <S> The TVS diodes conduct in the reverse-bias condition and only when the voltage across them crosses a certain threshold (10V, 18V, etc., depending on part number). <S> The voltage rating should definitely be higher than the maximum expected voltage on the bus (across the signal lines). <S> If the relative voltage of the signal lines is less than the voltage rating of the diode, the TVS will be in a non-conducting state.
The TVS between the lines prevents the lines from getting too far away from each other (it clamps the differential voltage). The goal of any ESD design is to shunt the unwanted current back to the source or to earth.
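The common-mode vs. differential distinction above can be made concrete with a small check. The -7V/+12V window is the standard RS-485 common-mode range; the 6V differential limit is an assumed example, so check your transceiver's datasheet:

```python
V_CM_MAX = 12.0    # RS-485 common-mode range upper bound, volts
V_CM_MIN = -7.0    # ...and lower bound
V_DIFF_MAX = 6.0   # assumed differential abs-max for this example

def line_to_gnd_standoff_ok(standoff):
    """A line-to-ground TVS must not conduct inside the normal CM range."""
    return standoff >= V_CM_MAX

# Worst-case line-to-line voltage allowed by ground-referenced clamps alone:
cm_window = V_CM_MAX - V_CM_MIN          # 19 V between the lines
needs_diff_tvs = cm_window > V_DIFF_MAX  # ground clamps alone can't
                                         # protect the differential rating
```

In this example the ground clamps permit up to 19 V between the lines while the differential rating is only 6 V, which is exactly why some app notes add the line-to-line TVS.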
Ampacity of Conductive Tape How do I measure the ampacity of conductive tape? I want to replace a wired battery-to-equipment connection with copper adhesive tape because I need a very low profile on the connection. The maximum current will be 200mA, max voltage 14V. Would standard copper adhesive tape be capable of carrying this current? The length of the connection will be 4cm, and the width of the copper tape track will be 1cm. Like this one for example: https://ie.farnell.com/3m/1181-12mm/foil-shielding-tape-adhesive/dp/1653450?st=copper%20foil%20tape <Q> Your tape's datasheet mentions a resistance of 5 mOhms per square. <S> This means a square of any dimension will have a resistance of 5 mOhms between its opposite sides. <S> Since it is 12.7mm (half inch) <S> wide, a length of 50mm (2 inches) is 4 squares, each a half inch. <S> Thus the resistance of 50mm of tape is 20 mOhms. <S> This will be fine for 200mA (the tape will dissipate less than 1mW). <S> Note you can get this tape for much, much cheaper on eBay. <S> Try "slug tape" or "guitar pickup shield tape" or just "adhesive copper tape". <A> The thickness is 0.0026in and it is 0.5in wide. <S> This means it has a cross-sectional area of 0.00115in^2, which corresponds to AWG18 or AWG19 wire. <S> So if the resistance is the same (which it should be, pretty close, if both are made from copper) then it is acceptable to compare the cross-sectional area. <S> Powerstream says 14A is the max, but it really depends on how high a temperature is acceptable for your application. <A> A 12.7x0.04mm, 0.508mm2 copper section should be similar to a 20 AWG cable, 0.033 \$\Omega\$ /m, with a 5.0 A ampacity (NFPA tables, <S> 90 deg., single conductor, insulated, Ref. 3). <S> Assuming the 12.7x0.026mm, 0.3302mm2 <S> acrylic conductive section has less than 50% of the conductivity of copper, we could safely assume the tape is similar to a 19 AWG cable, with a 6.0A ampacity (interpolated, NFPA tables). 
<S> As a reference, the NFPA ampacities are more conservative than the NEC ampacities, i.e. a 12 AWG cable has a 20 A ampacity (NFPA) vs 30 A (NEC) for 90 deg. <S> Remember those ampacities are for a single conductor, insulated. <S> In this case the conductor is "half insulated", which should give an additional margin. <A> 200mA is nothing. <S> Should be fine. <S> But if you really want to measure ampacity then run enough current through it until the adhesive gives out, something melts or burns, or it is too hot for your liking. <S> Whichever happens first. <S> That's really all ampacity comes down to. <S> It depends a lot on operating conditions and what you are willing to put up with.
I'm willing to bet copper tape will dissipate heat to air (or metal) more readily than a wire, and could possibly tolerate more current than 14A. Or, if voltage drop is your limiting factor, then run enough current through it that your voltage drop is more than you can tolerate. The way to use this specification is to count how many squares are in the length of your tape.
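The squares method from the first answer is easy to script. The 5 mΩ/square and 12.7 mm width come from that answer; the 50 mm length is its worked example:

```python
R_SHEET = 0.005   # ohms per square, from the tape datasheet
WIDTH = 12.7e-3   # metres (half inch)
LENGTH = 50e-3    # metres (about 2 inches)
I_LOAD = 0.2      # amps, the question's maximum current

squares = LENGTH / WIDTH             # ~3.9 squares end to end
r_tape = R_SHEET * squares           # ~20 mOhm total
p_dissipated = I_LOAD**2 * r_tape    # ~0.8 mW at 200 mA -- negligible
```

Sheet resistance only depends on the length-to-width ratio, so a wider track of the same length has proportionally fewer squares and less resistance.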
If energy is drained from a capacitor linearly, does capacitor voltage decrease linearly as well? If a charged capacitor is connected to a current source and resistor as shown above, the resistor is constantly consuming 0.001J/s. If you were to make an energy-time graph of the capacitor, you would see a straight linear decrease. These equations hold for a capacitor: \$ I = C \frac{dV}{dt} \$ (1) \$ V = I\frac{t}{C} + D \$ (2) <= this is a linear decrease of voltage. Thus, because I is also constant, you would see a straight linear decrease in a voltage-time graph as the capacitor is discharging. However, the energy of a capacitor is \$E = \frac{1}{2}CV^2\$ (3) and the voltage is \$ V = \sqrt{2 \frac{E}{C}} \$ (4) If the energy inside the cap is decreasing linearly, how can the voltage be also decreasing linearly according to that equation? <Q> If the energy inside the cap is decreasing linearly, how can the voltage be also decreasing linearly according to that equation? <S> In your circuit, the current source is absorbing or supplying energy, so the resistor is not the only place the capacitor energy can be transferred to. <S> When the capacitor is charged above 0.1 V, the current source will be absorbing energy, and when the capacitor charge is below 0.1 V, the current source will be supplying energy. <S> If you consider the energy absorbed by both the current source and the resistor, they will add up to the energy being discharged from the capacitor. <S> If you were to make an energy-time graph of the capacitor, you would see a straight linear decrease. <S> This is not correct. <A> You will see a linear decrease in the capacitor voltage, but that is because you are drawing constant current, not constant power. <S> Energy is the integral of power over time, so it makes little sense to refer to "constant energy" unless no power is being drawn at all. 
<A> If the energy inside the cap is decreasing linearly, how can the voltage be also decreasing linearly according to that equation? <S> The initial conditions aren't specified, which are important for any capacitor calculations. <S> But the current source will try to force 100mA through the capacitor; the only way to do this is to continually increase (or decrease, depending on which direction you're looking from) the voltage. <S> Now if we started the capacitor with a charge on it, we would see the voltage change, but the current source would keep on charging the capacitor forever (because it has infinite energy to expend, and our capacitor is ideal). <S> It doesn't make sense to worry about the energy when the source can expend energy infinitely.
The power you're drawing from the capacitor decreases with decreasing voltage.
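A numerical sketch makes the constant-current case visible. C, I, and the starting voltage below are assumed illustrative values (only the 100 mA figure echoes the answer above):

```python
C = 0.1    # farads (assumed)
I = 0.1    # amps, constant discharge current (the 100 mA from the answer)
V0 = 5.0   # initial capacitor voltage (assumed)

def v(t):
    """Constant-current discharge: voltage falls linearly."""
    return V0 - (I / C) * t

def energy(t):
    """Stored energy E = C*V^2/2 falls quadratically, not linearly."""
    return 0.5 * C * v(t) ** 2

dv_per_s = v(0) - v(1)              # 1.0 V every second, constant
de_first = energy(0) - energy(1)    # 0.45 J removed in the first second
de_second = energy(1) - energy(2)   # only 0.35 J in the next second
```

Constant current gives linear voltage but shrinking power (P = V·I), so voltage and energy cannot both fall linearly at once, which is exactly the contradiction the question stumbled on.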
How is a simple push button implemented in professional grade electronics? I just want to ask: how is one of the most basic components of electronics, a push button, implemented in professional grade equipment? The most basic tactile switch circuit might look like this: This itself is good enough in theory, but we know that in practice we will encounter some problems, such as switch bounce. It can be solved by adding capacitors on the switches, adding a delay in your code and checking again whether it was a valid press, or doing both. This solution is simple and easy to do, and is the go-to of almost everyone doing DIY. Now for my question: how do big companies handle this, where a button not registering could mean bad PR for their products and their brand? Examples of what I mean are the iPhone's (or other phone brands') home and volume buttons (6 Plus and older on iPhones), keyboards, mice, arcade games, etc. Here are some specific questions on my mind: on high-end keyboards that have "anti-ghosting", how did they implement this anti-ghosting feature? Is this done in hardware or software or both? Are they still using a matrix of buttons to save some space on the keyboard chip? Is there an IC for buttons that handles everything (or most), pull down/ups, capacitors, multiple button presses, or some other feature, that just sends a very clean signal to your controller chip/computer? In software, what are best practices in registering a button press: rising/falling edge, the classic approach, interrupts? <Q> First of all, bear in mind that all the approaches you cite are valid; it all depends on a lot of variables such as cost, space, already validated solutions, and so on. <S> To get to your questions: <S> I assume anti-ghosting is some sort of feature that avoids registering double strokes when the user only did a single stroke. <S> But what is a double stroke? <S> There is a time limit below which you say that the double stroke is invalid, e.g. 100 ms -> <S> this is done in software. 
<S> yes, keys are organized in a matrix to save IOs. <S> Keyboard controllers are IO limited, i.e. the silicon itself can be very small, but you need the IOs, so the full solution will be big (silicon + package). <S> Matrix means less IO, which means less space, which means more money for your company. There are keyboard controllers with a USB connection on one side and a 10x10 or 10x12 matrix on the other side; they take care of everything. Best practice depends on what is best for your case; rising or falling edge depends on whether you want an action to happen on key press or key release. <S> Interrupts are usually the way to go, but polling is acceptable in many cases too... <S> Again, it all depends. <A> Keyboards tend to be inexpensive, extremely high-volume products that can support the development and use of a specialized IC. <S> Any extremely high-volume product is likely to use a specialized IC designed specifically to support many of the functions required by that task. <S> At some point, it's so ubiquitous that it becomes integrated onto an MCU (like capacitive touch). <S> There are dedicated debouncing ICs. <S> I don't know of any that generally handle multiple button presses and holding or anything like that. <S> But I'm sure some specialized ones exist for use in things like clocks, though they probably don't only do debouncing. <A> Keyboard circuitry looks like this; most of the debouncing is handled in the mechanical design. <S> There is a pad and then a dimple on the opposite side; when a key is depressed they touch. <S> This is somewhat different from other switches that have springs: <S> the bigger the spring, the more likely it is to bounce. <S> Since keyboards have metal contacts that are small, and dampers below the keys, they are less likely to bounce. <S> Earlier keyboards did have springs and needed some debouncing; modern keyboards use a membrane switch that is not as 'bouncy'. 
<S> Source: <S> http://www.technologyuk.net/computing/computer-hardware/keyboard.shtml <S> Is there an IC for buttons that handles everything (or most), pull down/ups, capacitors, multiple button press, or some other feature, that just sends a very clean signal to your controller chip/computer? <S> It's typically one IC with maybe a few passives like pull-ups, something like this: <S> Source: <S> http://www.quickbuilder.co.uk/qb/libs/dataentry.htm <S> In software <S> what are best practices in registering a button press, rising/falling edge, the classic approach, interrupts? <S> The first thing you need is a cleaned-up signal from the switch, and usually to limit switching that happens faster than the clock signal. <S> There are a few ways to do this: 1) <S> On an FPGA or ASIC, use a dual-rank synchronizer to sync with the clock to prevent metastability 2) <A> Often as part of the code for that micro you have a "debounce" built in that <S> will account for this. <S> Often a reference of the number of times to count the button over a set period of time. <S> Usually in microseconds. <S> This may lead you down the right path. <S> https://www.arduino.cc/en/tutorial/debounce <A> When looking at the keyboards of electronic instruments, they have just a diode for each button, which enables the matrix to detect any combination without issues. <S> Also, with computer keyboards the keys can be arranged so that there is extra room in the matrix, so in practice <S> when using ten fingers it is impossible to hit a combination that would cause problems. <S> If only using a handful of buttons then they can be read without a matrix, like the NES and SNES controllers. <S> There has been a standard logic chip to scan a button matrix but it has long been obsolete. <S> Instruments usually just scan the matrix in software. 
<S> A microcontroller would just poll the matrix periodically, perhaps in a timer interrupt, or in the case of a PC keyboard, it has literally nothing else to do, <S> so it can just poll it all the time unless the USB peripheral interrupts when it needs attention.
Many finished products that are going to recognize a bounce will recognize it using a microcontroller of some kind. On a basic level, Arduino offers sample debounce code. On a regular microprocessor, use a Schmitt trigger with some hysteresis before connecting to an interrupt or GPIO.
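The counter-based debounce described in these answers can be sketched as a simulation (Python here purely for illustration; firmware would run the same per-sample logic in C inside a timer interrupt):

```python
class Debouncer:
    """Sampled counter debounce: the output only changes after the raw
    input has been stable at the new level for n_stable consecutive samples."""
    def __init__(self, n_stable=5, initial=0):
        self.n_stable = n_stable
        self.state = initial
        self.count = 0

    def sample(self, raw):
        if raw == self.state:
            self.count = 0          # agreement resets the counter
        else:
            self.count += 1
            if self.count >= self.n_stable:
                self.state = raw    # new level held long enough: accept it
                self.count = 0
        return self.state

db = Debouncer(n_stable=3)
# A bouncy press: 1,0,1,0 glitches, then the contact settles at 1.
samples = [1, 0, 1, 0, 1, 1, 1, 1]
outputs = [db.sample(s) for s in samples]
# outputs: [0, 0, 0, 0, 0, 0, 1, 1] -- the bounces are ignored,
# and the output flips only after three stable '1' samples.
```

The sample period times `n_stable` sets the rejection window, so pick it longer than the worst-case bounce time of the switch (a few milliseconds for most tactile switches).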
How to avoid a big mux in RTL design? When doing RTL design, a mux is often used to select the input of a block/module, for example: input [135:0] dataA, dataB; assign FIFO_DATAIN = (sl) ? dataA : dataB; myfifo xxx(.(FIFO_DATAIN) .... ); Since the input data is 136 bits wide, this may lead to timing violations and require a lot of resources after synthesis. Would it be better to use "case" or "if else"? If not, is there a better way? Thanks in advance <Q> I don't think it matters. <S> Synthesis will smash this down to 136 2:1 muxes regardless of how you describe them in HDL. <S> That's not that much in the larger scheme of things if you're building something with that large a datapath (128 data, 8 enables, right?) <S> Insert register slices if it needs help to close timing. <A> It really doesn't matter how you write the code, it will be synthesized to the same thing. <S> A 136-bit 2:1 mux is really not that bad. <S> It's really the number of inputs that dictates the complexity and causes timing issues, not so much the width, though that does place a large fanout on the select signal. <S> If that was a 2-bit 136:1 mux, then maybe you could run into issues. <S> I have a design with lots of 256-bit-wide muxes and it works just fine at 250 MHz. <S> Also, the tools could take the muxes that get inferred there and combine them with downstream logic. <S> One thing that you might want to take a look at, though, is where that mux ends up in the logic and where the select line is coming from. <S> If the select signal is the result of a large, complex operation and the mux is directly feeding a lot of complex logic, then you could run into timing issues unless you move the mux further down that logic and/or add a register on the select line. <A> Just as Hacktastical said, synthesis will implement this with 136 small muxes. <S> What I'd like to add is that the issue may be with the 'sl' signal, which will connect to 136 muxes. 
<S> If the muxes are far from each other physically, it may be difficult for the 'sl' signal to meet the timing requirements of each mux connected. <S> Each control signal then connects with fewer muxes. <A>
If there is a timing violation, you can try inserting FFs to split 'sl' into several control signals. One way to avoid the timing problems of a big wide mux is to pipeline the selection, if that meets your latency requirements.
Where can I find a list of PIC microcontrollers programmable and debuggable with a PICkit 2? Where can I find a list of PIC microcontrollers that I can program and debug using a PICkit 2 in MPLAB X? I am a hobbyist who would like to breadboard prototype a new project based on a PIC microcontroller. I'm selecting a PIC based on my needs, and would like it to be compatible with the PICkit 2 I already own. (I have previously used it with the dsPIC33 series). I can't find any way to use the Microchip parts selector to filter by ICSP/PICkit revision compatibility. I'm getting contradictory information from the datasheets. In the PICkit 2 readme , I'm told I can program and debug a PIC16F1938, but looking at the datasheet in section 32.0 I find that it is compatible with PICkit 3, with no mention of PICkit 2. As the PICkit 2 is quite an old device and not receiving updates, should I assume that the readme contains the final list of devices supported by its most recent firmware? And that datasheets for any devices I find will refer to the most recent PICkit available at the time of writing? The readme also contains the following warning ================================================================== NOTE: This list shows support for the PICkit 2 Programmer == software application. It does not show support for using the == PICkit 2 within MPLAB IDE. For a list of MPLAB supported == parts, see the MPLAB IDE PICkit 2 Readme. == (Typically in C:\Program Files\Microchip\MPLAB IDE\Readmes) ================================================================== which I take to mean that I might be able to program certain devices, but not debug them. Is there a version of the 'MPLAB IDE PICkit 2 Readme' online? <Q> The devices supported depend on which device file is installed. <S> The device file currently in use is shown in Help/About. <S> As far as I know the last official device file version was 1.62.14 . 
<S> Unfortunately the page that listed which chips it supported has disappeared. <S> The Wayback Machine has an archived copy which does not list the PIC16F1938. <S> That's not the end for PICkit2 though, because an unofficial editor has been developed for adding new chips to the device list. <S> I am using version 1.63.148 by GBert which does support the PIC16F1938. <S> Debugging is done through MPLab, which doesn't support newer devices with PICkit2. <S> Most older PIC16's do not have debugging support anyway, so I have never attempted to use it. <S> I check my code in the simulator first, then use generic real-time debugging techniques such as toggling pins and printing messages to the serial port. <A> Microchip considers the PICkit2 to be obsolete and the PICkit3 is not recommended for new designs. <S> That said, many developers really like the PICkit2 because the USB interface uses a generic HID mode device class supported by almost every USB host implementation. <S> There is a project to expand support of the PICkit2 to newer controllers. <S> See this link on the Microchip forum for information. <S> There are other topics on the forum that describe how to edit the device support file of the Microchip GUI for Windows. <S> EDIT to improve this answer. <S> Arsmith asked: Is there a version of the 'MPLAB IDE PICkit 2 Readme' online? <S> I can find no direct link to the "Device Support.htm" file on the Microchip web site that lists devices supported by the PICkit2. <S> The latest version of any Microchip IDE that supported the PICkit2 is MPLABX v3.65; you can download all of the README files for this release at this link . <S> In this ZIP file you can find the "Device Support.htm" that still has the PK2D and PK2P columns. <A> I guess this is the list <S> you are looking for. <S> But, as a lot of people mentioned: PICKit2 is a really old tool and there are so many nice and really cheap new ones. <S> e.g. MPLAB Snap
The best way to determine which chips a particular device file supports is to install it and run the PICkit2 Programmer application, then set 'manual device select' and browse the list of devices for each family.
0-10V on RJ45 jack - bad practice? I'm developing a control unit for ventilation systems. Besides switching those fans with a relay, they are often controlled with a 0-10V analog signal. Now I found a product that uses an RJ45 plug to supply a 0-10V analog signal. As the controller's going to have network features as well, chances are that a user will mix up the fan-dimming RJ45 plug with an Ethernet plug. So I wonder: might connecting this 0-10V output to a network harm a router/other network devices? Isn't it generally bad practice in electronics to (ab)use a plug that belongs to a different standard for this purpose? I'd like to support this interface only if it is safe when accidentally connected to a network. The 4 lines involved are: 10V power supply for internal circuit 0-10V dimming signal ground tachometer signal Remaining lines are NC. Pin Assignment Fan's Circuitry <Q> A general rule of thumb is that 0-10V signals stay inside the electrical cabinet, and only 4-20mA signals leave the cabinet. <S> This ensures signal integrity, since a low impedance current loop is more robust, and you know when there is no recipient anymore due to cable loss or unplugging. <S> The minimum of 4 mA is then no longer reached. <S> Although it seems you are stuck with a 0-10V input. <S> This is probably so users can wire up a potentiometer to easily control the speed. <S> Power over Ethernet is designed to be compatible. <S> This is due to Ethernet using transformers, and PoE only using multiple pairs to transfer power; if you stay within a pair you'll receive only the Ethernet signals. <S> But you need 4 wires, which is two pairs. <S> Meaning you could receive PoE and fry it. <S> If you are afraid of users mistaking the 8P8C signal jack for Ethernet then maybe don't use 8P8C at all. <S> Or at least not for anything that isn't Ethernet or other balanced differential signalling. <S> Instead use RJ12 (6P6C) or DB9, or a plain terminal block.
<S> You can get DB9 connectors with terminal blocks, <S> so it is still possible to wire them to a standard UTP cable via the terminal block in the connector. <S> For example Phoenix Contact SUBCON-PLUS-M/AX 9 - 2904467. <S> Though expensive, I'm sure there are cheaper brands available locally. <A> It could be safe, but only if the 0-10V signal has a sufficiently high source impedance (which is probably not the case). <S> The danger is that depending on how the connector is wired, the 10V could be applied differentially across an Ethernet signal pair. <S> A commenter mentioned "Power over Ethernet", but that's a special case, in which the power is only applied common-mode to the signal pairs. <S> As a side note, Ethernet over twisted pair (XXX-base-T) is already "misappropriating" a connector originally designed for telephony. <S> Fortunately, the two applications are electrically compatible. <A> You could make your system compatible with (not damaging to) ordinary networking equipment using the same method as used in power over Ethernet (PoE): <S> ( image source ) <S> You'd apply the 0-10 V signal to both lines of one signalling pair, and a reference ground to both lines of another pair. <S> Then the transformer at the receiving end would block the 10 V from reaching the low-voltage circuits of the networking equipment. <S> But your circuit could pick off the 0-10 V signal from the center taps of the primary sides of the isolation transformers and use it as you wish. <S> You'd probably want to design your equipment not to be damaged if 48 V were applied where you expect 0-10 V, in case somebody connects a PoE source to your circuit.
If the current isn't limited by the source impedance, this could easily burn out the Ethernet transformer winding.
How far can 3.3V UART go at 38400 baud? I'm developing a set of boards that will all be listening to a single 3.3V 38400 baud UART (single TX device, my boards all listen to this line, no "RX" line). They are designed such that there are two RJ12 jacks per board, and the signal and ground is just passed between the RJ12, with the signal going to the uC on each board. The idea is to daisy chain up to max 5 of these boards, with less than a foot of cable between boards. Here's what the signal looks like after 1x 6" cable, a board, and another 6" cable. How far can I reasonably expect this setup to work? I'm considering scrapping the whole thing and using RS-422 or RS-485 drivers, but that feels silly when these boards will have a 6" cable between them. On a broader scale, how do you predict usable cable length given a baud rate and voltage level? <Q> To find the distance a signal can go, the physical layer of communications needs to be specified. <S> RS-422 <S> & RS-485 define a physical layer that can be "looked up". <S> Descriptions such as UART do not define a physical layer and so are difficult to "look up" or comment on. <S> The cable used for long communications connections can play an important part in delivering a strong signal while mitigating the effects of electrical noise. <S> Registered Jacks sometimes use twisted pair cables to accomplish this. <S> But specifying an RJ12 Registered Jack does not ensure this type of cable is used. <S> There are several other concerns when dealing with communications over cables. <S> Ground Loops being among them. <S> In extreme cases <S> optical isolators are used to mitigate the effects of Ground Loops. <S> For this particular case, 0 to 3.3 V swing at 38400 baud over a 6 inch cable, it can only be said that it is a bit surprising that it does not work. <S> Consider whether the wiring is faulty or the signal is logically inverted.
<S> Then consider that the protocols at both ends do not match (such as parity, number of data bits & stop bit length). <S> Finally consider that the speed is too great for the hardware or software to handle. <S> On a broader scale, how do you predict usable cable length given a baud rate and voltage level? <S> It would be difficult. <S> Many other factors affect the signal such as line, source and termination impedance. <S> Inspecting the signal with an oscilloscope at the source and destination may give some insight. <A> The limits cited for RS-232 are typically 50 feet or a max cable capacitance of 2500pF for rates up to 20kbit/s. <S> Longer distances or higher rates are possible with low-capacitance cables. <S> Now, with the 3.3V TTY link you're proposing? <S> Not so much. <S> The noise margin of such an approach is suspect at best; you have multiple boards to deal with so ground loop issues can be a problem. <S> If you insist, you should consider rebuffering the signal. <S> Your intuition to use RS-485 is a good one. <S> It would solve those problems and help future-proof your design. <S> RS-485 avoids a number of issues that RS-232 has, such as noise immunity and more limited speed at distance. <S> Since you're already using RJ12 you have enough pins, so might as well go with that. <S> And it can share the 3.3V supply with the rest of your logic. <S> Oh, and you can choose RS-485 transceivers that don't load the line down if the module power is off. <S> This isn't so straightforward with logic-level interfacing, though it can be done. <S> Going Further: <S> There is a popular protocol used for theatrical lighting called DMX512 that functions exactly the same as what you're building. <S> It's a multi-drop serial line, and each client is addressable. <S> DMX512 uses RS-485, with up to 64 clients per loop and up to 400m in length running at 250 kbit/s.
<S> More about DMX512 here: <S> https://learn.sparkfun.com/tutorials/introduction-to-dmx/all <S> At the risk of killing your enthusiasm, are you possibly reinventing the wheel with your MIDI-to-solenoid idea? <S> MIDI to DMX: https://www.instructables.com/id/MIDI2DMX/ <S> DMX solenoid driver: https://www.amazon.com/Switch-Dmx512-Controller-Output-Control/dp/B00S9KABRA/ref=asc_df_B00S9KABRA/ <A> Electrical signals travel about 1 foot per nanosecond (give or take). <S> A 40,000 baud system has 25,000 nanoseconds per baud, or about 5 miles of travel per bit. <S> At a twisted-pair capacitance (SWAG) of 30 picofarads/foot, what do the bus drivers need to handle? <S> The twisted-pair impedance of about 100 ohms, or 50 milliamps for 5 volt bus levels. <S> If you get reflections, then all bets are off. <S> To avoid problems with reflections, have slow rise/fall times, perhaps 10% of the baud time, and let the reflected energy get absorbed during the edge times. <S> That limits you to 0.5 miles.
With actual RS-232 buffering a simple daisychain connection should work just fine at that baud rate.
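The back-of-the-envelope propagation figures quoted in the last answer (roughly 1 ft/ns, about 5 miles per bit at ~40 kbaud, a tenth of that with slow edges) can be sanity-checked with a few lines of Python. The 1 ft/ns figure and the 10% edge-time rule are taken from that answer, not from any standard:

```python
# Sanity check of the propagation arithmetic: signals travel roughly
# 1 foot per nanosecond in cable (about 2/3 the speed of light in vacuum).
baud = 38400                         # bits per second
bit_time_ns = 1e9 / baud             # one bit period in nanoseconds
distance_ft = bit_time_ns * 1.0      # feet travelled in one bit time at 1 ft/ns
distance_miles = distance_ft / 5280

print(f"bit time: {bit_time_ns:.0f} ns")
print(f"one bit occupies about {distance_miles:.1f} miles of cable")

# Allowing ~10% of the bit time for rise/fall (so reflections get absorbed
# during the edges) shrinks the usable length by about a factor of ten.
usable_miles = distance_miles / 10
print(f"usable length with slow edges: ~{usable_miles:.2f} miles")
```

Of course this is only the propagation-delay bound; cable capacitance, attenuation and noise limit real unbuffered links far sooner, which is why the answers above push toward RS-485.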
Can sampling rate be a floating point number? Suppose we have a sampling frequency for a signal of 15.5 samples/sec and we take samples for a period of 7 seconds. This means total samples are 108.5, does this make any sense? Shouldn't the number of samples taken be an integer like 108 or 109? Or can the particular points in time from 0 second to 7 seconds on which to take the samples be determined in this case? How would one do that? <Q> Forget sampling rate for a few seconds... <S> Think about sampling period for a second, which is the time interval between two consecutive samples. <S> This time can be an integer or any real number (as long as it's positive, of course). <S> Sampling rate is simply the inverse of sampling period. <S> Does it make more sense this way? <A> Yes, the sampling rate can be any number you want. <S> But you obviously would not get partial samples in the end, you just have to round down. <S> In your example the first sample is taken at \$\frac{1}{15.5}\$ s = 64.5 ms <S> and then at every multiple of that. <S> This means you get your last sample at 6.966 s. <S> That is the 108th sample. <S> So at 7 s you still have taken only 108 samples. <S> And then at 7.0305 s you get the next sample. <S> You can imagine the samples being taken in a way like this Dirac comb: <S> If you stop sampling between 3T and 4T you do not have partial samples. <S> You just round down. <S> T is the inverse of the sample frequency, or in your case 64.5 ms. <A> Some things are always an integer. <S> Sample counts are always integers. <S> You can take 108 or 109 samples. <S> Sample rate can be a floating point number, or more generally <S> a rational, or even a real. <S> You calculate the sample rate by dividing the number of samples (less one to get the number of periods between samples) by the time it takes to obtain those samples. <S> Generally a floating point number is an approximation to the real number you want.
<S> With double precision, it's a very good approximation, but it's usually inexact. <S> It might be in error a small amount, due to the approximation of floating point representation. <S> It might be in error a lot, because the source of your information chose very approximate numbers, or even made up the numbers to start with. <A> This means total samples are 108.5, does this make any sense? <S> Only in a limited sense. <S> Since your sample interval of 7 seconds is not an integer multiple of the sampling period (1/15.5 Hz = 0.064516... s), it means that any arbitrary 7-second interval will contain either 108 samples or 109 samples, and the average across all possible 7-second intervals will be 108.5 samples. <S> If you take a series of contiguous 7-second intervals, you'll find that the sample counts alternate between 108 and 109, again resulting in an average of 108.5.
If you're given a sample rate, and a time, the product might be an exact integer, if the numbers are chosen carefully, but it probably won't be.
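The 108/109 alternation described in the last answer is easy to check numerically; this sketch assumes samples are taken at integer multiples of the sampling period starting at t = 0:

```python
import math

fs = 15.5    # samples per second
T = 7.0      # observation window in seconds

# Sample instants are at n/fs; count how many fall in each consecutive
# 7-second window [k*T, (k+1)*T).
counts = []
for k in range(10):
    start, stop = k * T, (k + 1) * T
    first = math.ceil(start * fs)          # first index with n/fs >= start
    last = math.floor(stop * fs - 1e-9)    # last index with n/fs < stop
    counts.append(last - first + 1)

print(counts)                     # alternates between 109 and 108
print(sum(counts) / len(counts))  # average is exactly 108.5
```

So "108.5 samples" is meaningful only as an average over many windows; any single 7-second window contains a whole number of samples.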
Can we connect Earth, Neutral and Digital Ground together in an electronics system? Our project has Digital Ground, Earth and Neutral. As per the requirement, Digital Ground and Earth ground are tied together through a bead inductor. Is it advisable to connect Neutral with Chassis and Digital Ground? simulate this circuit – Schematic created using CircuitLab <Q> No, do not connect neutral to earth or digital ground. <S> In some parts of the world the mains plugs are not polarized <S> so there is a 50% chance it ends up connecting live to earth or digital ground. <A> It may depend where you are, but in the UK neutral and earth must not be connected together downstream of the supplier's intake. <S> Joining N-E would trip any Residual Current Device (USA: GFCI) <S> A surge protector circuit might be connected across L and E <S> but this should not pass appreciable current under normal circumstances. <S> The amount of residual current permitted to be passed, and the effect of EMC filters on leakage, is discussed here and the effect of common mode noise currents on RCDs here <S> The reason why N-E would trip any RCD is that the current passing through L and N must be equal. <S> If there is a fault (or link between N and E) <S> the full current will flow through the RCD L, but some will return through RCD N and some through E. Therefore L and N will not balance, and the RCD will trip. <S> This diagram from <S> DIYnot <S> illustrates: <A> In a general way, the earth is connected to a metal plate buried near the point of usage (in your garden for example). <S> It is used to protect you from electrocution by connecting all exposed metal parts of your device to the earth; this way, during an electrical short-circuit the current will "prefer" to return to the earth through the cable rather than through you. <S> For example, the metal casing of a washing machine is connected to the earth.
<S> The Neutral is connected to a metal plate buried in the Earth at the transmitting station; it provides a reference for the line supply (by analogy with an Arduino board: the line would be like the +5V pin and the neutral would be the GND pin). <S> The earth wire is not supposed to carry current in normal conditions; otherwise, it may develop a voltage across it and become hazardous. <S> The digital ground provides a reference for the voltage of digital logic. <S> Now, the same way you should separate the Neutral and the Earth wires, you should separate the digital ground and the Earth wires for the same reasons. <S> The Neutral and the digital ground might be separated depending on your application. <S> The main advantage is to isolate different applications. <S> For example, the neutral could be used for analog purposes while the digital ground is reserved for digital purposes.
In a general way, the Neutral and the Earth should be separate .
Understanding this peak detector circuit I think the circuit attached below is a peak detector because of the diode, resistor and capacitor at the op-amp's output. What I don't understand is the diodes on the input side. The input signal swings positive and negative. In the positive cycle, the signal goes through D2; what's the purpose of R2? In the negative cycle the signal goes through D1 and R1. R3, C1, and R1 form a low-pass filter network, right? Correct me if I'm wrong. I would appreciate any additional detail as to the operation of this circuit. <Q> Assuming ideal diodes, for \$V_{in} \lt 0\$ <S> this is an inverting op amp with gain \$\frac{R_3}{R_1}\$ , and \$R_2\$ <S> keeps the non inverting input from floating. <S> For \$V_{in} \gt 0\$ , this is a buffer. <S> The circuit probably makes the most "sense" if \$R_3=R_1\$ , but <S> if \$R_3 \ll R_1\$ , then you are getting something akin to half wave rectification instead of full wave. <S> You are correct about the low pass filter, but since you are low-pass filtering a rectified signal, this is more like an rms-filter or envelope detector. <S> I wouldn't call it a "peak detector", as those usually store the peak value on a cap with no discharge path. <S> Here, the cap discharges through a resistor, so the voltage is not stored. <A> The opamp circuit is a full wave rectifier with some level of noise rejection when the input peak amplitude starts to fall below about 0.6 volts. <S> Positive voltages are amplified by D2 (D1 is blocking) and negative voltages are amplified and inverted via D1 (D2 is now blocking). <S> R2 is needed to bias the non-inverting opamp input when the device is inverting <S> i.e. D2 is blocked. <S> R1 and R3 should be identical values. <S> The low pass filtering effect of C1 affects positive and negative input voltages differently and is probably incidental to the whole circuit operating as a full wave rectifier.
<S> Because it is a full wave rectifier, the envelope detector is fed with twice as many carrier cycles per second and hence it can deliver better performance than when the raw input signal is fed directly to it. <A> They face different directions so that each compensates for the forward drop of the output diode.
The diodes on the input are intended to compensate for the diode on the output.
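A minimal numeric sketch of the transfer function the first answer describes, assuming ideal diodes and an ideal op-amp; the resistor values are hypothetical:

```python
import math

def rectifier_out(vin, r1=10e3, r3=10e3):
    """Idealized output of the op-amp stage described above: a unity-gain
    buffer for positive inputs (D2 conducting) and an inverting amplifier
    with gain R3/R1 for negative inputs (D1 conducting). Diode drops and
    op-amp supply limits are ignored; resistor values are made up."""
    if vin >= 0:
        return vin
    return -vin * (r3 / r1)

# With R3 == R1 the stage behaves as a full-wave rectifier: Vout = |Vin|.
samples = [math.sin(2 * math.pi * t / 20) for t in range(40)]
rectified = [rectifier_out(v) for v in samples]
print(all(abs(r - abs(v)) < 1e-12 for r, v in zip(rectified, samples)))  # True
```

With R3 much smaller than R1, the negative half-cycles are attenuated and the output approaches half-wave rectification, as the answer notes.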
Is it possible to connect one electrical device to two electric sockets? Let's assume I have an electrical device like a TV set connected to a power outlet and it is running. I want to switch the electrical connection to another socket without interrupting the TV show I am watching. Could I clamp a second power cord onto the wires in the first one, plug the second cord into the second socket, and then pull the first plug? What happens, if anything, if a device is plugged into two electrical outlet sockets at the same time? Does it make a difference for AC versus DC? Do the phases of AC have to be in tune? Does the voltage change (e.g. add up)? <Q> Best case, a loop. <S> Worst case, short circuit. <S> The simple way is with a suicide/widowmaker cable. <S> The other way is with a hotplug field kit. <S> Often used by forensic teams to confiscate computers and servers without turning them off. <S> Which is a really unique bit of kit. <S> The correct way is to get a UPS for the TV and just swap the UPS plug. <A> It's one of those 'just because you can, doesn't mean you should' things. <S> File it under 'Darwin Award Candidate', cross referenced to 'Hold My Beer and Watch This'. <S> If - and this is a very big if - you get it wrong? <S> Worst case you fry your cord, start a fire or some other calamity. <S> Makes me think of this: <S> Is it too much to ask to get a multi-room DVR instead? <A> If you are doing this with sockets connected to the same power lines and you connect live to live and neutral to neutral then there is nothing which can theoretically misbehave.
If you had the hot/neutral phases right, and you were connecting to the same hot phase, it would be possible to switch over using this method. But if you flipped the wires, a good short circuit awaits you. Best case scenario is you pop a breaker. After all, you are effectively wiring two identical wires in parallel.
How to determine the decoupling capacitor values for the power bus of an RF device? While trying to implement an IoT solution using a quad-band mobile network module supporting GSM, UMTS and LTE (2/3/4G), I was advised to add all the following capacitor values to the module's power bus. I have read several EE-SE answers on the subject and now understand that they are used for decoupling and evening out both LF and HF noise variations on the power bus. Some relevant posts include: What's the purpose of two capacitors in parallel? Decoupling capacitors: what size and how many? Multiple identical parallel capacitors How to know exact Decoupling Capacitor values for supply voltages? how can i calculate decoupling capacitor value? How to choose capacitor for an IC Calculating the value of bypass capacitors for an amplifier Is there a formula to determine the size of decoupling capacitors? To summarize briefly: Large capacitors handle low frequency noise and output load changes. Small capacitors handle noise and fast transients. Parallel capacitors result in a lower Equivalent Series Resistance (ESR) than a single capacitor of larger value. LF capacitors (with higher ESR) have good performance in a wider range of frequency. Using multiple capacitors would not only reduce the heat generated (by ESR), but would also help spread the heat. A decoupling capacitor is not only chosen by its Capacitance, but also by its ESR ( Equivalent Series Resistance ) and its ESL ( Equivalent Series Inductance ). Q: (a) How are these values determined? Q: (b) Why do small capacitors handle transient noise better than larger ones? <Q> You are somewhat misguided about the purpose of decoupling. <S> It is not mainly the noise that is problematic. <S> ICs, especially digital ones due to the harmonics of the current pulses that they draw from the power supply, need a low impedance power source.
<S> This is because the MOSFETs inside the ICs draw virtually all of their current when they are switching and they are all switching at the same time. <S> The problem is that if you remove the capacitors, the impedance between the IC and the power supply will be too large. <S> Hence a voltage drop will develop and the voltage at the power pins will move outside the allowable range. <S> Another problem is that the current will not be able to rise fast enough and the IC will be starved. <S> So how do you determine what capacitors to use? <S> For low frequency applications you can just use some rules of thumb, but for high frequencies you need to be more careful. <S> Firstly, an impedance profile is created for the IC. <S> This is determined by looking at the allowable voltage range in the specification. <S> You can assume that a digital IC, for example, draws its current in the form of pulses that are around 10% of the period. <S> You can then take the Fourier transform of the pulse (you get a sinc) and calculate the required power supply impedance. <S> Secondly, S-parameter files are either downloaded from the capacitor manufacturer or are created by testing various capacitors. <S> The impedance profiles of capacitors look like this: source <S> Finally, FEM simulation is done in something like ANSYS SIwave for power integrity. <S> Different configurations are attempted by first placing virtual capacitors on the imported PCB from your CAD package and then editing the PCB in your CAD package and re-importing it into SIwave. <S> This is done until you satisfy the target impedance profile and suppress all the resonances (the planes can form resonant cavities) in the power planes. <S> This is how it's supposed to be done anyway. <S> Not everyone does it this way, but if you follow this procedure, you can make sure that your board works and passes compliance testing on the first try.
<A> Selecting decoupling capacitors isn't an "exact science"; the exact values do not matter. <S> Small value capacitors need fewer compromises to fit an amount of capacitance in a small space, so in a small value capacitor we could for example use thicker conductive plates so that the series resistance becomes smaller. <S> Simply put: small value capacitors can be made "more ideal" with fewer compromises due to size restraints. <A> There is the brute force method and the exact science method. <S> The brute force method usually relies on experience of which caps have a maximally low ESR over 1 or 2 decades of frequency, or a high SRF, so values are spread this far apart over the spectrum of interest using an impedance ( \$Z\$ ) map vs \$f\$ . <S> For a more exact design, you need to know how to test a prototype for ingress and egress to know how much ripple spectrum suppression is needed, <S> the sensitivity of the conducted voltage noise spectrum to performance error (phase noise, jitter etc) in [ \$dBmV\$ vs \$f\$ ], <S> and the load regulated error noise voltage spectrum by the same method or using a dV/dt method. <S> Otherwise you can analyze load impedance and sensitivity to error, then design/choose caps with source ESR and \$\tau= R_{ESR}*C\$ of each to be much lower in [dB ohms] <S> than source R or load R changes. <S> This means a solution with a high impedance load, or a short low resistance load with a short spike current, can be solved by dV/dt + I*ESR and possibly choosing a large source impedance like xx Ohms to supply power with a smaller low ESR cap. <S> I would approach this with Bode plots for each part, or use S-parms from the Mfg., or use real RLC simulation for each cap, supply and load, then perform startup and transient tests using a circuit simulator (for example in Falstad ), where there are filter design Bode plots and electronics DSO time domain simulation.
Many engineers just use capacitors which they already use elsewhere in their design (to keep the BOM shorter).
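The impedance-vs-frequency behaviour discussed above (capacitive below the self-resonant frequency, inductive above it, bottoming out at the ESR, and halved by paralleling identical parts) can be sketched with a simple series R-L-C model of a real capacitor. The component values below are illustrative only, not from any datasheet:

```python
import math

def cap_impedance(f, c, esr, esl):
    """|Z| of a simple series R-L-C model of a real capacitor."""
    w = 2 * math.pi * f
    x = w * esl - 1 / (w * c)   # net reactance: inductive minus capacitive
    return math.hypot(esr, x)

# Hypothetical 100 nF MLCC: ESR 20 mOhm, ESL 1 nH (illustrative values only).
c, esr, esl = 100e-9, 0.02, 1e-9
f_srf = 1 / (2 * math.pi * math.sqrt(esl * c))   # self-resonant frequency
print(f"SRF ~ {f_srf / 1e6:.1f} MHz")

# Below the SRF the part looks capacitive, above it inductive;
# at the SRF the impedance bottoms out at the ESR.
for f in (1e6, f_srf, 100e6):
    print(f"{f / 1e6:8.1f} MHz -> {cap_impedance(f, c, esr, esl) * 1000:.1f} mOhm")

# Two identical caps in parallel halve both ESR and ESL: same SRF, half |Z|.
z1 = cap_impedance(10e6, c, esr, esl)
z2 = cap_impedance(10e6, 2 * c, esr / 2, esl / 2)
print(z2 / z1)   # 0.5
```

This is only the single-component view; as the first answer explains, board-level resonances and plane effects still need a field-solver tool to capture.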
Why does the cable resistance jump from a low value to a high value at a particular frequency? I am not well versed in transmission line theory, so if you can redirect me to relevant material I'd be grateful. I used an Agilent 4294A to find the resistance of a 2 metre long shielded twisted pair cable (BELDEN 3105A E34972 1PR22 SHIELDED), and the resistance across frequency looked something like this, with a discontinuity at 5 MHz. At 4.99 MHz it was about 2.04 Ohms and 23.5 Ohms at 5.01 MHz. This trend was there in the impedance as well. I feel I'm missing something fundamental here. <Q> Your tooling seems to be the cause there, not the cable. <S> From https://www.keysight.com/main/editorial.jspx?cc=US&lc=eng&ckey=1428419&nid=-32775.536879654&id=1428419 <S> The 4294A extends its measurement frequency range up to 110 MHz by terminating each measurement terminal with 50 ohm in order to eliminate the resonance of test leads (including leads inside the 4294A). <S> The measurement discontinuity is caused by the change in termination impedance at 15 MHz when the ADAPTER is set to NONE or at 5 MHz when it is set to 1m or 2m. <S> The measurement discontinuity can be removed by performing LOAD compensation. <A> Something as simple as a cable does not have discontinuities like that. <S> There may be a clue in the fact the problem occurs at a nice round number, 5MHz. <S> Is this a place where your test set changes ranges? <S> Maybe it changes output amplifier, or filter, and one of them is broken or damaged. <S> The fact that you've quoted measurements at 4.99MHz and 5.01MHz without listing them hints that you have more data hidden that might throw light on what's going on. <S> Listing spot measurements at a few selected frequencies is fine when everything is behaving itself, but not when you're hunting for an anomaly. <S> The detail of the response adjacent to 5MHz will be very valuable.
<S> Please edit your question with a plot of all the data you have taken, which may allow us to make better guesses. <S> A connection schematic to show exactly how the cable is connected to the analyser would be useful as well. <A> Consider the cable (I assume coax) as a string of small inductors with capacitors at the junction of each pair of inductors to ground (the shield). <S> At low frequencies the inductors act as they would with near DC signals (a wire) and the capacitors would be near opens at the near DC signals. <S> As the frequency goes up the inductors have more reactance and the capacitors have lower impedance, eventually forming effectively a series of LC filter poles. <S> At some frequency the combined filter characteristics will become pronounced, especially with an unterminated (50-75 Ohms) line. <S> Add the correct termination resistance and things should look a lot better behaved. <S> Most coax cables do have an upper limit of usefulness due to the inter electrode capacitance. <A> The effect you have observed has nothing to do with transmission lines. <S> You need to consider 'skin effect'. <S> You'll find it in any good RF textbook, such as Terman, Radio Engineering. <S> The higher the frequency, the smaller the skin's cross-sectional area, and hence, the higher the resistance. <S> To a first approximation, the current carrying area is inversely proportional to the square root of the frequency. <S> This explanation covers your first 6 data points, but the 7th is more likely to be a resonance effect related to your measurement technique. <S> It would also help to identify your units of frequency.
Basically, as the frequency increases, the main current flow moves further from the conductor's centre, i.e. the current flows in the "skin" of the conductor.
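The sqrt(f) dependence described in the skin-effect answer follows from the standard skin-depth formula; here is a quick sketch using textbook values for copper:

```python
import math

def skin_depth(f, rho=1.68e-8, mu_r=1.0):
    """Skin depth in metres at frequency f (Hz).
    Defaults are textbook values for copper (resistivity 1.68e-8 Ohm*m,
    non-magnetic, so relative permeability 1)."""
    mu0 = 4 * math.pi * 1e-7
    return math.sqrt(rho / (math.pi * f * mu_r * mu0))

for f in (1e3, 1e5, 1e7):
    print(f"{f:>10.0f} Hz: skin depth {skin_depth(f) * 1e6:6.1f} um")

# Once the skin depth is much smaller than the wire radius, the conducting
# cross-section shrinks as 1/sqrt(f), so AC resistance grows as sqrt(f):
ratio = skin_depth(1e6) / skin_depth(4e6)
print(ratio)   # 2.0 -> quadrupling the frequency doubles the resistance
```

Note that skin effect gives a gradual sqrt(f) rise, which is exactly why it cannot explain the abrupt 10x jump at 5 MHz: that, as the accepted answer says, is the analyzer changing its termination.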
Contact Closure over distance I have a contact (pushbutton) that's about 10 metres away from a Raspberry Pi. However, it behaves quite erratically and triggers when, for example, an exhaust fan is started in proximity to the wire. I believe AC electromagnetic interference causes it to trigger. What I've tried: Software fix #1 Whenever an active signal is received, I wait 100 ms and check again. If it's active twice, it's verified. This actually 'fixes' the problem, though I would like to know how to make the hardware layer more resistant against EMI. Soft/hardware fix #2 Enabled the internal pull-up of the Raspberry Pi. In this way it has a predefined state and thus has less interference (though still notable amounts). Hardware fix #1 Use an external pull-up resistor. I've tried 5K (not quite good enough) and 1K (seems about right), though various sources discourage low resistance as it obviously causes higher current. Hardware fix #2 Use a Cat5e foiled network cable instead of regular power cable. This didn't quite seem to work (its foil wasn't really connected to anything); should it be connected to GND? (Or is shield not GND?) The resistance of the network cable could be higher than that of electrical cable, since it has less copper? What I'm searching for: Actual proper ways of implementing such systems, possibly for up to 50 m. I'm thinking of putting 24 V on the wire with the button and an opto-isolator or relay on the Raspberry Pi's side. Though this would require quite some extra components and I don't understand why it would be required at ~10 m. Theory and best practices? How can we calculate the required amount of voltage? Should I use a shielded cable (how to connect the shield?) or a cable with low resistance? <Q> As Neil_ok says: The bomb-proof way, up to many km distances, is a current-mode link, into an opto-isolator. <S> A simple solution is to make it difficult for external sources to put energy in your system. <S> A very low input impedance helps with this. <S> e.g.
you can put a 220 ohm resistor from the input to ground, fed from 3.3 V through a small series resistor. <S> Let's say a series resistor of about 82 ohm, which gives 220/(220+82)*3.3 ≈ 2.4 volts at the input when the switch is open. <S> You can add Neil's capacitor too. <S> The disadvantage is that this continuously uses 3.3/(220+82) = <S> ~10mA. <S> When the switch is pressed, the 220 ohm resistor is shorted and it uses 3.3/82 = ~ <S> 40mA. <A> It just reduces the energy present in transients substantially. <S> Any common-mode voltage induced in the wire now has little effect on the uP side, as the two wires are free to be at any voltage (within sensible limits). <S> Also, it is good practice to use twisted-pair cable for this, which helps ensure that common-mode signals are of equal magnitude. <S> When the connection is direct, induced pulses are directly connected to the uP signal port and GND, which means all kinds of transients can get into your circuit. <S> You should pull the diode side of your opto (which is connected to your switch closure via a resistor) up to your system's unregulated DC (NOT the Vcc for the uP) or any other voltage source available - this also helps isolation. <S> You can even use a totally separate V+/GND pair if available. <S> (I realise a diagram would be helpful here - I don't have time right now.) <A> Make sure you need a bit of current to trigger the input, not only voltage. <S> If you only use voltage, into a high-impedance input like 10 kOhm, then you will get erratic detections from nearby relays or parallel AC wires <S> and you will have to resort to dirty software methods, if that works at all. <S> So, hardware fix #3: <S> IEC 61131-2 requires <S> at least 2 mA to consider it a logic high. <S> If the environment is especially noisy, use the Type 2 limits with at least 6 mA. <S> This can easily be achieved with a voltage divider into a Schmitt trigger. <S> Add an optional RC to slow down detection. <S> See here for an example implementation.
But in order to make the input high when the switch is open, you now need a much lower pull-up resistor too. As soon as you have longish wires (I'd say 1 m or more, but it depends on the environment) connected to a uP port, isolation (usually with an opto) is advisable, along with some filter components (Rs, Cs, clamp diodes).
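Software fix #1 above (sample, wait, sample again) generalizes to polling the input several times and only accepting a state that is stable across the whole window. Here is a minimal hardware-independent sketch of that idea; `read_pin` is a hypothetical stand-in for a real GPIO read, and the sample count and interval are arbitrary example values:

```python
import time

def debounced_pressed(read_pin, samples=3, interval_s=0.01):
    """Accept an (active-low) button press only if the input reads low
    on every one of `samples` reads spaced `interval_s` apart.  An EMI
    glitch is far shorter than the whole window, so it is rejected."""
    for _ in range(samples):
        if read_pin() != 0:  # any high reading means no stable press
            return False
        time.sleep(interval_s)
    return True
```

On a Raspberry Pi, `read_pin` could be something like `lambda: GPIO.input(17)`. The hardware fixes discussed in the answers (low input impedance, opto-isolation) still matter, since software filtering only masks the symptom.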
What does "DC Current VCC and GND Pins" in the ATmega8 datasheet mean? I'm using an ATmega8-16PU for a DIY project. While reading the datasheet (Datasheet Revision 2486AA–AVR–02/2013) I encountered, in the section "Electrical Characteristics – TA = -40°C to 85°C", a parameter called "DC Current VCC and GND Pins". The value of this parameter is 300mA. I looked on the Internet for a meaningful interpretation of this parameter; what I found, however, was a lot of confusion on the topic. Here are three possible interpretations of this parameter - can you please tell me which one is right? 1. 300mA is the total current into all VCC and out of all GND pins. 2. 300mA is the current into all VCC pins and 300mA out of all GND pins. 3. 300mA is the current into each VCC pin and 300mA is the current out of each GND pin. The most reasonable explanation, supporting the last interpretation, I found under the following link: Allowed current thru AVR devices. Depending on which interpretation is the right one, I could e.g. change the package type to TQFP in order to increase my current budget. Please note that I don't want to exceed any Absolute Maximum Ratings; what I want, however, is to try exceeding the test conditions in the datasheet. <Q> To remove further confusion. <S> There is only one Vcc pin. <S> AVcc is a different power domain, only for the ADC, clock system and PORTC. <S> Read the notes below the table. <S> This is the right answer: 300mA is the total current into all VCC and out of all GND pins. <S> See also note 3.1 <S> The sum of all IOL, for all ports, should not exceed 300mA. and <S> The sum of all IOL, for ports C0 - C5, should not exceed 100mA. <S> Which is for AVcc, a special power domain for the analog part, not a normal Vcc. <S> The reason for these limits is the resistance in the leadframe, bondwires and metal layers on the chip itself. <S> A high voltage over this resistance has a negative effect on the capabilities of other pins.
<S> The voltage levels (VOL/VOH) and thresholds (VIH/VIL) may shift, possibly outside of the specification. <S> It may also add more heat than the package can handle. <A> Edit: I read the link you posted, https://forum.arduino.cc/index.php?topic=161354.0 , and it does contradict what I say here. <S> It states that the limit is per pin. <S> It appears to be a quote from an official support person, but the answer is so different from what I would have assumed that I would personally verify with their support again if I were going to rely on it. <S> If you add the current going into all of the VCC pins, it must be less than 300 mA. <S> Also, if you add the current coming out of all of the GND pins, it must be less than 300 mA. <S> (I don't understand the difference between your first and second bullet points.) <S> Also, be aware that these are absolute maximum ratings, and, as the datasheet says, "functional operation of the device at these or other conditions beyond those indicated in the operational sections of this specification is not implied." <S> That means that even if you are running less than 300 mA, if you are violating some other parameter in the later tables, the part might be totally non-functional during that time, and possibly remain so until the power to the part is turned off for a while. <S> All that it means is that the part will not be immediately and irreversibly destroyed. <S> It is pretty unusual to want to "try out exceeding test conditions in the datasheet" unless you really know what you are doing and are working with a large enough sample size to have some confidence that you can push the parameter farther than the datasheet limits. <A> I think you're missing some VERY important text in how you're reading the datasheet. <S> I think you found this on page 235: <S> The "Electrical Characteristics" apply to the whole chapter, <S> so all tables in this chapter.
<S> Now what you missed: Absolute Maximum Ratings. <S> This section can be found in almost any datasheet, and it lists values that should never be exceeded. <S> So that means these values aren't for "normal operation", as you never want to come close to these values in normal operation. <S> Exceed these values and the chip might suffer permanent damage. <S> So: <S> The direction of the current is not mentioned, so it does not matter! <S> If the direction of the current mattered, then that would be mentioned. <S> Also: this is not for normal operation, <S> so there is no reason that the current has to flow in a certain direction. <S> For example, when ICs are tested after fabrication, or tested in-circuit after soldering, it is possible to inject or draw a current to test the connection. <S> That's not normal operation. <S> In this test the current must be less than 300 mA.
"DC Current Vcc and GND Pins ... 300 mA" means that the current flowing into or out of any pin named Vcc or GND cannot exceed 300 mA.
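Under the "sum over all pins" reading given above, the 300 mA VCC/GND limit and the 100 mA PORTC note can be checked mechanically at design time. A small sketch, using the budget figures quoted from the datasheet and made-up per-pin currents for illustration:

```python
def check_current_budget(pin_currents_ma, total_limit_ma=300, portc_limit_ma=100):
    """pin_currents_ma maps pin name -> sink/source current in mA.
    Returns (total_ok, portc_ok) for the datasheet's summed limits."""
    total = sum(pin_currents_ma.values())
    portc = sum(i for name, i in pin_currents_ma.items() if name.startswith("PC"))
    return total <= total_limit_ma, portc <= portc_limit_ma

# Example: two LEDs at 20 mA each plus two heavy (hypothetical) PORTC loads.
ok_total, ok_portc = check_current_budget(
    {"PB0": 20, "PB1": 20, "PC0": 60, "PC1": 60})
```

Here the overall 160 mA passes the 300 mA budget, but the 120 mA on PORTC violates the 100 mA note, illustrating that both sums must be checked independently.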
Got an electric shock from a USB type C cable plugged into a laptop, how can I find the faulty thing? I was about to charge my phone with the USB type C cable that was plugged into my laptop. When I grabbed the cable I got a light electric shock (I think I touched the plastic case that covers the metal, but I'm not sure). When that happened, I wasn't even sure whether it was an electric shock or just something weird happening to my elbow. So I went ahead and touched the cable again, this time on the metal part. The shock was really painful this time and left me tingling :( So I decided to unplug the cable and not use it again. But I would like to find out whether the problem is with the cable, the USB port, or the laptop (in a way that I don't get hurt again, of course). <Q> The shock could have been an electro-static discharge. <S> If you want to find out if the laptop is safe or not, get a multimeter and probe from one of the grounding points on the outside of the laptop to AC mains/earth ground. <S> (Usually the shields of ports are grounded.) <S> Turn the meter to AC mode to measure RMS, then turn it to DC mode. <S> In both modes the voltage should be low. <S> I would expect the voltage to be lower than 1V. <S> If the voltage from earth ground to the laptop is more than 5-ish volts then there could be a problem. <S> If it's in the tens of volts range, then there definitely is a problem and your laptop is probably unsafe. <S> If there is a low voltage (1V-10V) present, determining the impedance (how much current the laptop is leaking) of the voltage source is the next step. <S> Turn the meter to current mode and measure from AC mains ground again to a grounding point on the laptop, this time using a 100kΩ resistor in series (to be safe for starters). <S> The current should be lower than 1mA for both AC and DC (and if your meter is good enough, in the uA range). <S> Then use a 10kΩ, 1kΩ, 100Ω, slowly working your way down to 10Ω.
<S> If the laptop is sourcing more than 1mA, then there is a problem. <S> In this way you can check whether the laptop is safe without touching it; make sure you don't touch the leads or the resistor while testing either. <A> If the shock was mild, the most probable cause is that the power supply is intended to have an earth connection (3rd pin) but does not have one, and has two "Y" filter capacitors on its input side, from each AC input line to its input ground lead. <S> This causes the disconnected ground to float at Vinput_AC/2 at an impedance of about Zcap/2. <S> Measuring from Vout_negative to mains ground should return a voltage of V_mains/2 or less (as the meter MAY slightly load the voltage). <S> The "solution" is to ground the ground lead on the power supply or the Vout_negative connection. <S> Worst case this may cause operational issues, but it shouldn't. <S> The operational worst case is that connecting this power supply to equipment, OR equipment connected to this power supply to other equipment, MAY cause damage or destruction of some of the equipment. <S> Ask me how I know <S> :-(. <S> (Long ago). <A> Confirm this by unplugging it and touching the USB cable. <S> The shock should go away. <S> For an isolated-secondary supply like a wall charger, a small amount of leakage is allowed (3mA) - low enough that it is just detectable by touch, but not so much that it can harm you. <S> If it's actually painful, the adapter could be defective and should be replaced.
If the input ground is connected to Vout_negative (as happens) then the voltage is enough to cause an unpleasant sensation to a grounded user who touches it. In each case the current should be low. If the wall adapter was plugged in to the laptop, this is likely the source of the shock.
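The floating-ground scenario in the second answer can be put in numbers. With two Y capacitors from line and neutral to an unearthed supply ground, a grounded person touching the output sees roughly V_mains/2 behind Zcap/2. A sketch assuming 230 V / 50 Hz mains and a typical 2.2 nF Y capacitor (these component values are illustrative, not from the question):

```python
import math

def ycap_touch_current_a(v_mains_rms, f_hz, c_y_f):
    """Worst-case RMS touch current from a floating two-Y-capacitor input
    filter, with the person modelled as a short to earth.
    Thevenin equivalent: V_mains/2 behind Zcap/2."""
    z_cap = 1.0 / (2 * math.pi * f_hz * c_y_f)
    return (v_mains_rms / 2) / (z_cap / 2)

i_touch = ycap_touch_current_a(230, 50, 2.2e-9)  # roughly 0.16 mA
```

That fraction of a milliamp is well under the 3 mA leakage limit mentioned in the last answer - enough for a tingle, consistent with a missing earth rather than a dangerous fault.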
What are the advantages of this gold finger shape? Some PCBs, like the PCI card specification have gold fingers which start very narrow near the bottom edge, and gain their usual width much higher, where the actual contact is expected to be made. What is the advantage of having the narrow part? Why not make the pad fully wide all the way to bottom, like ISA cards, DDR, etc.? Or simply make the finger shorter, only in the area of contact? What is better in gradually increasing the width? My speculation: To connect ground pins first - All the pins have this shape. Resistance against peeling the pad off - The smaller trace seems much more susceptible to damage Insertion force - I expect the narrow part to be made of equally thick gold, which would require the same amount of force. Insertion force - Can it be that some number of the connector contacts (in motherboard) get pushed sideways in each stage as the card goes in, lessening the amount of force needed to insert the board? Can't seem to find any evidence or description why this is designed this way. Some high frequency high pin count stuff (DDR modules) use rectangular pads. Note: See page 196 of the linked PCI card specification document. <Q> To electroplate the fingers with gold they must all be joined together electrically. <S> This is done with a "plating bar" trace outside the final board area, which is cut off afterwards. <S> Usually the board edge will be chamfered for easier insertion in the socket. <S> Since chamfering removes the lower part of the fingers they only have to be wide enough to carry the electroplating current. <S> Making them narrower saves gold, which makes the board cheaper. <S> If the board is not intended to be plugged in often then chamfering may not be applied, and then the narrow parts remain. 
<S> Gold plating for edge connectors <A> Some PCB manufacturers mention some specific design requirements for gold finger edge connectors: <S> 1. No plated through holes are allowed in the plated area. 2. No solder mask or silkscreening can be present in the plated area. <S> 3. For panelization, always place gold fingers facing outward from the panel center. 4. Connect all gold fingers with a 0.008" conductor trace at the edge to allow for manufacturing. 5. Features can be placed on one or both sides to a depth of 25mm from the outside edge. <S> I am not sure about 4, but maybe they are referring to narrowing down the pads like in the picture you embedded? <A> I remember this was a major problem with inserting boards into the Apple II.
Bruce is probably right that cost is the major reason for that end of the contact to be narrow, but I believe this also makes cross-contact between neighboring pins less likely if the board is not completely straight during insertion.
Is there a lower form factor alternative to common female header pins? I am looking for an alternative to the usual female header pins, one that has a lower profile than the traditional 8.3mm/0.33in-high female header pins. Adafruit has low-profile female headers, but when mated with traditional male header pins they still stand tall. It doesn't have to be a through-hole component. I found one that I really like but I am failing to find its name. It's from the Raspberry Pi PoE HAT; a good thing about it (for my case at least) is that it can be placed on the other side of the board, lowering the profile even more, but if there are others please do tell. <Q> Those in the Raspberry Pi PoE HAT are these ones from Digi-Key <A> Types I could find with a quick search look like this: …or… <S> You can also embed holes into your PCB as described in another answer: <S> Hard to get any lower than that. <A> One option might be to use so-called "bottom entry" connectors. <S> They are SMD but can be contacted either from above or below - you leave holes in the PCB underneath the component. <S> This means that they can be very low profile, down to 2mm <S> or so. <S> They should be available from multiple companies: <S> Samtec, Wurth, Molex etc. <S> (This pic is from Wurth, here: https://www.we-online.de/katalog/de/PHD_2_54_SMT_DUAL_SOCKET_HEADER_BOTTOM_ENTRY_6100XX243021/ )
There are 'strips' you can get with individual sockets.
Isolating DC from a DMX signal I have a DMX controller using a 3-pin connector (GND, data+ and data-). I need to connect it to some cheap Chinese DMX lights. The problem is that the lights are powered over the same 3 lines using a 24 VDC power supply. When I did connect it, it burned out my controller. DMX is a differential signal, so it might work, but 24V is just too much (and clearly out of spec for the DMX standard, which allows for at most 7 volts). These are the lights, so there would be no problem with other DMX lights. The 24 VDC power is connected to GND (pin 1) and data+ (pin 3) of the DMX lights and is needed to power the lights. As extra info, my DMX controller runs on the power of a USB port. When using a multimeter in AC mode (with the lights and 24V disconnected) it measures 2.5-3V on the data channel while it is sending out some data. How do I prevent my new (expensive) controller from burning out as well? (It shouldn't see the 24V.) simulate this circuit – Schematic created using CircuitLab <Q> One option could be to insert a capacitor between the DMX controller's data+ pin 3 terminal and the signal wire. <S> A capacitor filters out the 24V DC supply completely so that it cannot harm the controller. <S> At the same time, the AC signal should pass the capacitor. <S> (To learn more about this, ask Google for "first order RC high pass filter", e.g. here ). <S> But it would be a bit of engineering to get the value of that capacitor right. <S> It depends on the bit rate and also the impedances of the controller and receiver. <S> You might want to start with a 1uF capacitor rated for 50V. <A> It sounds like the system is designed to be used with a power supply for XLR phantom power, which is normally used for certain types of microphone. <S> A standalone phantom power supply will both provide the power and isolate the output from that power, so that should be sufficient to protect your controller.
<A> Unfortunately any kind of AC coupling isn’t going to work with DMX as it isn’t designed to maintain DC balance. <S> So no transformers or caps for example.
Optical isolation could be used on the TX (controller) side to allow the DMX pair to carry power.
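The disagreement between the first answer (series capacitor) and the later one (AC coupling won't work) can be examined numerically. DMX512 runs at 250 kbaud (4 µs bits) but holds the line in one state for the break, at least 88 µs, so a coupling capacitor into a terminated receiver droops during those long constant-level periods. A sketch assuming the suggested 1 µF capacitor and a standard 120 Ω termination:

```python
import math

def rc_highpass_corner_hz(r_ohms, c_f):
    """-3 dB corner frequency of a series-C / shunt-R high-pass filter."""
    return 1.0 / (2 * math.pi * r_ohms * c_f)

def droop_fraction(r_ohms, c_f, hold_s):
    """Fraction of the signal level lost while the line holds one state."""
    return 1.0 - math.exp(-hold_s / (r_ohms * c_f))

corner = rc_highpass_corner_hz(120, 1e-6)       # about 1.3 kHz
break_droop = droop_fraction(120, 1e-6, 88e-6)  # about half the level lost
```

Losing roughly half the level during every break supports the caution that simple AC coupling is unreliable for DMX; opto-isolation keeps the DC path intact instead.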
How can I control 2 or more seven-segment displays using a single microcontroller? Is it possible to control, say, up to 4 seven-segment displays using a single microcontroller and still have some I/O ports left? For example, say I would like to show a reading on 4 seven-segment displays using a driver like the MAX7219. Do I need a driver for each display? Can I control all the segments using just one driver and SPI communication with the microcontroller? More generally, I'm asking what the industry-standard way of doing this is, which I guess uses the minimum number of components. I know it is possible to build an analogue circuit to drive each segment, use an EEPROM, use a driver for each display, etc... but I am looking for the way that uses the least number of components. <Q> You can multiplex displays, however there is a limit to how low the duty cycle can get without adversely affecting the brightness. <S> That limit depends on how good the displays are (brighter, more efficient LEDs generally cost more), the specs of the LEDs (max peak current), and how bright your display needs to be visually, so it varies with the application, but typically 1/4-1/8 is about as far as you want to go. <S> Minimum component count might be a goal in small-volume applications such as oddball instrumentation, but for high-volume applications, it's usually cost that is minimized. <S> If you're a hobbyist, then something like the MAX7219, which will control 8 digits, might be a choice, especially since there are probably Arduino or whatever libraries available so that the effort is minimized. <S> Genuine ones are pretty expensive, far too expensive for many volume applications. <S> I believe there are cloned ones out there. <S> You can use one or more and select using the /CS lines. <S> The more usual approach in volume applications is to use the microcontroller as the display controller and add some inexpensive drivers.
<S> The consumption of microcontroller bandwidth is pretty small (maybe a couple of percent for an 8-bit microcontroller) provided the timer interrupts can be serviced with small enough jitter to prevent visual flickering of the display (a jitter of +/- a handful of microseconds will probably suffice). <S> The number of I/O can be handled by picking the microcontroller for that characteristic, or by adding expanders, shift registers or demultiplexers (for digit selection). <S> You could also use a small CPLD, which tends to excel in having lots of I/O vs. cost, but that requires device programming and writing the code in the first place. <S> Generally speaking, the optimum trade-offs will vary with each and every design. <A> Your multiplexer will just have to run at a multiple of your I/O pin output signal rate to cover the 4 panels instead of one. <A> I'm not sure if there is an industry standard, but I would power the 7-segs one display at a time, cyclically, fast enough for the eye not to notice any flicker. <S> This can be done with a mux, or buffer IC, or 4 BJTs or 4 MOSFETs (if sufficiently fast), etc. <S> Some MCUs may be able to drive the LEDs directly from I/O pins. <S> The segment lines are preferably connected to shift registers, one per 7-seg. <S> Daisy-chain the shift registers and control it all through SPI. <S> There's no need for strange Maxim circuits. <A> The idea is multiplexing. <S> If you want to control 8 digits, you use 16 digital pins (8 for the segments, 8 for the digits). <S> If you want to control more, then instead of connecting these 16 pins directly to the MAX7219, use e.g. 8 MAX7219 ICs and multiplexer ICs to decide which MAX7219 is active. <S> Assuming you use a CD4051 (8-channel multiplexer), you can use 16 of them, one after each output pin. <S> This way you can drive 8 (multiplexer channels) * <S> 8 (digits/MAX7219) <S> = 64 digits.
I believe the standard way would be to use a multiplexer that can act as the signal switcher between your microcontrollers IO output pins and the display driver.
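The duty-cycle limit in the first answer comes down to a simple trade-off: to keep the same apparent brightness, a digit scanned at 1/n duty needs roughly n times the DC current during its slot. A sketch, assuming (optimistically) linear LED brightness versus current:

```python
def multiplex_peak_ma(n_digits, dc_equivalent_ma):
    """Peak segment current for a 1/n-duty multiplexed display to match
    the brightness of dc_equivalent_ma steady drive (linear model; real
    LEDs lose a little efficiency at high peak currents)."""
    return dc_equivalent_ma * n_digits

four_digit_peak = multiplex_peak_ma(4, 10)   # 40 mA peaks per segment
eight_digit_peak = multiplex_peak_ma(8, 10)  # 80 mA peaks per segment
```

With typical segment LEDs rated for 20-30 mA continuous but around 100 mA pulsed, this is why 1/4 to 1/8 duty is about the practical limit mentioned above.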
Some pads on a PCB are marked in clusters and I can't understand which one is which. I'm trying to find certain pads on a monitor circuit board. I found a cluster of pads that's marked "A", I assume because there's no space to write what every single one is. And to the far right of the pads, there's a marking that says "a", where all the markings are listed. The problem is that they're formatted as a list, basically just one under another. How should I know which one is which? They're not just next to each other; some are on top or on the bottom. So I can't just count them and proceed. Please, can you tell me which one is which? (Screenshot below.) I searched for "reference designators clusters" but found nothing. <Q> Sorry for the poor drawing. <S> I hope you can trace them. <S> I don't know the technical terminology for this. <S> We do this whenever there is no space to place the reference designator, but at the same time it can't be dropped altogether either. <S> Then the reference designators will be grouped together, but at some other point on the PCB where there is abundant space. <S> Care will be taken to see that the cluster is an exact replica of how it would have been in its ideal place. <S> The orientation of the components is represented by the orientation of the reference designators. <S> Naming the clusters helps in locating them easily. <S> When it is easier, we have also simply drawn lines from the component cluster to the label cluster. <S> If somebody knows the terminologies please add: <S> remote designators <S> thanks to @RnDMonkey Test points for production <S> During production of the PCBs (say 1000s of them) <S> the testing will be done by automated machines. <S> The machines do not need any text. <S> They work by knowing the position of components. <S> Here, they use those big round test points. <S> Through these test points the machine can measure resistances, capacitances and inductances, as well as voltages.
<A> These look like test points, which are pads, pins or hooks used to test connections (to be able to place an oscilloscope or logic analyzer probe). <S> This way developers can test whether a certain voltage or signal is present; but to do this you must know the meaning of that point (a pad in this case). <S> It seems the text to the right of it might give a clue about the meaning. <S> For more info see: Wikipedia: Test Point. <S> As you can read, these are used during manufacturing or service, and are not meant for 'users'; that's why labeling is not 'needed'. <A> The test points are not on top of each other. <S> The grouping you've shown has four test points <S> (corresponding to the four net names in the label area) and pads for seven components (corresponding to the seven component names in the label area), some of them unpopulated. <S> The test points are all round and circled in the silkscreen. <S> There doesn't seem to be any ambiguity in positioning. <A> Outlining the groups and using corresponding lookup letters is done when the group of designators is not close enough to the components to clearly represent them. <A> I believe that this PCB serves multiple applications, depending on which components are fitted or not fitted on it. <S> Regarding what you showed us in your picture, there are some unfitted elements such as R125 and R126, which are responsible for enabling the LED. <S> So these test points are not for customer service and are only necessary in the development stages; companies leave this possibility open so they can make continuous improvements to their products. <A> The white writing relates to the capacitors, resistors and information about the actual test points. <S> The circular pads with white circles are test points for probes from multimeters, scopes or a bed-of-nails test fixture. <S> So looking at this picture we can say that the top row of components are capacitors and only one is there, cc06.
<S> The next row, beginning with rr01, contains the resistors, 11 of them left to right. <S> The circular pads start with vgma1 and then go left to right. <S> So the layout of the white writing to the right of the components etc. relates to what is there. <S> I design and build test fixtures all day, every day at work. <S> Here you can see a PCB in a fixture with the probe pins coming up underneath to touch the test points. <S> Hope this helps. <S> Here is your PCB, marked up to make sense.
I call these "remote designators" and as said by the helpful person with the illustration, these are meant to mimic the relative placement and orientation of the components they designate.
Possible to measure output of each battery in a battery bank? I've been working on a small project to add electrics to my utility trailer. I have two 12V 20Ah lead acid batteries wired in parallel for running a mix of lights, a winch, cameras, a ventilation fan, an inverter, etc. (Basically more of a fun, learning project.) The specific batteries are: https://www.amazon.com/gp/product/B00KC39BE6 Originally I was going to keep them separate and divvy up the electrical devices between them, so I purchased a battery meter for each one: https://www.amazon.com/gp/product/B01N642QV6 I've since learned about and changed over to wiring them in parallel, connected to a single charger, but now I'm wondering if there is an arrangement where I can still use the meters to read each individual battery without reading the output of the overall bank? I tried to Google this, but I get the feeling that if this is feasible, I don't know the correct terms to search for. <Q> I don't think it is possible. <S> In parallel everything has the same voltage potential. <S> As you mentioned, you can measure the output of the combined cells. <A> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> The proposed circuit. <S> Once you wire the batteries in parallel you will have the circuit of Figure 1. <S> Red is positive and black is negative. <S> It should be fairly clear that all points on the red line will be at the same voltage (assuming you use a large enough cable cross-section) and similarly all points on the black line will be at the same potential. <S> You can't use them to measure current, as they have only two wires and will require something in the region of 9 to 16 V to operate the meter and backlight. <A> I'm not sure you can, because you are forcing the same voltage in both batteries by connecting them in parallel. <S> What you could measure is the current, which will vary with the charge of each battery, but I'm afraid that your meter is only for voltage.
<S> I hope to have helped you!
Your two meters will therefore show the same voltage reading.
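Although both meters must read the same bus voltage, the individual battery currents do differ, set by each battery's EMF and internal resistance; a shunt or clamp meter per battery lead would be the practical way to see this. A node-voltage sketch with made-up example values:

```python
def parallel_battery_currents(e1, r1, e2, r2, r_load):
    """Two batteries (EMF in volts, internal resistance in ohms) in
    parallel across one load resistor.  Returns (i1, i2, v_bus)."""
    v_bus = (e1 / r1 + e2 / r2) / (1 / r1 + 1 / r2 + 1 / r_load)
    i1 = (e1 - v_bus) / r1
    i2 = (e2 - v_bus) / r2
    return i1, i2, v_bus

# A slightly fuller battery (12.9 V) alongside a lower one (12.6 V):
i1, i2, v = parallel_battery_currents(12.9, 0.05, 12.6, 0.05, 6.0)
```

The fuller battery supplies more of the load current (and can even charge its neighbour when the load is light), which is why per-battery current sensing is informative where voltage sensing is not.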
Why is the battery jumpered to a resistor in this schematic? I am looking at a schematic for the wifi D1 mini data logger shield that I'm using right now. In this part of the schematic, a coin cell battery powers the RTC. I see R5 is jumpered across the positive and negative terminals of the battery. I identified R5 to be 2MΩ, so it obviously isn't shorting the circuit. But that leaves the question, what is its purpose? Note: There is a sideways 1 above the battery symbol that looks like a minus sign. Don't be fooled! <Q> I don't know what this shield looks like, but I'm going to guess that the battery is removable/replaceable. <S> Is that the case? <S> Did it come with a battery, or was it up to you to provide it? <S> The DS1307 has a little quirk in that it requires that Vbat be grounded when there is no backup battery connected. <S> R5 is meant to pull Vbat down to ground in the absence of a backup battery. <S> Without that resistor, the DS1307 would not function correctly if one were to try and use it without a battery installed and powered via the ESP mainboard/VCC. <S> From the DS1307's data sheet: If a backup supply is not required, VBAT must be grounded. <S> Now, is it a good solution to this requirement? <S> ... <S> Not really. <S> For a 48mAh 3V cell, this will reduce the best possible backup time to 3 years, and the drain will occur even when the DS1307 is running off VCC. <S> However, I can definitely understand their reasoning. <S> It's a shield, it is aimed at a fairly wide audience <S> and they probably just made the decision that it would be better if the DS1307 worked in every possible usage case rather than keep time for a decade. <S> That said, as long as you aren't planning on using it without a backup battery installed, you can simply remove it and eliminate that source of drain on the battery. <A> I think R2 is there to cover the following case: When Vcc falls below Vbat, the device switches into a low-current battery-backup mode. 
<S> If no battery is present, Vbat can pick up EMC noise if not grounded. <S> Should that noise generate a voltage above Vcc, the IC may erroneously switch to battery-backed mode, and since there is no battery, data loss could occur. <S> That is, unless the probability of the EMC glitch while the battery is being replaced is so high that you're ready to trade 90+% of the battery lifespan for the extra safety. <S> However, if the device is expected to run without battery, the resistor should be there. <S> Battery life will be less important in such a case because it can be removed when not needed, while the probability of an EMC glitch will become significant. <S> Also note that the typical operation circuit from the datasheet doesn't include such a resistor. <S> I assume this is because the "typical operation" is considered to be the case where the battery is only removed to be replaced with a new one. <A> Actually there is a need for the resistor. <S> The datasheet states on page 6 that if the device is being powered by VCC, Vbat pin must be grounded. <S> If a backup supply is not required, VBAT must be grounded. <S> The resistor is placed in order to ground the Vbat pin when the battery is not used. <A> That's bizarre. <S> There is nothing in the DS1307 datasheet which suggests that any such resistor is required. <S> While 2 MΩ is a large resistance, it's still low enough that the current through the resistor (~1.5 µA) will be orders of magnitude higher than the data-retention current of the DS1307 (10 - 100 nA). <S> This may significantly reduce the life of the battery; I'd advise removing the resistor. <A> I suspect this increases the decay speed when the battery is removed that is required for some reason. <S> Low power consuming logic FET switches tend to draw a bit more micro power in the linear region and this RTC chip is known to be a good power miser such that when voltage is removed, it uses a capacitor as a power source from a separate pin. 
<S> (Have you read the data-sheet yet?)
If the use case is such that the battery is almost always present, and battery life is important, I would consider omitting such resistor.
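The "3 years" figure in the accepted answer follows directly from the 2 MΩ resistor's drain. A back-of-envelope sketch using the 48 mAh / 3 V cell from the answer; the DS1307's own data-retention current (tens of nA) barely moves the result:

```python
def backup_years(capacity_mah, v_cell, r_drain_ohm, ic_na=100):
    """Backup life of a coin cell loaded by a bleed resistor plus the
    RTC's data-retention current (ic_na in nanoamps)."""
    i_resistor_ma = v_cell / r_drain_ohm * 1000.0
    i_total_ma = i_resistor_ma + ic_na * 1e-6
    return capacity_mah / i_total_ma / (24 * 365)

life_with_r5 = backup_years(48, 3.0, 2e6)   # roughly 3.4 years
life_without = backup_years(48, 3.0, 1e12)  # resistor removed: decades
```

This supports the advice above: if a backup battery will always be fitted, removing R5 extends the backup life by more than an order of magnitude.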
What is the purpose/function of this power inductor in parallel? The inductor in question is part L3, a power inductor with both coils in parallel. Datasheet. The complete circuit is the power supply for an RFID reader, where +5V_A is the output for the antenna drivers and +5V for more general-purpose stuff. The question: Why use a "power" inductor in parallel instead of just a "normal" inductor with 22uH @ 100kHz, as the datasheet says for this one in parallel configuration? <Q> It's a "power" inductor because it needs to carry a significant amount of DC current. <S> It's exactly equivalent to a simple coil with the same number of turns, but twice the cross-sectional area of the wire. <A> Unless the designer is here, there is no way to really know for certain. <S> Perhaps it was a cheaper option. <S> Maybe it had the best performance under test conditions compared with other inductors? <S> It saves having to buy a new component, and could reduce the price of the current one if they then have to purchase a higher quantity. <S> In some products I have designed I have used inductors for many different reasons. <S> Some of them for the reason I mentioned above: we already stocked that part and it happened to be the same value as I needed, so I used that. <S> Sometimes I just follow the datasheets of certain ICs and use recommended parts. <S> This could have been any one of my suggestions above, or possibly something else. <S> Other times, I have just used a cheap one where things aren't performance critical. <S> Other times I have needed performance, so have tested the device with different parts and chosen the one that worked best. <S> I don't think there will be a definitive answer unless the designer happens to come by. <S> But the reasons I suggested are quite common. <A> Look into the Bourns SRF1260 datasheet: <S> Multiple applications: parallel, series, dual-inductor and transformer.
<S> Clearly the designer opted for parallel connection, thus doubling the current rating.
Connecting the windings in parallel gives you the same inductance as either winding alone, but with twice the current capacity and half the DC resistance. If the company that designed this already uses that component in another design, it makes sense to re-use it on another product.
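The series/parallel behaviour of a coupled dual-winding inductor can be sketched numerically. The 22 µH per-winding value and the ideal coupling (k = 1) are assumptions for illustration, not figures taken from the SRF1260 datasheet.

```python
# Coupled-winding combinations for a dual-winding inductor.
# For two identical windings of inductance L with coupling coefficient k:
#   series:   L_s = 2 * L * (1 + k)
#   parallel: L_p = L * (1 + k) / 2
def series_l(l, k):
    return 2 * l * (1 + k)

def parallel_l(l, k):
    return l * (1 + k) / 2

L1 = 22e-6  # each winding 22 uH (illustrative assumption)
k = 1.0     # ideal coupling

print(parallel_l(L1, k))  # 2.2e-05: same as one winding, twice the current rating
print(series_l(L1, k))    # 8.8e-05: four times one winding
```

With ideal coupling the parallel connection leaves the inductance unchanged, which matches the point above: same inductance, double the current capacity.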
What does measuring voltage do to the current or signal of a pin? I have a problematic digital signal which starts working when I measure the voltage between the pin and ground. What's the effect of a voltmeter on a pin (or on the rest of the circuit) when you measure voltage? Does it mimic: - a small or large resistor in series or parallel? - a pull-down resistor? - a capacitor? - a better ground connection for the pin? - more wires between the pin and target? - a ferrite bead? - other suggestion? I'm looking for a way to implement the obvious effect the voltmeter has on the signal functionality but on the circuit proper. Can one even state what effect measuring voltage has on that which is measured, or can it behave like any/all of the above suggestions? EDIT - SOLVED, see answer below <Q> A voltmeter acts like a high-value resistor and a small-value capacitor between the two points where the probes are attached. <S> If one of the probes is on ground, then you could consider this a pull-down resistor. <S> The resistor will be in the 1M \$\Omega\$ range for an inexpensive meter, much higher for a good lab-quality instrument. <S> If the point where you place the probe is floating, then the resistance can be significant. <S> It usually doesn't matter for logic circuits. <S> I don't know, but I would guess that the capacitance might be on the order of 100pF or so. <S> A third effect, that wasn't on your list, is pressure. <S> When you press the probes onto a circuit node you may improve a poor connection. <S> This is common when there is a poor solder joint, and the pressure of the probe forces the connection to be made. <A> Your DMM will only update its displayed measurement something like three times a second. <S> So if it's a signal slow enough to be seen and measured on a voltmeter, then it's only the DMM's input resistance you need to concern yourself with. <S> That will be stated in the DMM manual's Specifications section.
<S> You should be able to find that for your model on the interweb. <S> It's typically in the order of 10 MOhm. <A> Start with a reputable multimeter manual. <S> For example, Fluke. <S> The input impedance sometimes varies depending on the voltage range being measured, i.e. the voltage scale setting. <S> If you are planning to do simulations, then I think it should be good enough. <S> There are several multimeters with varying input impedances, depending upon the signal type you are measuring and also the features present in the multimeter. <S> All the good suppliers provide sufficient information in their manuals. <S> One example: "Input impedance for AC V is 1 MΩ in parallel with 100 pF (not including test leads). <S> Input resistance for DC V is 10 MΩ, unless changed to 10 GΩ for the 100 mV, 1 V, & 10 V DC ranges." <S> The above quote is from an Agilent multimeter; similar specifications appear in Fluke and Keysight manuals. <A> SOLVED: <S> I attempted to replicate what the voltmeter was doing to the signal based upon the help received here. <S> I tried connecting the signal pin with ground via a 1M resistor, then 10M ohm, then 10M ohm plus a capacitor (which should've been approximating the voltmeter's effect), but none worked. <S> Removing the resistors and placing the 100nF capacitor as close as possible between the two signals did the trick. <S> Strangely, however, holding the cap with my fingers and pressing it on the connections worked, but after soldering it on it didn't. <S> Tried it three times to no avail. <S> The difference in connection location was just millimeters, so it was very strange. <S> However, one difference between the manual testing and soldering the cap onto the board was the length of the wires on the cap. <S> Leaving them full length and soldering it on works. <S> So it seems like those extra couple of cm of wire and a well-placed cap were what was missing!
<S> I've understood that in general one shouldn't place caps on signal lines, <S> which is why I didn't try this earlier. <S> Weird that it works, but I'm very glad!
The capacitance may have an effect on high-speed logic signals. The update rate depends on the model, but you couldn't read the display if it updated faster than, say, ten times a second while measuring large changes anyway; it would flicker unreadably.
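The loading effect described above can be estimated with a quick sketch: the meter's input resistance appears in parallel with the node under test. The divider values are made up for illustration; only the 10 MΩ meter resistance is typical.

```python
# Sketch of voltmeter loading on a high-impedance node (illustrative values).
def parallel(r1, r2):
    """Equivalent resistance of two resistors in parallel."""
    return r1 * r2 / (r1 + r2)

def divider(vin, r_top, r_bottom):
    """Output of a simple resistive voltage divider."""
    return vin * r_bottom / (r_top + r_bottom)

VIN = 5.0
R_TOP = 1e6    # 1 MOhm upper resistor (assumed high-impedance node)
R_BOT = 1e6    # 1 MOhm lower resistor
R_METER = 10e6 # typical DMM input resistance

unloaded = divider(VIN, R_TOP, R_BOT)
loaded = divider(VIN, R_TOP, parallel(R_BOT, R_METER))
print(unloaded, loaded)  # the meter pulls the reading down slightly
```

For stiff logic-level nodes (kilohms or less) the same calculation shows the error is negligible, which is why it "usually doesn't matter for logic circuits".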
Parallel Capacitors with different voltages Let's consider a capacitor precharged to 5V. Now imagine putting it in parallel with a capacitor which has no charge. What happens? Do they reach a voltage in the middle? Does it depend on their capacitances? In theory they are both in series and in parallel, so we would get Q1 = Q2 and V1 = V2, but this would mean C1 = C2, which is absurd. <Q> At the current state of our universe, charge is conserved. <S> (This wasn't necessarily always the case. <S> See this article on dark matter, for example, discussing the possibility that charged particles created shortly after the Big Bang lost their electric charge during the inflationary period.) <S> This means that the sum of the two charges held by the two capacitors before being connected to each other must be the same as the charge of the combined capacitor after being connected. <S> In short, \$q_{_{tot}}=q_{_1}+q_{_2}\$ . <S> Since \$q_{_{1}}=C_{_{1}}\cdot V_{_{1}}\$ and \$q_{_{2}}=C_{_{2}}\cdot V_{_{2}}\$ and also \$q_{_{final}}=C_{_{final}}\cdot V_{_{final}}\$ ; and since we also know that when two capacitors are placed in parallel the total system capacitance is the sum of the two original capacitances, or \$C_{_{final}}=C_{_{1}}+C_{_{2}}\$ ; it then follows directly that: $$q_{_{final}}=q_{_1}+q_{_2}=C_{_{1}}\cdot V_{_{1}}+C_{_{2}}\cdot V_{_{2}}=\left(C_{_{1}}+C_{_{2}}\right)\cdot V_{_{final}}$$ <S> It's easy then to re-arrange this so that: $$V_{_{final}}=\frac{C_{_{1}}\cdot V_{_{1}}+C_{_{2}}\cdot V_{_{2}}}{C_{_{1}}+C_{_{2}}}$$ <S> Note that the energy of the final system is not the sum of the energies of the original two systems: $$\frac12 C_{_{final}}\,V_{_{final}}^{^2}\ne \frac12 C_{_{1}}\,V_{_{1}}^{^2}+\frac12 C_{_{2}}\,V_{_{2}}^{^2}$$ <S> In fact, the total energy will be smaller by this amount: $$\Delta W = W_{_{final}}-\left(W_{_1}+W_{_2}\right)=-\frac12 <S> 
\left(V_{_1}-V_{_2}\right)^2\frac{C_{_{1}}\cdot C_{_{2}}}{C_{_{1}}+C_{_{2}}}$$ <S> This is true regardless of how that difference is expended. <S> You may want to read this hyperphysics page on where the energy loss goes when charging up a capacitor. <S> This lost energy can be through realistic resistance dissipation as heat. <S> But as you assume a smaller and smaller connection resistance, so that the current is very much larger and the transfer time much shorter, then an increasing amount of the lost energy goes into electromagnetic radiation. <A> The scenario you describe is nonsensical and cannot be analyzed using normal circuit analysis techniques. <S> Suppose you have two ideal capacitors with two different voltages across them. <S> The voltage across a capacitor cannot change instantaneously because an infinite current would be required. <S> So if you connect the two capacitors together with ideal wires then at that instant the two capacitors will still have their original, different voltages. <S> But they are connected in parallel, so by definition <S> they must have the same voltage across them. <S> Therefore, the circuit presents a contradiction and is not consistent with normal circuit design rules. <S> The same situation occurs if you took two ideal voltage sources and connected them in parallel. <S> Of course, in the real world the wires are not ideal and they have some finite resistance. <S> This resistance limits the maximum current when the capacitors are connected. <A> When you connect \$C_1\$ <S> and \$C_2\$ <S> in parallel they will share the same voltage \$V'\$ . 
<S> A portion of the initial charge from \$C_1\$ ( \$Q_{1_0}\$ ) will flow to \$C_2\$ , so that in the end we'll have: $$Q_1'+Q_2' = Q_{1_0} \text{ (conservation of charge)}$$ $$\frac{Q_1'}{C_1}=\frac{Q_2'}{C_2} = V' \text{ (same voltage for two components in parallel)}$$ <S> So, yes, the final voltage will be somewhere in the middle between the initial voltage on the pre-charged capacitor and the voltage on the discharged capacitor (zero volts in this case). <S> The exact value will depend on the ratio between the two capacitances. <S> Yes, you're right when you say they are both in series and parallel. <S> However, the fact they are in series makes the currents ( \$\frac{dQ}{dt}\$ ) the same absolute value, not the charges themselves. <S> Actually, if you assume the current is positive as it leaves the capacitor, the currents will be the same but with opposite polarity ( \$I_2=-I_1\$ ) as the charge leaves one capacitor and enters the other. <S> Also, when the charge transfer is finished, both currents will be zero.
Furthermore, since there is a "resistor" element between them, the capacitors are no longer connected in parallel. When you place two capacitors in parallel, the total charge of the final system is the sum of the two original charges of the two earlier systems.
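The charge-sharing result derived above reduces to a few lines of arithmetic. The 100 µF values below are illustrative.

```python
# Charge sharing between two capacitors placed in parallel.
# Uses V_final = (C1*V1 + C2*V2) / (C1 + C2) from conservation of charge.
def share(c1, v1, c2, v2):
    """Return (final voltage, energy lost) when the caps are paralleled."""
    v_final = (c1 * v1 + c2 * v2) / (c1 + c2)
    w_before = 0.5 * c1 * v1**2 + 0.5 * c2 * v2**2
    w_after = 0.5 * (c1 + c2) * v_final**2
    return v_final, w_before - w_after

# 100 uF at 5 V paralleled with an uncharged 100 uF (illustrative values)
vf, lost = share(100e-6, 5.0, 100e-6, 0.0)
print(vf)    # 2.5 V for equal capacitances
print(lost)  # half the initial energy is lost, regardless of the resistance
```

The computed loss matches the closed-form \$\Delta W\$ expression above: \$\frac12 (5\,\mathrm V)^2 \cdot \frac{C_1 C_2}{C_1+C_2} = 625\ \mu\mathrm J\$ for these values.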
Is a cosmetic logotype on an outer copper layer a bad idea? I've seen many people add logotypes and other graphics in the copper layers of PCBs, often also removing the solder mask in these places to expose the bare (or rather plated) copper. Is there anything electrically, mechanically or chemically that makes this a bad idea? <Q> Nope, there really aren't many downsides, especially with solder mask openings. <S> The solder mask is electrically quite insignificant. <S> If you'd have the opening on an impedance-controlled trace it would have a slight effect, but you wouldn't put it there anyway, because it would be ugly. <S> If you have ENIG finish on the board, there's added cost, because there will be gold on the logo as well. <S> You can avoid the cost by placing the logo in a place where there's no copper under the solder mask, but then it isn't as nice and flashy. <S> Etching the logo into copper might have some effect on EMC, depending on what's underneath. <S> If there's a ground plane on the next layer, then you most likely don't have any issues. <S> The modern etching processes are quite impressive, so with copper you can get a bit more detail than with solder mask, but it might come with added cost if the board doesn't otherwise have thin traces on that layer. <S> And there's also the added cost of the extra gold with an ENIG finish. <A> The answer here is common sense. <S> If the logo is placed in some previously empty space and its presence does not require any component placement or routing change, there is no harm in doing it. <S> If you're already using ENIG, by all means open the solder mask and make the logo shiny. <S> I doubt that the PCB fab house will charge you more for the extra gold used. <S> If there is no plating, refrain from leaving exposed copper - that's never a good thing.
<S> If the logo disrupts a ground that is critical for shielding or for controlled-impedance traces, or if the logo's presence risks creating an undesired coupling path, resist the temptation to do it. <A> We do this in our designs so that we can save money by totally avoiding silkscreen printing. <S> The text as such will be small, isolated from all other sections of the circuitry, and doesn't create any trouble with EMI/EMC. <S> Of course, this wouldn't be placed somewhere close to impedance-matched tracks, but rather in a place which is easy to read without harming any other sensitive sections.
You can't make fancy or small graphics with solder mask, because thin lines don't stick to the board well. Functionality is way more important!
How is power consumption from a battery controlled? Sorry if this is an amateur question, but I don't understand how to control power from a battery. If you just wire it up to the load, it seems it will draw the fastest possible current, possibly burning out the device. A lead-acid battery is powering an electric motor. The question is, how to throttle the power so that I can run this motor at different speeds? Example: 62 cells in the battery. Each cell is rated for 5,530 amps over 68 minutes, or 565 amps over 20 hours. Average voltage is 2.0 V. I want to be able to choose the fast or slow discharge rate. What is the device that allows me to do this? Btw I took those numbers from a submarine example, Type XXI , but you have to go to the German version and use google translate to find the good data. <Q> The load will only draw the current it requires. <S> Taking something a bit more modest than a submarine, the starter in my car will draw a few hundred amps from the 12 volt battery when starting the engine, but the headlights will only draw 5 amps each, and the interior light will probably draw less than 1 amp from the same battery. <S> To control the speed of an electric motor, we often use Pulse Width Modulation (PWM), which turns the power off and on rapidly so the motor effectively sees a lower voltage when you want to run it slower. <S> Alternatively, the motor could have separate windings for high and low speed - I have windshield wiper motors on my boat that work that way. <A> Power is seldom controlled. <S> Power has two components. <S> Electrical power from a battery is voltage multiplied by current. <S> You can control voltage or current relatively easily, but it is difficult and generally not desirable to control both at the same time. <S> Mechanical power from a motor is speed multiplied by torque. <S> Here again, you can control one or the other relatively easily, but controlling both is neither easy nor desirable.
<S> In general, motor torque is controlled by controlling the supply current. <S> If you control the torque, the motor runs as fast as necessary to reach the speed where that torque will no longer accelerate the load. <S> If you control the speed, the motor will supply as much torque as it can to get to the speed that is set. <S> At that point, the torque will be whatever is required to sustain that speed. <S> From the standpoint of controlling the load, speed control is often desirable, but it is also necessary to limit the current. <S> The controller can be designed to allow a set maximum current, but reduce that current when the desired speed is reached. <S> That is similar to the way we drive a car. <S> We press the accelerator down to produce a comfortable rate of acceleration, then back off the accelerator when the desired speed is reached and modulate it to maintain an even speed. <S> From the standpoint of electronically controlling a commutator-type DC motor, pulse-width modulation of the battery voltage sets the voltage applied to the motor. <S> The motor will draw current depending on the operating speed and the characteristics of the load. <S> The controller needs to monitor the current and adjust the voltage modulation to keep the current within a safe limit. <A> I = V/R, so the current is limited by the resistance, both internal (all batteries have some) and external: the wires and device or motor connected to the battery terminals (which all have a non-zero resistance, unless they are extremely cold superconductors). <S> Motors (inductors moving under a load) generate reverse EMF, which counteracts some of the battery's voltage across the motor, also reducing current. <S> You can add resistance. <S> You can also rapidly open and close a switch, thus pulse-width modulating the average current over time.
<A> These could have been 500 lb telco-type 2 V cells, 24 of them for 48 V, or here 62 × 2 V = 124 V in 6 banks. <S> The Type XXI electric boats used 372 cells of type 44 MAL 740 (33,900 Ah). <S> As in transformers, speed would be controlled by tap voltage changers on the battery packs.
Motor speed is controlled by controlling the supply voltage.
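The PWM idea from the answers reduces to simple duty-cycle arithmetic (switching losses and motor dynamics ignored); this is a sketch, not a motor-controller design.

```python
# PWM speed control sketch: the average voltage the motor sees is the
# supply voltage scaled by the duty cycle of the switch.
def pwm_average(v_supply, duty):
    """Effective (average) voltage for a given PWM duty cycle in [0, 1]."""
    assert 0.0 <= duty <= 1.0
    return v_supply * duty

for duty in (0.25, 0.5, 1.0):
    print(f"{duty:.0%} duty -> {pwm_average(12.0, duty):.1f} V effective")
```

Because the switch is either fully on or fully off, almost no power is wasted in the controller itself, unlike the add-a-resistor approach.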
Temporary but hands-free JTAG/SWD connection I have a PCB that will have very little clearance on both sides, so that putting a normal Cortex SWD .50 spacing connector is too big. However, I'd still like to have the option of connecting via SWD to the target if the need arises for debugging. Since we're talking debugging and not just programming, holding pogo pins against the target is not feasible and I don't want to build custom jigs for every little board I make. I know that Tag-Connect exists but there's no way I'm going to spend that kind of money on cables. Are there any easy to implement solutions that allow a temporary but hands-free connection to a board for debugging? Preferably with just a footprint and no actual components on the board. <Q> Is your design price constrained? <S> For SWD we need only two wires and a ground connection, so this could be a possible solution. <S> This doesn't need any soldering. <S> https://www.mill-max.com/products/new/zero-profile-bottom-entry-receptacle <S> For the production version, this can be dropped totally. <S> Second suggestion <S> Zero soldering and zero investment for production. <S> I am sure I have also seen fine pitch edge connectors too. <A> An idea I've read somewhere long ago (on a Hungarian site I think): Use a linear plastic connector with protruding metal pins (male), and design the PCB holes to be slightly out of line. <S> That way you can push your connector into the holes, and the elasticity of the plastic keeps your pins connected. <S> The original description was about a 6x1 connector with 0.1" spacing to be used for Atmel ISP. <A> There is no reason at all that you have to follow the official header. <S> I never use them, certainly not the .50 inch versions. <S> I have used .50 inch or 1 mm 'SMD fingers' at an edge of a board, which allows me to solder a .5 or 1 mm SIL connector on it which then goes to my adapter cable. <A> Design the schematic and board with a header for debugging.
<S> It's a solder-on header, and it will provide you with a hands-free connection for debugging when you need it. <S> At the same time, don't populate the connector in production. <S> That saves you the cost of the connector. <S> When you need to debug a unit, solder the header to those pads. <S> It will give you the hands-free connection for the debugger. <S> You will be debugging only a small fraction of the units: failed units from the field, pre-production units. <S> Or, you can have separate programming pads for the bed of nails. <S> You can use the same type of header and pinout as the in-circuit debugger. <S> Or, choose a different header and create your own pinout. <S> [See also @Oldfart's answer .]
You can use the pads of this connector for programming with a bed of nails fixture in production. You will not be debugging every single production unit. Your choice of header may be smaller or cheaper than the debugger's.
Can I make an ammeter using a voltmeter and ohmmeter that would work in parallel with the device? I had this small question today when working with an ammeter. Since an ammeter must be in series with the circuit, I had to constantly break the circuit to take current readings. Whereas, when using the voltmeter, just placing the probes on the device to take an immediate voltage reading works. Since all devices follow Ohm's law, assuming a purely resistive circuit, can't I place a voltmeter and an ohmmeter in parallel with a device to take its current reading? If yes, why don't we do this often and use ammeters? If no, well why not? <Q> Ohmmeters work by injecting a small current into the device and reading the resulting IR drop. <S> So if something else is also forcing current through the device, that would disrupt the ohmmeter's reading. <S> Unfortunately, since you don't know what the current is, you can't determine that resistance either. <S> So you can't solve for the current. <S> Many designers will insert a low-value resistance in line with the power supply to make it easier to measure current. <A> ... can't I place a voltmeter and an ohmmeter in parallel to a device to take its current reading? <S> Voltmeter across the load is no problem. <S> Ohmmeters need to inject a known current into the system and measure the resultant voltage drop. <S> You can't do that on a live circuit as you will be feeding the supply voltage into the ohmmeter. <S> A simple solution: simulate this circuit – Schematic created using CircuitLab Figure 1. <S> The ammeter can be connected, when required, across the normally closed switch, SW1. <S> Pressing SW1 will cause all the current to flow through AM1 and a reading can be taken. <S> AM1 can be disconnected when SW1 is released.
<A> By using Ohm's law you should be able to determine the current flowing through a given resistor, by determining its resistance (before applying any voltage or current; see hacktastical's answer) and the applied voltage. <S> Keep in mind that for statistically correct readings you now have to account for the accuracy of 2 different measurements, instead of just one. <S> Depending on the accuracy of the 2 individual devices, this can make your calculated current much less accurate than a simple reading in series. <S> Also don't use this for any non-linear parts. <S> If there's any chance of inductive or capacitive behaviour, this method should be avoided. <S> Use an ammeter in series AND a voltmeter in parallel instead.
For currents over an amp or so a clamp-on DC ammeter can be used. If you have a known resistance inserted in line with your circuit, you can measure the IR drop across that and determine current without breaking the circuit.
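The shunt-resistor technique mentioned in the answers amounts to one application of Ohm's law; the shunt and drop values below are illustrative placeholders.

```python
# Measuring current without breaking the circuit: read the voltage drop
# across a known low-value series ("shunt") resistor and apply I = V / R.
def shunt_current(v_drop, r_shunt):
    """Current through a shunt resistor, from the measured drop across it."""
    return v_drop / r_shunt

# e.g. 50 mV measured across a 0.1 ohm shunt (illustrative values)
print(shunt_current(0.050, 0.1))  # 0.5 A
```

The shunt is kept small so it barely disturbs the circuit, at the cost of a small voltage to measure, which is why the designer inserts it deliberately rather than relying on an ohmmeter.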
Charging a lithium battery to 4.34 V I recently came across a device (a headlamp) with a lithium-polymer battery, which is marked 3.8 V nominal voltage, instead of the usual 3.6-3.7 V. Its charging circuit is based on the ME4057D chip, which is a 1 A lithium battery charger. The suffix -D in the chip's name indicates a variant which, according to the datasheet, charges the battery to 4.34 V, instead of the normal 4.2 V. I would like to reuse this battery along with its charging circuit in some of my projects, but I'm concerned about safety, because I always heard charging lithium batteries to a voltage higher than 4.3 V can be dangerous. Complete marking of the battery is: WT 902554 3.8V 1600mAh, but I didn't manage to get a datasheet. My questions are: Is the 3.8 V nominal voltage something really unusual, or are there some special types of lithium batteries where such a nominal voltage is normal? Can charging to 4.34 V be dangerous or significantly decrease the lifetime of the battery? <Q> The maximum voltage a battery can be charged to is determined by the chemistry of the battery. <S> For a lithium-polymer battery the charging curve looks like this: Source: <S> https://batteryuniversity.com/learn/article/charging_lithium_ion_batteries <S> You first give the battery a constant current determined by the cell, then a constant voltage of 4.2 volts. <S> Can charging to 4.34 V be dangerous or significantly decrease the lifetime of the battery? <S> If you go to a higher voltage, it will reduce the battery lifetime or cause a failure, don't do it. <S> If you need a voltage higher than 4.2 volts, use a DC-DC boost converter. <S> Prolonged charging above 4.30V on a Li-ion designed for 4.20V/cell will plate metallic lithium on the anode. <S> The cathode material becomes an oxidizing agent, loses stability and produces carbon dioxide (CO2).
<S> The cell pressure rises and if the charge is allowed to continue, the current interrupt device (CID) responsible for cell safety disconnects at 1,000–1,380kPa <S> (145–200psi). <S> Should the pressure rise further, the safety membrane on some Li-ion bursts open at about 3,450kPa (500psi) and the cell might eventually vent with flame. <S> Source: https://batteryuniversity.com/learn/article/charging_lithium_ion_batteries <A> This would imply that the cell is a high-capacity Li-ion (a type popular in cellphones), so that would also imply <S> you could safely charge it to that level. <S> That's a lot of weasel-words. <S> The scary part is relying on the good faith of an offshore manufacturer who made the lamp to have always procured the appropriate cell for 4.34V end-voltage. <S> Since the cell is a commodity item, compared to an engineered Li-po pack in a phone, you can't really be certain, can you? <S> What to do then? <S> Limiting the charge to 4.2V will avoid the uncertainty, while getting more charge cycles out of the battery. <S> You'll be trading off ultimate capacity, but it is a prudent choice out of an abundance of caution, given the apparent confusion over the battery's actual characteristic. <S> Here's a relevant discussion. <S> Why are 3.8V lithium-ion batteries used in mobile devices, rather than 3.6V or 3.7V batteries? <A> Different LiPo cell designs have different end-of-charge voltages and end-of-discharge voltages. <S> There are "high voltage" cells that are designed to be charged to higher voltages, and have higher capacities as a consequence. <S> Even for a given cell design, a system designer can make tradeoffs between cell life (the number of times that the cell can be charged before it dies) and the voltage limits. 
<S> For example, if you have a cell that's rated by the manufacturer for 4.2V charge and 3.2V discharge, you can get more life (I can't remember how much, IIRC 10 or 20%) by limiting it to 4.1V and 3.3V -- <S> but you get lower effective capacity. <S> As mentioned, you're trusting the light manufacturer to have done the right thing rather than just tossing a bunch of parts together and selling them quick.
Overcharging Lithium-ion Lithium-ion operates safely within the designated operating voltages; however, the battery becomes unstable if inadvertently charged to a higher than specified voltage. The chip from the original circuit charges to 4.34V as stated.
Problem with a relay decoupling voltage I made a simple circuit to alternate its output between two inputs: a photovoltaic panel and a 12V DC voltage source. The schematic: The step source represents the photovoltaic panel and the two resistors represent two bilge pumps of 12 VDC 30 W. It's done in a way that the normally closed contact is fed by the net. But when there's sun, there is voltage and then the relay (12 V rated) changes its contacts. Pumps are then fed with power coming from the panel. The problem that I got was that the release/decoupling voltage (the voltage at which the contacts change back to normal) isn't the same as the operating voltage of the relay. The relay comes back to normal when the voltage is approximately 3 V, so when it's going dark and the panel is generating between 3 V and 12 V, my pumps aren't going to work properly or at all. They have to be ON 24hrs/day. Only when it gets so dark that the voltage is lower than 3 V do the pumps get fed with power from the net again. What can I do to solve this problem? Is it possible to keep the relay strategy or do I have to change it to something else? Thanks in advance. <Q> simulate this circuit – <S> Schematic created using CircuitLab <S> In the circuit above the transition between the solar panel and the 12V DC power source will be smooth and at about 11.0 V (assuming a 0.5 V voltage drop). <S> You could play with the transition point by adding more Schottky diodes in series with D2 and D3. <S> A drawback is that each Schottky will dissipate power. <S> If possible, you could use an ORing configuration for each pump. <S> This will reduce the power dissipation a little. <S> Illustration: the right circuit dissipates less. <S> The current per pump is 30 W / 12 V = 2.5 A. <S> Using for example the VS-12TQ035-M3 for every Schottky diode, the voltage drop per diode in the left circuit will be about 0.45 V @ 5 A, so a 2.25 W dissipation.
<S> In case only D1 is conducting, the wasted power is thus 2.25 W. <S> Using the right circuit, the voltage drop is 0.35 V @ 2.5 A, so a 0.875 W dissipation per diode. <S> In case only D4 and D5 are conducting, the wasted power is a little less: 2 * 0.875 W = 1.75 W. <A> Indeed, relays have a bit of hysteresis. <S> They will require a certain voltage to turn on, but will not turn off again until the voltage is significantly lower. <S> Operating a relay in this "mid-region" of coil voltage is not recommended, as the contact force may not be enough to provide a good connection. <S> This may significantly degrade the relay's current-carrying capability. <S> Also, the "release" voltage may not be stable as, among other things, microscopic fusing may occur on the contacts, requiring a higher force to pull them apart (i.e. a lower release voltage). <S> You cannot rely on this in a well-designed circuit. <S> For a setup like this you should construct a voltage comparator circuit that drives a transistor to provide either the full 12V when on, or 0V when off, to the relay depending on the measured voltage. <S> This circuit should also have some hysteresis, or the circuit may oscillate near the transition voltage, causing premature relay failure. <S> There are also solid-state solutions that will require no mechanical relay. <S> To find example circuits and even pre-fabricated modules, do a google search for voltage comparator relay. <S> Add in terms like module, hysteresis and schematic to find more specific results. <A> I also faced the same problem while designing one of my circuits. <S> Ronald has pointed out some good ways to resolve this. <S> But if you are not much concerned about the life of the relay, here is what you can try. <S> Instead of powering the relay directly from the panel, use a series arrangement of a Light Dependent Resistor (LDR) and a 1k ohm potentiometer. <S> A typical 12V relay has a coil resistance of around 400 ohms.
<S> Now adding more resistance to the branch may or may not leave enough current from the panel to pass through the relay in order to turn it ON. <S> Firstly, you can check this point with your circuit by setting the potentiometer to 0 ohm. <S> If there's no problem in the switching, just adjust the potentiometer and use the circuit. <S> But if it's not switching even in the daytime, you can replace the 12 V relay with a 5 V relay. <S> Or you can increase the supply voltage to the relay by using some other voltage source in series with the relay. <S> However, the latter approach is not recommended because of the extra hardware and power requirement. <S> A typical LDR may have a resistance range from less than 100 ohm in sunlight to 1 Mohm in the dark. <S> So as the sun goes down, the resistance of the LDR increases exponentially. <S> Even if it's inside a well-lit room in the daytime, its resistance can be more than 1000 ohm. <S> This circuit, by adjusting the potentiometer, should cut off the supply from the 12 V DC source at an acceptable light intensity range. <S> The method is not foolproof and I have not tested it, but you can play with the values and try something different. <S> However, if you can use a microcontroller instead, it would be easier to resolve the issue.
Another option is using ORing diodes instead of the relay.
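The diode-loss arithmetic in the ORing answer above can be sketched in a few lines. This is a hedged check of the numbers quoted for the VS-12TQ035-M3 example (0.45 V @ 5 A in the shared-diode layout, 0.35 V @ 2.5 A in the per-pump layout); it is an illustration of the comparison, not a substitute for the actual datasheet curves.

```python
# Compare conduction loss for the two ORing layouts described in the answer.
# Forward-voltage figures are the answer's VS-12TQ035-M3 examples.

def diode_loss(forward_voltage_v, current_a):
    """Conduction loss of one diode: P = Vf * I."""
    return forward_voltage_v * current_a

pump_power_w = 30.0
supply_v = 12.0
current_per_pump = pump_power_w / supply_v            # 2.5 A per pump

# Left circuit: one diode carries both pumps (5 A), Vf ~ 0.45 V
left_loss = diode_loss(0.45, 2 * current_per_pump)    # 2.25 W

# Right circuit: one diode per pump (2.5 A each), Vf ~ 0.35 V
right_loss = 2 * diode_loss(0.35, current_per_pump)   # 2 * 0.875 = 1.75 W

print(left_loss, right_loss)
```

As the answer notes, splitting the current over two diodes lowers the forward drop per diode and saves about half a watt overall.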
What is the value of the opposing voltage created by the inductor? Assume I've connected an inductor to a DC or AC source. The voltage across the inductor due to the source current would be L di/dt. But we know that the inductor induces/creates a voltage to oppose the source voltage. My question is, what is the value of this opposing voltage? Surely it can't be L di/dt, because then the opposing voltage and source voltage would be equal, thus making the current zero, right? <Q> The current is zero, but only instantaneously. <S> Inductors give graphs of current that slope upward as the DC voltage is applied, gradually approaching a set level (which is technically never reached, only approached ever more closely). <S> The graph starts at 0 <S> and then ascends. <S> It doesn't immediately leap up to its maximum like a square wave <A> If you connect an ideal inductor across an ideal DC source, the current is initially zero, and rises linearly with time. <S> The voltage, of course, is the applied DC voltage, so the current rises as di/dt = V/L amperes/second. <S> With an inductor that has some DC resistance, the current does not increase indefinitely but exponentially approaches the applied voltage divided by the resistance with time constant \$\tau = L/R \$ . <S> If it's a superconducting coil that has zero DC resistance, it increases linearly until it hits the critical current and then resistance appears. <S> If sinusoidal AC is applied, the steady-state current is the voltage divided by the reactance, which is \$X_L = \omega L\$ . <A> Surely it can't be L di/dt because then the opposing voltage and source voltage would be equal thus making current to be zero, right? <S> $$\boxed{\text{It's like trying to solve this: }\dfrac{0}{0}}$$ <S> When you apply a steady supply voltage (for example) across a perfect inductor, the back emf equals the applied voltage and it remains equal to that applied voltage for the length of time that the applied voltage is connected.
<S> On the face of it that appears to prevent current flow into the inductor (which is what I think you are alluding to). <S> This of course appears problematic and hence, I believe, is the reason for your question. <S> For instance, how can current flow into an inductor when applied voltage and back-emf are equal; we know from the inductor formula that for an applied fixed voltage, V, the rate of change of current is: - $$\dfrac{di}{dt} = \dfrac{V}{L}$$ <S> But, this seems to be at odds with the back emf exactly equalling the applied emf. <S> But, consider what is happening; the back emf and the applied voltage are both constant values and they are across a zero ohm impedance hence, the current cannot be defined other than saying: - $$\boxed{\text{The current is }\dfrac{0}{0}}$$ <S> Why did I say that the impedance is zero? <S> In other words, you cannot use the value of applied voltage and back-emf to calculate the current through the inductor. <S> The current rising or falling delivers the back-emf <S> and that's it. <S> What if the applied voltage is AC? Well, the difference in voltage between applied and back-emf is still zero <S> and it appears across an impedance of zero ohms <S> so: - <S> $$\boxed{\text{The current is still }\dfrac{0}{0}}$$
Answer: it's an inductor and, the back-emf is induced in series with that inductor hence with the applied voltage and back-emf voltage being equal, there is 0 volts across the "true" inductance and hence no spectral content across that inductor and, it therefore has to be represented by 0 ohms.
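The two current expressions given in the answers above (ideal linear ramp, and the exponential approach with series resistance) can be captured in a short sketch. The component values below are illustrative assumptions, not values from the question.

```python
# Sketch of the answers' inductor-current expressions on a DC supply:
# ideal:  i(t) = V*t/L
# with R: i(t) = (V/R) * (1 - exp(-t/tau)),  tau = L/R
import math

def ideal_current(v, l, t):
    """Ideal inductor: current ramps linearly at di/dt = V/L."""
    return v * t / l

def rl_current(v, r, l, t):
    """Inductor with series resistance R: exponential approach to V/R."""
    tau = l / r
    return (v / r) * (1.0 - math.exp(-t / tau))

V, L, R = 12.0, 1e-3, 10.0       # assumed example values
tau = L / R                      # 100 us time constant
# After ~5 time constants the RL current is within 1% of its final V/R:
print(rl_current(V, R, L, 5 * tau) / (V / R))   # ~0.993
```

This matches the point made in the thread: the back-emf never "stops" the current; it only sets how fast the current changes.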
Can a microwave motor be powered by USB? Firstly, I'm not even a beginner. I'd need extensive training to become a novice!! My problem: I'm trying to build a rotating display for some decorative coins. I initially thought of servo motors and YouTubed how to connect one to a USB cable but then I heard that they "click" and I want a silent motion. A friend suggested microwave motors but wasn't sure if a 5 V USB would actually be "man enough" to operate one. Can anyone tell me if it is possible, please? <Q> Microwave motors that I have seen are AC motors. <S> There are very small motors with attached gears that might be suitable. <S> They are sold for hobby and educational use. <S> Most of the power is lost in the motor and gear. <S> The electrical input power will probably be more than twice the mechanical output power. <S> A 5 volt USB supply may or may not be able to supply enough current. <S> You will need to look at what is available. <S> Sellers of inexpensive motors often don't supply clear information about the current required. <S> Sometimes they just state free-running (no load) current and stall current. <S> For a gear motor with the speed you need, the safe continuous current will probably be 2 or 3 times the no-load current. <S> If the load is too much for the motor, it will run slow, draw too much current and overheat. <S> The stall current will be quite a bit higher. <S> The power supply will need to supply the stall current for a fraction of a second when the motor is turned on. <S> That could cause a "smart" power supply to switch off. <A> If you do a Google search for "DC gearmotor 5v" you'll find hundreds of much more suitable options, perhaps such as this one . <A> USB power is really quite limited. <S> However, USB-powered turntables for light loads are available quite cheaply. <S> (Example Amazon link )
As well as motors made for your particular application, if you can go to 12 volts you could look at the motors used for rotating reflectors on automobile warning beacons or disco lights. A microwave motor would probably require more tinkering than at first appears, if it could be made to work at all.
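The sizing advice above (safe continuous current of roughly 2-3 times the no-load current, checked against what the port can deliver) can be sketched as a quick budget check. All the motor numbers below are hypothetical placeholders, not figures from any particular datasheet.

```python
# Rough USB power-budget check following the answer's rule of thumb.
# Motor figures are assumed for illustration only.

usb_voltage = 5.0
usb_current_limit_a = 0.5          # classic USB 2.0 high-power port budget

no_load_current_a = 0.08           # hypothetical gearmotor no-load draw
safe_continuous_a = 3 * no_load_current_a   # answer's "2 or 3 times" rule

fits_budget = safe_continuous_a <= usb_current_limit_a
print(fits_budget)   # True for this example
```

The stall-current caveat still applies: a brief startup surge above the port limit may trip a "smart" supply even when the continuous draw fits.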
Selecting a damping resistor for crystal oscillator circuit I'm building a data acquisition board with an ADAR7251 ADC. I'm trying to figure out the crystal oscillator circuit, and am stuck on the damping resistor. The relevant datasheet is https://www.analog.com/media/en/technical-documentation/data-sheets/ADAR7251.pdf , page 21. I'm planning on running the crystal at 19.2 MHz. I understand how to determine the loading capacitors based on the load capacitance of the crystal, and know I should be selecting the resistor to achieve the desired drive level of the crystal. However, I have no clue how to go about determining the correct resistor value to achieve the desired power level. I've seen other examples (such as https://www.crystek.com/documents/appnotes/pierce-gateintroduction.pdf ) of finding the resistance values when the feedback resistor is known, but I can't find it specified anywhere in the ADAR7251 datasheet. How should I find the damping resistor value? Here's the schematic from the ADC datasheet, it's pretty straightforward: Thanks for the help! -Seth <Q> This applications note from IDT has you doing it by the simple expedient of using a current probe. <S> I don't even want to think of the $$$ involved in that. <S> I think this can be calculated. <S> What matters is the crystal current. <S> At resonance, a crystal has inductive reactance, at a frequency to resonate with the load impedance. <S> So if you use the following circuit as a model, things should work. <S> Set C2 equal to your actual capacitor value choice (i.e., 27pF). <S> Set C1 equal to the actual capacitor plus the expected input pin capacitance (5pF, for a total of 32pF). <S> Set L1 to whatever resonates the series combination of C1 and C2 at your crystal frequency, and adjust it if it's not quite right in simulation. <S> Just take a wild-ass guess at R1 (1k-ohm -- <S> what could go wrong?) <S> Simulate the circuit as an AC model. <S> Sweep the frequency around resonance.
<S> Find the frequency at which the phase shift from Xout to Xin is 180 degrees (this assumes that the amplifier in the chip has zero degrees phase shift). <S> For that frequency, look at the current in L1 -- that's your guesstimate of the crystal current. <S> Now adjust Rs, simulate, and repeat until you've got the crystal current that the manufacturer allows. <S> The nice thing about this (for me) is that it should overestimate the crystal current -- so as long as your real circuit actually oscillates strongly, then your crystals should enjoy a long life. <S> In real life I'd build a test article and try it with my calculated Rs, then with successively larger values of Rs until I noted that it stopped oscillating. <S> If it works at room temperature with an Rs of twice my calculated value, I'd feel comfortable in using my calculated value. <S> If it works at room temperature with an Rs of 10 times my calculated value, I'd feel really comfortable in using my calculated value. <S> If it doesn't oscillate with Rs at my calculated value, I'd hit the boss up for the cost of renting an RF current probe... <S> simulate this circuit – <S> Schematic created using CircuitLab <A> The voltage across the crystal is not known exactly, but it could be perhaps 600 mV, which would result in much less dissipation. <S> A rough starting point: the added resistance will probably be similar to the reactance of a load capacitor. <S> Crystal manufacturers typically either avoid the issue or tell you to measure the RMS current Irms directly through the crystal using an expensive oscilloscope current probe, and adjust the resistor to be sure that \$I_{RMS}^2 \cdot ESR \le DL_{MAX}\$ . <S> Here ESR is the maximum ESR of the crystal from the datasheet and DLmax is the maximum drive power, again from the crystal datasheet. <S> If you set the drive power too high, the resulting oscillator may be unreliable or the crystal may be damaged to the point where it fails entirely.
<S> If you set it too low, it may fail to start reliably, especially at temperature extremes. <A> Choosing this resistor is complicated because the resistor serves two purposes, not one: <S> 1) adjustment of the crystal-drive current level; 2) adjustment of the loop phase shift. <S> Regarding (2), different oscillator amplifiers will have different delay times. <S> A 1 nanosecond delay at 10 MHz is 1% of a cycle, or 3.6 degrees. <S> The Barkhausen criterion in a high-Q situation will likely not be satisfied, and you get no oscillation. <S> The resistor is the major delay tweak.
The resistance is not straightforward to calculate because there is missing information (in particular, the voltage across the crystal is not known, and depends to some degree on the design of the oscillator); it is less than the power supply voltage.
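The drive-level condition quoted above (\$I_{RMS}^2 \cdot ESR \le DL_{MAX}\$) is easy to mechanize once you have a crystal-current estimate from the AC sweep. The ESR, drive-level limit and current below are placeholder assumptions; use the values from your own crystal's datasheet and simulation.

```python
# Hedged sketch of the drive-level check: P = Irms^2 * ESR must stay
# at or below the crystal's rated maximum drive level DLmax.

def drive_power_w(i_rms_a, esr_ohm):
    """Power dissipated in the crystal's equivalent series resistance."""
    return i_rms_a ** 2 * esr_ohm

esr_max_ohm = 40.0        # assumed worst-case crystal ESR (datasheet value)
dl_max_w = 100e-6         # assumed 100 uW maximum drive level

i_rms = 1.2e-3            # crystal current estimated from the AC sweep
power = drive_power_w(i_rms, esr_max_ohm)   # 57.6 uW here
print(power <= dl_max_w)  # increase Rs until this holds with margin
```

In practice you would iterate: re-simulate with a larger Rs, re-read the crystal current, and repeat until the inequality holds with comfortable margin over temperature.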
How to interface a C64/Atari joystick port with the ESP8266 (level shifting from 5 to 3.3V)? I'm working on a device based on an ESP8266 (Wemos D1 Mini) that shall be connected to a C64 or Atari 8-Bit computer's joystick port. At the same time, a normal joystick could be connected (signals are passed through). The device should then both be able to listen to the joystick movements as well as also be able to simulate a joystick. In my current setup, I have taken the risk of hooking up the ESP directly to the joystick cable, despite the fact that the ESP should not have 5V as input on the pins. So far it has been running for a few days now without a problem, but I want to make it right and apply the correct voltage. If possible, this should be a voltage divider done with resistors because I want people to be able to build this device on a breadboard. My problem now lies in the fact that the joystick port constantly delivers 5V on the direction pins. For a voltage divider to work, I would have to have, say, a 1k and a 2k resistor leading from 5V to GND, and then tap the signal between the resistors and connect the resulting 3.3V to the ESP. However, the joystick only connects the corresponding wires to ground if it is moved in that direction. If I understand things correctly, that would mean if the joystick is in idle position, none of the 5V pins are connected to ground and thus there is no voltage division. As a result, the 5V from the joystick port would then again be applied directly to the ESP's pins. On the other hand, if I make a bypass after the two resistors to GND for the voltage divider to work, the C64/Atari would detect that the direction pin has been connected to ground and thus interpret this as the joystick's move in that direction. If my thoughts are correct, what other way could there be to bring down the joystick port's voltage to 3.3V? Thanks in advance for any ideas!
<Q> On the C64, all pins (up, down, left, right, button) are pulled high internally by resistors and will show 5V. <S> When a classic joystick is moved, it will mechanically connect the corresponding pin to GND, forcing it to 0V. To emulate this from a microcontroller you can use either: an NPN transistor with emitter to ground, collector to the joystick pin, and base to the microcontroller output via a resistor; or an N-channel logic-level MOSFET with source to ground, drain to the joystick pin, and gate to the microcontroller output. <S> Replicate this circuit for each of the 5 signals and you're good to go. <A> All you need is 2 resistors and the FET for each data pin. <S> It limits the voltage to 3.3 V for the microcontroller and it can still pull the 5 V side down. <A> Here's a way you could isolate the C64's +5V from the ESP8266, and still be able to 'listen' to the joystick switches. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> When the joystick switch or ESP output pulls to Ground, Q1 is turned on through R1. <S> When the joystick switch is off, the transistor's emitter is pulled up to ~2.7V by R1, which should be high enough for the ESP8266 to detect as a high logic signal.
Make a "FET level shifter" from discrete components.
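The question's divider arithmetic is worth making explicit, since it shows both why the 1k/2k pair gives roughly 3.3 V and why the scheme fails when the switch opens (no ground path, no division). This is just the question's own numbers worked through:

```python
# The question's resistor-divider arithmetic: a 1k (top) / 2k (bottom)
# divider from 5 V taps ~3.33 V at the midpoint. As the answers explain,
# this only holds while a ground path exists through the bottom leg.

def divider_out(vin, r_top, r_bottom):
    """Unloaded resistive divider output voltage."""
    return vin * r_bottom / (r_top + r_bottom)

v_tap = divider_out(5.0, 1000.0, 2000.0)
print(round(v_tap, 2))   # 3.33
```

With the joystick idle there is no bottom leg to ground, so the pin sits at the full 5 V, which is exactly why the answers recommend a transistor or FET stage instead of a bare divider.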
So why do battery chargers need two prongs? Why not just one? Electricity flows from the area of highest electric potential to lowest, I get that. What I don't get is why any battery charger would use two prongs on an outlet (or a few other items of similar purpose). Hypothetically, it's a below fully charged state that's filling up. It should just have an intake and then once it's full, flip that off. I've often been taught to envision electricity flow similar to water flow, flowing downhill from high potential to low potential... but when filling a bucket, we don't put holes in the bottom of buckets, so why do we need outflow for battery chargers? <Q> The water/hydraulic analogy for electricity is OK but honestly not that great given how popular it is, and this is one of the big areas where it fails. <S> When you charge a battery you aren't filling it up with charge like a bucket. <S> The battery stays electrically net neutral. <S> By charging it you are driving a chemical reaction between ions in the electrolyte and the electrodes. <S> If you had a capacitor instead of a battery you would be building up equal and opposite charges on the plates, but the net charge would be zero. <S> Current always flows in a loop (Kirchhoff's law) <S> so if you want to think about water pressure consider a closed hydraulic system where you have a pump pushing water out at high pressure and receiving it back at low pressure. <S> Loads can extract work from the fluid due to the pressure difference but they don't store material quantities. <A> Using the water and bucket analogy: it's more like the buckets in a water wheel, where the buckets around the water wheel empty as the wheel turns, but the wheel lifts a weight up higher as it turns, thus storing the potential energy provided by the running or falling water. <S> If the buckets didn't empty, the water wheel would quickly stop turning.
<A> ...taught to envision electricity flow similar to water flow, flowing downhill from high potential to low potential... <S> At the voltages typical of electronics circuits that we might encounter (i.e., tens of volts, even the low hundreds of volts) it's better to envision electricity as flowing in pipes. <S> Electrical current is mostly carried by electrons, which are strongly attracted to the protons in atomic nuclei. <S> You can't rip very many electrons off of a piece of wire before they get lonely and want to jump back. <S> All analogies break down at some point, and the "electric current is like water in a pipe" analogy breaks down at the point where water will drip out of a pipe -- electrons won't, in general, drip off of a wire. <S> ... <S> but lightning doesn't flow in a loop... <S> I just did some research on this (strictly Wikipedia, because I'm lazy), and I can't find a reference that directly contradicts this. <S> However, it must. <S> Why? <S> Because electrostatic attraction is hugely more powerful than gravity. <S> So something must equalize the charge between the cloud and the ground after a lightning strike, or there's some mechanism that carries charge from the cloud to the ground before the strike. <S> The lightning strike may not flow in a loop -- but it's the consequence of a capacitor getting charged up, and that capacitor will discharge, eventually.
In a battery cell, the charging electrons flow through, but boost up the energy stored in the chemical potential of the battery as they get "pushed" through by a voltage differential.
Capacitor Selection for high voltage I have a circuit which has about 10 kilovolts input and I want to store that voltage on a capacitor. I calculated and drew the circuit but I am now using an 18 nano farad capacitor at the end. I simulated the circuit in LtSpice and I am getting what I want. However, my common sense tells me that 18 nF is a very low number to store 10kilovolts in the real world. I looked at some specifications and it is actually okay to apply that much voltage to the capacitor but I am curious about is it possible to have 10 kilovolts stored on a capacitor with capacitance 18 nano Farads. <Q> Yes, you can have kilovolts on very small value capacitors. <S> This is a Wimshurst machine. <S> It generates around 30kV. <S> It has no explicit capacitors for storing charge. <S> It has only the inherent capacitance between the two iron bars with the large balls on the ends. <S> That capacitance is some few picofarads. <S> It is still enough. <S> It charges up, and discharges at around 30kV. <S> The spark will jump a gap of 10 millimeters. <S> Since the capacitance is so low, a full power zap from it doesn't hurt. <S> I routinely discharge it by touching the bars. <S> If it had a capacitor of any size on it, I wouldn't touch it. <S> That would be painful to deadly. <S> As others have said, the value of the capacitor says nothing about its voltage rating. <S> The voltage rating depends on the dielectric, how much dielectric is between the electrodes, and the separation between the electrodes. <S> This is a 1 nanofarad SMD capacitor size 0402 rated for 50V: <S> Not even 1mm on a side in any direction. <S> Costs about $0.50 <S> This is a 1 nanofarad ceramic capacitor rated for 30kV: <S> It is 30 millimeters in diameter, and 19 millimeters thick. <S> Costs about $30. <S> Clearly, the capacitance says nothing about the voltage rating. <S> The voltage does however make an enormous difference in the stored energy. 
<S> A fully charged 1 nF, 50 V capacitor can store 0.00000125 joules (1.25 µJ) of energy. <S> A fully charged 1 nF, 30 kV capacitor has 0.45 joules of energy stored. <S> The voltage matters, but not the way you thought. <A> But I don't know what you are going to do with them, <S> so I can't tell whether 18 nF is enough or not. <S> Also, you don't want a bigger capacitance at that much voltage; even these are quite dangerous. <A> The capacitance does not decide how much voltage it can store; what decides the voltage is how well insulated the capacitor itself is. <S> The capacitance will decide how it charges/discharges and the charge it can store. <S> I will give you an example: ceramic capacitors can be found in that voltage range; you have to specify whether it is AC or DC voltage <S> you are applying to it. <S> The reason: there are different insulation specifications for each. <S> It will be hard to find 18 nF; something like 4.7 nF with around 4 connected in parallel could work for you (check your simulation with this). <S> Here is an example of the part I mentioned: component . <S> Edit: <S> charged capacitors are dangerous, and ones charged to high voltages even more so. <S> Design a way to short it to ground without you ever touching it, as a safety measure. <A> When designing a capacitor, the maximum voltage is determined by the insulator between the metal plates. <S> But the thicker the insulator, the lower the capacitance. <S> So high-capacitance capacitors (especially supercapacitors) tend to have very thin insulation, and hence quite low voltage ratings. <S> Low-value capacitors can have thicker insulation and a higher voltage rating.
Roughly speaking, the thicker the insulator, the higher the voltage it can stand, though some insulators are better than others. You can use high voltage ceramic capacitors , they can handle 1-30 kV .
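The stored-energy figures quoted in the answer follow directly from \$E = \frac{1}{2} C V^2\$. Reproducing both of the answer's examples for the same 1 nF capacitance shows why the voltage rating, not the capacitance, dominates the stored energy:

```python
# E = (1/2) * C * V^2, using the answer's two 1 nF examples.

def cap_energy_j(c_farads, v_volts):
    """Energy stored in a capacitor charged to v_volts."""
    return 0.5 * c_farads * v_volts ** 2

e_50v = cap_energy_j(1e-9, 50.0)       # 1.25e-06 J (the tiny 0402 part)
e_30kv = cap_energy_j(1e-9, 30e3)      # 0.45 J (the 30 kV ceramic)
print(e_50v, e_30kv)
```

Same capacitance, a factor of 600 in voltage, and a factor of 360,000 in stored energy, since energy scales with the square of the voltage.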
Can I use this DPDT relay with this circuit? If so, which wires connect to what terminals? I'm following this tutorial to wire an automatic chicken coop door. The DPDT relay they're using is different to my relay so I'm finding it difficult to understand which wires connect to where. I'm completely new to all things electronics, could somebody help me please? Here's the wiring diagram for my relay: Here's the full wiring diagram on the tutorial: The sensor I'm using is also different to the tutorial, this is: https://www.amazon.co.uk/gp/product/B00BU78GX0 This question is different from my previous question because I'm asking about the differences between the relay used and the one I have. I'm not asking about splicing wires, which got an answer which I marked as the correct one. Thanks. <Q> According to the linked ad my comment was incorrect. <S> Input control signal voltage: <S> 0 V - 0.5 V: Low state (relay is OFF); 0.5 V - 2.5 V: unknown state; 2.5 V - 24 V: High state (relay is ON). <S> <-- It takes a 12 V control signal. <S> Input control signal high-state current: 2.5 V: 0.1 mA; 5 V: 0.35 mA; 12 V: 1.1 mA <-- It will draw 1.1 mA on the control input; 20 V: 1.9 mA <A> While the bare relay linked in the comments is the device the tutorial is using, the board you have already purchased should work just fine if wired correctly: Edit: <S> Note that I only connected the ground of the sensor to the second GND terminal of the relay board to keep the diagram tidy. <S> The positive and negative inputs to the sensor can be connected anywhere along the power bus, and do not have to be directly connected to the relay board. <A> I hope that you didn't mess up your relay trying it all these different ways. <S> The answer is so much simpler than you think. <S> But first, you should wire the actuator to the commons. <S> That makes it easy to replace, do maintenance or reverse the wires. <S> Dump the three-wire sensor and just use a single photocell.
<S> The first two images show the schematic current flow to illustrate how it works and that it does work. <S> The top one is daytime and the middle is nighttime. <S> The third is the practical wiring diagram. <S> Protecting the photocell is simple also. <S> Use a small glass or plastic jar and run the wires through the cap. <S> Get a few of those moisture packets that come in over-the-counter medication bottles and throw them in there. <S> Cap and mount the jar. <S> Simple. <S> Hope this helps.
You should be able to make it work.
Does this connector exist and if so what is the correct name? Please see picture. 2.54mm pin pitch. 8 pins, 8 'ways'. So, "1x8" I think is the right way to state it? I am trying to find a connector / header / whatever-name-is-correct to change direction 90 degrees in the vertical direction rather than on the board-plane direction like is commonly seen and easily found by searching for 90 deg headers. I have searched using many search terms including 'upstand' 'vertical' '90 deg' 'board to board' and more that seemed like 'maybes'. Scrolling through many hundreds of pictures with no luck, so am desperate now and posting here. I have seen board to board connectors like what I want in older electronics (a TV set, an ancient copier) though the ones I saw did not have a plastic housing at the pin end, only at the 'ways' end, and those were both 16 or more pins not 8 like I need. Please tell me the correct name for such a connector, if they exist? <Q> In my almost 40 years of tearing apart electronics, I have only seen what you are specifically asking for once or twice in the SIP form-factor, and each time it was a custom OEM part that wasn't available on the open market. <S> They still appear to be available, at least to some limited degree: <S> So, I guess the proper term would be "8 pin 90 degree vertical SIP socket", <S> but like I said, such a thing does not exist on the open market as far as I can tell. <A> You will probably find that you have to make that, I have often seen flat cable used and it is usually folded neatly to achieve the direction change and limit the space needed. <S> So, as per my first line above, ie "making that", you could get two plastic headers, remove the pins then bend wires for each position and push them into place. <S> A small vise may help in keeping the wires under control. <A> I have never seen one of those. <S> I am afraid you have to make your own. <S> 1/ <S> You could try to get a very long wire (e.g. 
wire wrap) version and bend the wires. <S> 2/ Bend the pins of one of the connectors you showed and solder them to an 8-pin header.
Although they are rarely seen these days, 90 degree DIP sockets were at one time popular back when discrete non-multiplexed LED display modules were common.
Why does a pull down resistor eliminate floating input Ever since I connected a button to an Arduino for the first time, I've wondered how does the resistor prevent floating input from happening? I've read several answers to similar questions, seen videos and read forum posts: https://www.quora.com/What-is-a-floating-input-gate "We say it's left to float, which means it's in an indeterminate state: maybe high, maybe low, maybe somewhere in the middle. Worse, it may even change depending on other conditions in the environment." This just explains what it is. Which is fair, since that is what the person asking, asked for. https://forum.arduino.cc/index.php?topic=378402.0 Same as above. https://www.youtube.com/watch?v=wxjerCHCEMg This video goes into a decent amount of detail about what causes floating input, i.e. outside electrical sources, and how to solve it, but does not address why adding a resistor eliminates this noise. Everything I read on the internet just answers the same questions over and over again. I already know why you use a pull-down resistor ( or a pull-up resistor), to eliminate floating input and to prevent a short (in the case of a button). I already know that floating input is a result of electrical noise. But if someone asked me "so why does adding a resistor make the floating input go away?", I don't know how I would explain it to them, other than "it just does". I wasn't sure whether I should post this question here, or in physics.stackexchange. I feel like this question is well within the domain of both sites, so I just picked one. Edit: I do not feel this is a duplicate of What is a pull up and pull down? Because even though the information I am after is within the scope of that question, it does not provide it. I am not asking what a pull up or pull down is. What I'm asking is why does a resistor in a pull up / pull down eliminate floating input. <Q> It's very simple. 
<S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> A CMOS input and one with a pull-down resistor. <S> The typical input is very high impedance - so high that we can usually assume that no current flows into it in the steady state. <S> The input has some very small capacitance, however, and this needs to be taken into account at high frequencies. <S> The input impedance is so high that the input can be affected by stray or induced voltages. <S> Adding the pull-down resistor discharges any voltage on the input capacitor and the input is held low. <A> The input is floating because it does not have a logic voltage driving it with a valid logic low or logic high. <S> The resistor provides that connection to a logic voltage. <S> It can connect it to a logic high voltage (pull-up resistor) or a logic low voltage (pull-down resistor). <S> If it's the only connection to the input then the resistor does the same job as a piece of wire going to the logic voltage. <S> You will find mountains of text about 'logic signalling voltages' and logic signalling on the internet. <A> Input pins often have a huge input impedance, in the range of mega-ohms. <S> So only very minuscule current can and will flow into or out of them. <S> This means that only minuscule electrical noise is sufficient to change the voltage on the pin. <S> This electrical noise can come from e.g. RF interference picked up from the environment. <S> The pin and connected lines then act like antennas, and the very minute electrical power picked up by these antennas can be enough to affect the pin's state. <S> That's because whatever electrical noise gets picked up has almost nowhere to go, because the pin is so high impedance that it hardly conducts the noise away at all. <S> Then, in every real circuit, we have parasitic capacitances and resistances, both internal to the chip and external. <S> This can cause minute charges to flow to the pin and also change its voltage.
<S> The function of a pull-up or pull-down resistor is to conduct away any undesired electrical charge so that it cannot affect the input pin's state. <S> For this, the resistance of the pull-up or pull-down must be significantly lower than the pin's input impedance, which isn't hard to achieve since the input impedance is quite large, see above. <S> In more technical terms, we have a very high impedance source of noise and other parasitic signals, coupled to a very high impedance input.
The pull-up/-down resistor basically gets connected in parallel to the input pin (and in series with the noise source) and reduces its effective input impedance which proportionally reduces the voltage the high impedance noise can induce.
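The closing point (the pull-down forms a divider with the noise source's high impedance) can be made concrete with rough numbers. The impedance and noise values below are illustrative assumptions, chosen only to show the orders of magnitude involved:

```python
# Hedged illustration: a pull-down in parallel with the pin forms a
# divider against the (very high) source impedance of coupled noise,
# so only a tiny fraction of the noise voltage reaches the pin.

def pin_voltage(v_noise, z_noise_source, r_pulldown):
    """Voltage at the pin: divider of noise source impedance vs pull-down."""
    return v_noise * r_pulldown / (z_noise_source + r_pulldown)

v_noise = 3.0            # assumed volts of induced noise behind its impedance
z_source = 10e6          # ~10 Mohm: very weakly coupled noise source
r_pd = 10e3              # typical 10k pull-down resistor

print(pin_voltage(v_noise, z_source, r_pd))  # ~3 mV, far below any logic threshold
```

Without the pull-down, the same noise works into the pin's megaohm-range input impedance and can easily swing it across the logic threshold; with it, the induced voltage collapses to millivolts.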