RS485 entering and leaving a PCB

For architectural reasons, we need to take a twisted pair (of an RS485 bus) into a PCB and then out. The PCB traces would add about 2 cm (just less than an inch) of length to the bus. The stubs connected to this differential trace pair on the PCB are very short (<1 cm). Question: how do we match the PCB differential trace pair impedance to that of the twisted pair cable (~120 Ohms)? Given the low bit rate (about 250 kbit/s), the first answer that comes to mind is that the trace pair impedance does not really matter, since the shortest wavelength within the relevant bandwidth is so much larger than the trace pair on the PCB (a couple of meters vs. a couple of centimeters). This also appears to be true for CAN buses entering PCBs, given some of the discussions I have seen on this site. Having acknowledged that, if we still wanted to get as close to 120 Ohms differential impedance as possible with the trace pair on the PCB, how can we compute the trace width and separation? Edge-coupled microstrip calculators that one can find on web sites such as eeweb.com incorporate a reference ground plane on which the common-mode currents can return. In our case, it is just a twisted pair (no ground wire or shield) entering the PCB, i.e., no common-mode current return path on the wire, so should we even use a ground plane under the trace pair?

Yes, you do want to keep the trace impedance on the board as close to 120Ω as you can, mainly because this affects how the signal propagates on the cables attached to it. Presumably these devices are being daisy-chained together, and the reflections caused by impedance discontinuities will accumulate to reduce your overall signal integrity. There are online calculators for controlled-impedance traces that include your configuration with no ground plane; the one from Saturn PCB, for example, is particularly flexible. You have some other design issues to consider, however: RS-485 receivers typically have a limited common-mode voltage range that they can handle, so you typically DO need a ground reference in your cable.

Parasitic effects start to take hold when you get into the tens of MHz. Because RS485 is differential, it does not need a ground plane; you could provide shielding if you like to minimize noise. You do need to worry about cross-coupling from other signals, so keep that in mind in the PCB routing.

Impedance is always defined by the geometry of the conductors and dielectric with respect to each other (differential mode, DM) and with respect to power ground and/or earth ground (common mode, CM). Saturn PCB Design may tell you that, for a typical fibre/epoxy dielectric, 50 Ohms corresponds to a diameter/gap ratio somewhere between 0.5:1 and 1:1, and 120 Ohms to a higher ratio. A track pair or short wire pair may be about 1 nH/mm and 1 pF/mm, where the impedance is \$Z_o=\sqrt{L/C}\$. These geometric ratios define L and C, which in turn define Zo. A long cable is closer to 0.5 nH/mm, and the capacitance varies significantly between a line and a plane, such as a copper ground, a coaxial shield, or a twisted pair with no shield. Ideally the CM impedance, if floating, ought to be very high, so that stray capacitively coupled voltage creates very little current, and thus any imbalanced impedance results in very low DM noise voltage. But that demands high inductive reactance and small shunt capacitance in the cable, and is rise-time or frequency dependent (aka a balun); this is why you will find large ferrite toroids on all VGA cables, most high-power DC charger cables, and many microphone cables. But often better immunity from stray inductive noise demands that the CM impedance be very low, so that the induced CM voltage is low and impedance imbalance further reduces the error induced as a DM noise voltage. This low CM impedance effectively shunts stray motor-current CM-noise B fields for better immunity.

Conclusion: even at >20 MBd, 1 cm stubs result in a negligible reduction of RS485 rise time or increase in ringing, so ignoring impedance in the layout at <250 kBd (Tr ≈ 10 ns) may never cause bit errors. But for integrity it is always best to run the common ground path close to the data tracks (1:1) and choose track gaps to match the track width. Saturn will give you the optimal numbers.

Managing Profibus over decades as a field engineer, I never found any exotic solution. It is always used with DB9 connectors with termination switches, or passed through for daisy-chaining. By the way, Profibus's maximum speed is 12 MBaud on an RS485 physical layer. The only weird part is the PCB inductors, probably needed to limit \$\dfrac{di}{dt}\$ in case of an ESD discharge through the TVS. Have a look at the link. Since your speed is relatively low, I don't see any need to calculate PCB impedance.
With such a small distance and such a low baud rate, matching will probably not matter: the speed is low, so reflections and parasitic effects will be minimal.
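The "electrically short" argument above can be sanity-checked numerically. This is a rough sketch with assumed values (FR-4 propagation of about 150 mm/ns, the 10 ns rise time mentioned in one answer, and the common rule of thumb that matching starts to matter at about 1/6 of the rise-time length); the per-length L and C figures are the ones quoted in the answer.

```python
import math

# Assumed numbers: FR-4 propagation speed ~150 mm/ns, RS-485 driver
# rise time ~10 ns, trace length 20 mm (the 2 cm from the question).
RISE_TIME_NS = 10.0
VELOCITY_MM_PER_NS = 150.0
TRACE_LEN_MM = 20.0

# Rule of thumb: termination matters once the trace exceeds about 1/6
# of the distance the signal edge travels during the rise time.
critical_len_mm = RISE_TIME_NS * VELOCITY_MM_PER_NS / 6.0
print(critical_len_mm)            # 250.0 -> the 20 mm trace is well below it

# Lumped rule of thumb quoted in one answer: Zo = sqrt(L/C).
def zo(l_per_mm, c_per_mm):
    """Characteristic impedance from per-length L (H/mm) and C (F/mm)."""
    return math.sqrt(l_per_mm / c_per_mm)

print(round(zo(1e-9, 1e-12), 1))  # ~31.6 ohms for 1 nH/mm and 1 pF/mm
```

The 20 mm trace is more than an order of magnitude below the critical length, which is why the impedance of the on-board pair is a secondary concern at 250 kbit/s.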
How do I supply 200-ish mA to servos connected to Adafruit's PCA9685?

For a project at school I have to build a robot that plays a bass guitar. To play the notes I planned to connect 16 servos to Adafruit's PCA9685, but it doesn't supply enough current to the servos. How can I power the servo motors externally but still control them via the PCA9685? I know that I don't have enough current because the PCA9685 is able to control much weaker servos, but those aren't powerful enough. Can anyone help? I use a Raspberry Pi 3 Model B+ to control everything and several BMS-410C servo motors.

Looking at the application notes: the logic on the PCA9685 should be powered from the same supply as the Raspberry Pi (3-5 V) on Vcc, and this is low power. The servos can have their own separate supply, V+, which can be as high as 12 V if required. This can typically be a 5 V 10 A PSU and is connected to the PCA9685 via its own V+ connection block. Each of your 3-wire servos has a control pin, a power supply pin for V+, and a common ground. They do not have to be powered from the same supply as the logic.

You are OK; the servo units you chose use a logic input for control, and a separate power supply will work great if wanted; just be sure to connect the grounds. Your problem sounds like the power supply is collapsing. Let's try to figure out what you need: 16 servos @ 600 mA = 9.6 A, plus 1 Raspberry Pi at 2.5 A (recommended minimum PSU requirement). We now have a connected worst-case load of 12.1 A, times 1.2 (safety margin) = 14.52 A. Round this up to the next pseudo-standard power supply rating (depends on vendor); we will use a 15 A supply. If you use a battery, calculate at 20 Ah per hour of usage; that gives you room for battery aging, etc. Consider placing a CLC filter in series with the 5 V feed to the Raspberry Pi.

The easiest way is to use a battery with an ESR << 10% of the equivalent parallel DC resistance of the motor coils, then recharge the battery during/between gigs. This way the voltage never sags more than 10% under full surge load.
Therefore all you need is a separate PSU with sufficient capacity to power your servos under full load.
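The supply-sizing arithmetic from the answer above can be written out as a small sketch (the 600 mA per-servo figure and the 2.5 A Pi budget are the answer's estimates, not measured values):

```python
import math

# Numbers from the answer: 600 mA worst case per servo, 2.5 A for the
# Raspberry Pi (recommended minimum PSU), 20% safety margin.
N_SERVOS = 16
SERVO_A = 0.6
PI_A = 2.5
MARGIN = 1.2

worst_case_a = N_SERVOS * SERVO_A + PI_A   # 9.6 + 2.5 = 12.1 A
with_margin_a = worst_case_a * MARGIN      # 14.52 A
psu_rating_a = math.ceil(with_margin_a)    # round up: a 15 A supply
print(psu_rating_a)
```

The same budget scales directly if you swap in the stall current from your actual servo datasheet instead of the 600 mA estimate.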
Output of LM393 won't swing LOW unless it is pulled up with a LED

I built the following circuit using an LM393 as part of a noise gate for audio applications. It's a voltage comparator with hysteresis. I originally connected a 10 kΩ pull-up resistor and then added a 5 mm UV LED to serve as a visual reference. The circuit worked fine like this; however, if I remove the LED and connect the 10 kΩ pull-up resistor directly to the supply, the output doesn't swing low and gets stuck in its high state. I've tried adding 1N4148 diodes in anti-parallel with the pull-up resistor, in reverse from the output node to ground, and I also tried substituting the LED with a 1N4148, but neither worked. Here's a picture of my protoboard with the UV LED. Here are two oscilloscope captures of the input and output signals:

Without the UV LED dropping about 3 V, the voltage at your non-inverting input will never drop below your reference at the inverting input unless your input drops below about -1.1 V, so as a result the output will never swing low. With no LED there and the input grounded, simple voltage-divider calculations show that the non-inverting input voltage will be 4.89 V (greater than your 4.24 V reference). With an LED dropping 3 V and the input grounded, the non-inverting input voltage will be 3.67 V. You either need a much larger pull-up resistor, or much smaller Ri and Rf resistors, for your circuit to work as intended.

Let's analyze the circuit with the output "high" (open circuit for the output transistor). We have 12 V through 10 kΩ in series with 470 kΩ to the non-inverting input, and some voltage W connected through 330 kΩ. At the instant of switching, the non-inverting input will be at 4.27 V, so the current through the 480 kΩ is 12 V / 480 kΩ = 25 µA. That means the input W must be at 4.27 V - 25 µA × 330 kΩ, or about -4 V, for it to switch. When the output is pulled up through the LED (forward drop maybe 3-4 V), the current will be 7-8 V / 480 kΩ, or >16.7 µA, through the 480 kΩ, so the input W must be at 4.27 V - 16.7 µA × 330 kΩ, or about -1.2 V, for it to switch. In no case should it switch with input W in the range 0-12 V once it's in the low state, but it will be a lot closer with the "UV" LED in there. If you stick your fingers onto the resistors at the inputs, there will likely be enough mains pickup to get it to switch, especially in the second case. Once it switches, the output will be low, say 0 V for simplicity, and an input of about 7.3 V will switch it on (just a voltage divider yielding 4.27 V). So the prediction (with the LED shorted, or an additional 10 kΩ resistor from the output to +12 V) is: on at +7.3 V, off at -4 V.

As hinted originally by Dwayne Reid and further explained by brhans and Spehro Pefhany, the problem was that without the LED, the voltage fed back to the input was high (11.94 V); the diode drop lowered this to 9.64 V, which was enough to make the comparator trigger properly. One of brhans' solutions was to make Rf and Ri smaller; however, before this problem I had another one where the input voltage increased and decreased suddenly whenever the comparator changed states, and not just from the input signal alone, which was solved by using larger Rf and Ri values. Another solution proposed by him was to make the pull-up resistor larger, to drop more voltage in the voltage divider formed with Rf and Ri; however, this resistor will dictate the charge and discharge times of parts that follow this circuit's output and that have already been calculated, so I would need to adjust all of that too.
My solution was to simply connect a resistor from the output to ground to divide the output voltage down to a lower value. After adjusting a pot as a variable resistor, I found that 47 kΩ would be perfect; the working circuit, with its high and low output voltages, is shown below:
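The divider arithmetic behind the accepted explanation can be checked in a few lines. Resistor values are taken from the discussion (10 kΩ pull-up, Rf = 470 kΩ, Ri = 330 kΩ); the 3 V LED drop is the answer's assumption.

```python
VCC = 12.0
R_PULLUP = 10e3   # pull-up on the open-collector output
RF = 470e3        # feedback resistor
RI = 330e3        # input resistor
V_LED = 3.0       # assumed UV LED forward drop

def v_plus(v_top):
    """Non-inverting input voltage with the signal input grounded:
    a divider of v_top through (pull-up + Rf) against Ri to ground."""
    return v_top * RI / (R_PULLUP + RF + RI)

print(round(v_plus(VCC), 2))          # 4.89 V: above the 4.24 V reference, stuck high
print(round(v_plus(VCC - V_LED), 2))  # 3.67 V: below the reference, can swing low
```

Reproducing the two numbers from the answer makes the failure mode obvious: without the LED's drop, the fed-back voltage keeps the non-inverting input above the reference.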
When using PWM, what is the purpose of having two complementary square waves on the same channel?

I'd like to use the PWM I/O on the SAMA5D2 series microprocessor (Microchip). What I'm confused about is why each PWM channel has a high and a low output pin. The datasheet specifies: "Each channel controls two complementary square output waveforms." My understanding is that you only need one of these outputs to drive an external peripheral such as a fan. In what instance would two complementary PWM outputs be used? Also, do I need these two complementary waveforms to drive a 4-wire PWM fan? I've added an I/O description and a timing diagram example from the datasheet for clarity.

Imagine you drive something in a push-pull configuration; then PWMH can drive the high-side switch, whereas PWML drives the low-side switch. Many of these PWM controllers even have a dead-time function to guarantee that both switches aren't on simultaneously.

And dead-time insertion comes in handy to prevent these two complementary MOSFETs from shorting the supply during the transition. As you can see in the image, the gate pulses to MOSFET1 and MOSFET3 should be complementary, and similarly for MOSFET2 and MOSFET4.

Regarding having complementary signals: true complementary signals are often used for common-mode noise suppression, or for other reasons, as mentioned in the comments immediately below the question. However, the diagram provided shows slight differences in timing, with the low-side versions starting later and completing sooner than the high-side ones. As mentioned in another answer, hysteresis or avoidance of simultaneity may be part of the reasoning for the timing difference between the High and Low signals on the same channel. Also, the drawing implies quadrature, but that may just be for the example diagram. I am not familiar with this device, nor with what the PWM interface was designed to work with. Answers to those questions may help illuminate the reason for the extra lines, and (if you are lucky) might be discussed in the processor's data sheets or app notes. Regarding a 4-wire PWM fan, I do not believe that both lines are necessary (at least for an inexpensive computer fan). You may find this link useful: https://www.ekwb.com/blog/what-is-pwm-and-how-does-it-work/ It provides information about the specifics of the wires and a reasonable amount of information regarding the use of PWM in an inexpensive 4-wire computer fan.
Complementary PWM signals can be useful when designing an inverter with a full-bridge configuration, where you need to drive two MOSFETs/switches complementary to each other.
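To make the dead-time idea concrete, here is a minimal software model (not SAMA5D2 driver code) of a complementary pair with dead-time insertion: both outputs are forced low for a few ticks around each transition, so the two switches can never conduct at the same time.

```python
def complementary_pwm(period, duty, dead):
    """Return (high, low) waveforms as lists of booleans over one period.

    `high` is nominally on for t < duty; `low` is its complement; both
    are blanked for `dead` ticks after each switching edge.
    """
    high, low = [], []
    for t in range(period):
        h = t < duty
        l = not h
        # dead time: blank both outputs just after each edge
        if t < dead or duty <= t < duty + dead:
            h = l = False
        high.append(h)
        low.append(l)
    return high, low

h, l = complementary_pwm(period=20, duty=10, dead=2)
# the two gate signals are never asserted simultaneously
assert not any(a and b for a, b in zip(h, l))
```

Hardware dead-time generators in PWM peripherals do exactly this blanking in silicon, which is why the low-side waveform in the datasheet diagram starts later and ends sooner than the high-side one.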
Via in pad, reflow soldering problem

Recently I designed a PCB for the ESP8266EX chip. With my amount of knowledge, I thought it would be smart to stitch vias into pads; to be specific, not only into the ground pad of the chip, but also into the small pin pads (it's a QFN chip). After soldering everything using a reflow oven, the PCB wasn't working: I couldn't upload anything to the chip. So I tried reflowing a PCB without any parts on the solder paste. This time I saw that the vias sucked almost all the solder off the PCB pads. This could also be an effect of not having any parts on the solder paste, but after searching the internet about my problem, it seems that my open vias are the problem. I tried filling the vias with solder paste and then putting solder paste on the pads, without success. Is there a way to make this work? I received 35 boards from a sponsor (couldn't go lower), and it would be inconvenient if I had to throw them away. Soldering with a soldering station is not an option; I tried. Looking for a solution on the internet was also not helpful.

First, fill the vias with standard wire solder and a soldering iron; make sure you heat everything enough that the flux is all burned out of the hole. Then apply the paste, place the chips, and proceed as usual. The phrase "via in pad" often causes confusion. First, it is used to describe putting a standard via in a pad (like you did). This is known to cause problems, as you found out. The other use of the phrase is "via-in-pad technology" applied when manufacturing boards. This is a good thing, which solves the problems you are seeing: the board house fills the holes with epoxy, then planes them flat, and then plates over them. This is what you need when dealing with very-fine-pitch BGAs, for example. It is an extra processing step done by your board house and costs extra money; it isn't even an option at some low-price board houses. Another attempted solution that is often seen is that a designer will cover the drill hole with soldermask, which creates "tented vias". The idea is that this prevents solder from wicking down into the holes. Unfortunately, this doesn't work well either. If the via is tented on the top layer, it lifts the IC up off the board, leaving a gap. If it is tented on the bottom layer, the vaporizing flux can cause small eruptions, like a volcano, under the chip. These can cause spacing and alignment issues. So, in summary, VIPs shouldn't be used unless you pay for the good ones :)

The entire pad can be heated with a regular soldering iron and the paste should flow. In the past I have had issues with vias in pads (sometimes you use many of them for thermal relief). If that portion of the chip did not flow, I heat up the via with a soldering iron until all the solder flows on that pad. It's best not to use vias in pads unless you need to for thermal reasons.

To salvage the boards you have, you could try putting polyimide tape on the bottom of the board to cover the bottoms of the holes, and loading extra solder paste on the pads in your stencil operation (I think more gap and extra pressure and paste on the stencil).
To try to mitigate your problem, I would recommend filling your vias, before you place the chips, with standard wire solder and a soldering iron. Something to be aware of: the "via-in-pad" terminology is used in two different, conflicting ways.
Possible to insert capacitor between voltage source and output to delay the output?

I'm a newbie at this stuff, but basically I have a 3.5 V coin battery connected to a tiny switch that turns on a tiny buzzer. What I want to do is delay the power going to the buzzer by about 30 seconds. Would simply putting a micro capacitor between the voltage source and the buzzer be a way to do this? I need to keep the weight of this contraption as low as possible, so that it doesn't weigh much more than the battery itself. I tried messing around with various circuit designers, but all I came up with was artistic doodles...

For an approximately 30-second soft-start duration, there are two approaches: analog timing or digital timing. The analog approach is to charge a capacitor through a current source (so that the capacitor terminal voltage increases linearly) and use a comparator to detect when the capacitor reaches a threshold. A simpler variant charges the capacitor through a resistor; that's not as good, because the voltage rise is exponential instead of linear, and that makes it harder to control the timing. (Note: the buzzer should not be powered through the resistor.) Either way, the capacitor is the biggest source of error: capacitance changes over temperature and even changes with applied voltage. Typical variation may be on the order of 80%, which isn't great. The digital approach is what @Hearth is suggesting: use a microcontroller to handle the timing. If this were 1980 I'd suggest using a 555 timer or one of the related Intersil timer/counter chips, but in 2019 it's cheaper and simpler to use a microcontroller. When you're selecting a microcontroller for this application, some features to look for are an internal oscillator, an internal LDO regulator, the cost of development tools and licenses (free is often possible), and how easy it is to find a community that uses and supports that chip. ARM parts such as the STM32 are often a good choice, as are PIC and AVR.

A simple capacitor will not do what you want without some huge values that will also have other negative effects. Realistically, for 30 seconds, even a 555 is not a great solution, as it will require some unusually large component values and, as mentioned, isn't particularly cheap anymore. An AVR or PIC microcontroller solution is the best way for a beginner to get up to speed quickly and have a workable product. As you didn't specify a timing tolerance, I'll ignore it. Specs you will need to look for when choosing your microcontroller: an internal oscillator; an operating voltage range covering 2.5-3.6 V (larger is okay too); output current supplied by the pins greater than the current required by the buzzer. That last requirement is a soft one: you can get away with tying two outputs together, although this is not good engineering practice if you choose to go into production.

Since the application requires it to be super lightweight, not much heavier than the 3.5 V coin battery itself, I used an analog approach. The application in mind involves a fast and sudden spin force, so I was able to create a small wire switch that closes some magnetic contacts as soon as the spin starts, turning on the buzzer. Manually turning off the buzzer was required, so this solution works perfectly. Thanks for all the input, because it helped me cross electronic timers off the list of possibilities.
A cheaper analog approach would be to charge a capacitor from a voltage source through a resistor, then use a comparator. The digital approach will still require a timing source, but a cheap tuning-fork watch crystal or even the microcontroller's internal RC oscillator should be good enough.
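For the RC variant, the delay until the capacitor reaches the comparator threshold follows \$t = -RC\,\ln(1 - V_{th}/V_{cc})\$. A sketch with illustrative (assumed, not from the post) values shows how large R and C must be for about 30 seconds:

```python
import math

# Illustrative values, not from the post: 3.5 V coin cell, comparator
# threshold at 2.2 V, 1 Mohm charging resistor, 33 uF capacitor.
VCC, VTH = 3.5, 2.2
R, C = 1e6, 33e-6

# Time for an RC charge from 0 V to reach the threshold.
t = -R * C * math.log(1 - VTH / VCC)
print(round(t, 1))   # ~32.7 s

# A +/-20% capacitor tolerance shifts the delay by the same +/-20%,
# which is why the answer calls the capacitor the biggest error source.
```

Values this large (megohm-range resistor, tens of microfarads) illustrate the answer's point that a bare capacitor, without a comparator and careful component choice, is not a practical 30-second timer.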
Is negative resistance possible?

I was reading Hayt & Kemmerly's Engineering Circuit Analysis book (I tried others, but this is the most comprehensible to me), and I came across this circuit. I understand the first two, but I don't understand how, in the third circuit (c), there is a negative voltage across the resistor \$R_3\$ while the current through it from \$+\$ to \$-\$ is a positive \$7A\$. I don't understand how resistors can supply voltage. My guess is this is only a mathematical model, not real. Edit: the answers are shuffled in this book.

In a passive device, negative absolute resistance cannot exist. However, negative differential resistance, where an increase in voltage leads to a decrease in current or vice versa, is observed in a number of rather common systems, such as neon signage and fluorescent lighting, as well as some more esoteric ones like tunnel diodes. Below is a figure showing an I-V curve for a generic electrical discharge; notice the region between points D and G where the voltage decreases as the current increases. This is the region in which both fluorescent lighting and neon signage normally operate. (image source) Negative absolute resistance can exist over limited ranges by using active elements. There's an op-amp circuit commonly called a negative impedance converter that simulates a negative resistance, capacitance, or inductance by using an op amp and feedback. The two circuits shown are equivalent, provided the op amp does not saturate.

My apologies to everyone: the original solution was wrong, as I had the direction of the currents through R2 and R3 reversed. The solution is now edited. If we measure all voltages relative to the common junction of the 2 Ω resistor, R2, the 8 A current source and the 3 Ω resistor, then summing the currents at the node at the top right gives the 37 V voltage source supplying a current of 7 A (15 - 8). At the - end of the 37 V source, the 2 Ω resistor has 6 A flowing through it, therefore the current through R2 is 1 A to make up the 7 A. The top end of the 2 Ω resistor is at -12 V (2 Ω × 6 A), and hence R2 = 12 Ω (12 V / 1 A). The node at the top right is at 45 V, as 15 A flows through the 3 Ω resistor. The other end of R3 (the + end of the voltage source) is at 37 - 12 = +25 V (the -12 V across R2 and the 2 Ω resistor, plus the 37 V source). The voltage across R3 is therefore Vs = 20 V (45 - 25). -7 A is flowing through R3, and hence R3 = -20/7 Ω, or approximately -2.86 Ω. The more I look at it, the more I think the "-" in front of the 4 and the 20 in the answers is just a dash (hyphen), not a minus sign.

To get the current and voltage values shown in the book, R3 needs to have a negative resistance, \$R_3 = -\frac{20\ V}{7\ A} = -2.857\ \Omega\$. For a positive resistance we get this result: as you can see, it is not even close to the book's solution. But if we use a "real" negative resistance (a negative impedance converter) instead, the simulation result matches the book's solution.

This won't answer the question, but will show how you find that R3 must have negative resistance. Here's the circuit diagram, with a couple of annotations. First, from Ohm's law, we know that the voltage across the 2 Ω resistor is 12 V, and the voltage across the 3 Ω resistor is 45 V. If you take KVL around the loop indicated with the orange arrow, you get $$-45\ V + (-12\ V) + 37\ V - v_x = 0$$ This gives you \$v_x = -20\ V\$. Defining \$I_x\$ as the current through R3 (flowing left to right according to the passive sign convention), and using KCL at node "A", you get $$I_x + 8\ A - 15\ A = 0$$ from which \$I_x = 7\ A\$. You now have $$R_3 = \frac{v_x}{I_x} = \frac{-20\ V}{7\ A} = -2.86\ {\rm \Omega}$$ It does not matter if you reverse the direction of \$v_x\$; if you do that (and also reverse the direction of \$I_x\$ to maintain the passive sign convention), you'll just get \$v_x = +20\ V\$ and \$I_x = -7\ A\$.

Ohmic negative resistance doesn't exist. However, there is negative resistance: Zener diodes at their breakdown voltages have negative resistance, since they create current by quantum tunneling.
You indeed need a negative resistance in the circuit.
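The KVL/KCL steps from the last answer can be replayed numerically; all values are from the book's circuit as described above.

```python
# Ohm's law on the two known resistors:
v_2ohm = 2 * 6    # 12 V across the 2-ohm resistor
v_3ohm = 3 * 15   # 45 V across the 3-ohm resistor

# KVL around the loop: -45 + (-12) + 37 - vx = 0
vx = -v_3ohm - v_2ohm + 37        # -20 V

# KCL at node A: Ix + 8 - 15 = 0
ix = 15 - 8                       # 7 A

r3 = vx / ix                      # the resistance the book's values force
print(round(r3, 2))               # -2.86 ohms
```

With a positive voltage drop and a positive current defined by the rest of the circuit, the only value of R3 consistent with both laws is negative, which is exactly why the element must be active (e.g. a negative impedance converter) rather than an ordinary resistor.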
How to carry quadrature encoder data using a single signal?

A quadrature encoder can be thought of as two switches, usually sharing one of the terminals, i.e. we have COM(mon), A and B. Is there some simple way of carrying its output on a single wire? How about interfacing that to a single pin on an MCU?

One solution is to convert the encoder's binary digital output to a quaternary (4-valued) digital signal. It uses only 4 resistors. The only reasonable values for such a circuit that yield uniformly spaced output voltages require adding two unused values to the signal, so the encoding is really senary (6-valued), with two unused values (0/5 and 5/5); the output voltages are then 1/5, 2/5, 3/5 and 4/5 of the supply voltage. For a 5 V supply, that yields simply 1 V, 2 V, 3 V and 4 V for the various combinations of the two switches. For 3.3 V, you get 0.66 V, 1.32 V, 1.98 V and 2.64 V. The "magical" resistor values are given on the schematic below. These values can of course be scaled, i.e. all of them can be multiplied by a constant. For slow encoders like user-input controls, one could multiply them by 10x and add a 0.01 uF capacitor from the output (marked "TO ADC") to ground to filter the signal. The truth table is as follows, with the toADC voltage given relative to a 5 V full scale (VCC):

  A      |  B      | toADC | 8-bit approx
---------+---------+-------+-------------
  open   |  open   |  2 V  |  102
  closed |  open   |  1 V  |   51
  open   |  closed |  4 V  |  205
  closed |  closed |  3 V  |  154

Neither 0 V nor the full-scale supply voltage will be present at the output. One could then use the MCU's ADC to capture this signal. The ADC can have very low resolution: a dozen reliable levels would suffice. This low resolution requirement sometimes lets you overclock the MCU's ADC, perhaps by a large factor like 10x. The ADC's resolution will drop, but we don't need all of it anyway. If the requirement isn't to save MCU pins, but only to save lines from the encoder, and there's no desire to use ADC inputs, then most MCUs could recover the signal using 3 digital input pins (ideally configured as Schmitt triggers) by thermometer decoding, i.e. shifting the signal so that as its value progressively rises it crosses the logic input thresholds of the 3 input pins; this could be done perhaps with only resistors, or resistors and Schottky diodes, assuming that the signals change slowly. Alternatively, if the MCU has comparators, then thermometer-to-binary decoding can be done using feedback: comparator B has its threshold set at 0.5 VCC, and its output is scaled and added to the reference for comparator A, whose threshold is set at 0.3 VCC + 0.4 × B_comp_out.

You could also use an R-2R resistor ladder. The advantage is that you need resistors of only one value: two 1R resistors in series make a 2R, or two 2R resistors in parallel make a 1R.

Since this is a rotary encoder, let's call switch 1 "Clock" and switch 2 "Direction". We need to sample Direction every time Clock changes level. It would be nice to be able to use a pin-change interrupt (PCI) for this. This can be achieved by modifying the circuit in Kuba's answer to ensure that a transition on Clock results in a voltage-level transition that is properly registered according to the microcontroller pin's CMOS input levels, thus triggering a PCI, which would then use the ADC to measure the voltage. A transition on Direction would produce a smaller voltage change, measurable by the ADC but not enough to trigger a PCI. Here S1 is Clock and S2 is Direction. Resistor values can be adjusted; as long as a transition on Clock swings between V > Vih and V < Vil, it will be registered as a digital level change and trigger the interrupt. A measurement with the ADC will then detect the Direction level. The pin needs to be configured as both a digital and an analog input, without a pull-up, or it can be reconfigured on the fly between digital and analog input. This method does not require constantly running the ADC; the micro can even be put into sleep mode and woken by the pin-change interrupt. With higher-value resistors and a small filter cap, this would use very little power. Please upvote Kuba's answer too!
You can use a DIP resistor network to build the circuit.
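On the MCU side, recovering the two switch states from the quaternary signal is a nearest-level lookup. Here is a sketch in Python following the truth table in the answer (on a real MCU this would be a few lines of C around the ADC read):

```python
# (A_closed, B_closed) for each nominal output voltage, per the truth table:
# 1 V -> A closed, 2 V -> both open, 3 V -> both closed, 4 V -> B closed.
LEVELS = {1.0: (True, False), 2.0: (False, False),
          3.0: (True, True), 4.0: (False, True)}

def decode(adc_count, vcc=5.0, full_scale=255):
    """Decode an 8-bit ADC sample into the two switch states
    by snapping the measured voltage to the nearest nominal level."""
    volts = adc_count / full_scale * vcc
    nearest = min(LEVELS, key=lambda v: abs(volts - v))
    return LEVELS[nearest]

print(decode(102))   # (False, False): both switches open (~2 V)
print(decode(205))   # (False, True): B closed (~4 V)
```

Because the levels are 1 V apart on a 5 V scale, the decoder tolerates large ADC error, which is why the answer notes that a dozen reliable levels (or an overclocked ADC) would suffice.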
Does placing additional capacitors in parallel with CJ7805 affect its performance? I'm not sure if this is a stupid question, but I'd still prefer to ask than regret it later. I'm designing a PCB with the CJ7805 which, like the LM7805, can supply a steady 5 volts. In this case, it's to power a Teensy 3.6. There's a recommendation to place a 100nF capacitor between the Vin pin of the Teensy and ground. I already have capacitors (100nF and 330nF) situated around the CJ7805 as specified by the datasheet . I was thinking of placing a capacitor for the Teensy, but I was wondering if that would affect the overall performance of the CJ7805 since it would change the net capacitance? Or maybe capacitors work differently when placed far apart on a PCB? Would I even need to place additional capacitors on the PCB when it's already being monitored by one capacitor? I've seen boards use soo many capacitors before, and I'm getting a little confused. I'm new to PCB design so I'm still trying to find and understand the proper ways of keeping noise to a minimum. Any help would be appreciated. Thank you! <Q> The regulator needs a bypass capacitor to be stable. <S> The Teensy needs one too. <S> If there is 1cm between regulator and Teensy, you may only need one. <S> If they are 1 meter apart, you definitely need two. <S> Since they are on same PCB just draw both <S> and you can choose later to leave one out. <A> The Datasheet is not comprehensive, but it's safe to guess the chip is a 7805 - alike linear regulator. <S> The 330nF and 100nF <S> (non-electrolytic) capacitors are mandatory to guarantee that the regulator is stable. <S> They should be Low ESR and must be placed as close to the regulator as possible. <S> But at the same time, there is no harm in adding extra capacitors. <S> Adding an electrolytic (10 uF would be fine) <S> capacitor as a bulk capacitor could help to decrease ripple on the input and output. 
<A> Teensy schematics show it already has a capacitor on the 5V input, so you don't need to add one on your board on the Teensy pins. <S> You should simply use the caps specified by your voltage regulator datasheet. <S> If your board has more 5V chips, they may also require their own decoupling. <A> When thousands of CMOS switches simultaneously switch x pF of charge between those switches and either Vdd or Vss inside any CMOS chip there is a ns current spike on the rails. <S> If that spike is supplied from a remote regulator, then using 0.5nH/mm for each track, you must consider the consequences of emissions and induced voltage spike from this inductance <S> V=LdI/dt. <S> Thus if you know dI/dt on supply IC noise ( which you can do with a 50mV shunt R) <S> and you have computed the supply trace or plane inductance, then you can correlate the improvement if any by putting a low ESR ceramic cap across the IC device such that dV/dt = <S> Ic/C +Ic*ESR <S> /dt(?) <S> Now you must estimate the ESR of all those 25~50 Ohm switches inside in parallel that are switching synchronously or use your lab result dI from a 50mV current Rsense to see if the new RC low pass filter will improve supply ripple and compare with Voltage source ESR of both caps. <S> I realize this is complex but not impossible. <S> Let's say it's for insurance and won't hurt or take the time to measure it someday for a simple IC , complex IC on a PCB , and make your own Rule of thumb with good 200MHz BW measurements. <S> Conclusion Always satisfy both the load and source capacitance criteria for optimal EMI emissions and susceptibility for conducted and radiated noise. <S> Use both caps unless the gap between CMOS and regulator > the size of the IC away. <S> BUT if your IC is actually a PCB (Teensy) with caps already , then this is redundant. 
<S> It may work without both, but logic works great until it fails ;) -- and this is a supply SNR fix for a probability-sensitive outcome, where noise-induced spikes can exceed spec tolerances ...
The farther a capacitor is, the less effective it is.
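The trace-inductance argument above (0.5 nH/mm of track, V = L·dI/dt) is easy to put numbers on. The trace length, transient size, and edge time below are assumed illustrative values, not taken from any datasheet:

```python
# Voltage spike induced on a supply trace by a fast CMOS current transient:
# V = L * dI/dt, using the 0.5 nH/mm rule of thumb from the answer above.
L_PER_MM = 0.5e-9     # H per mm of trace (rule of thumb)
trace_mm = 50         # assumed distance from regulator to load
dI = 0.1              # assumed 100 mA switching transient
dt = 1e-9             # over roughly 1 ns

L = L_PER_MM * trace_mm       # 25 nH of trace inductance
v_spike = L * dI / dt         # spike seen at the chip, V
print(f"L = {L*1e9:.0f} nH, spike = {v_spike:.1f} V")   # 25 nH -> 2.5 V(!)
```

A 2.5 V dip on a 5 V rail is exactly why the local ceramic cap matters: it supplies the transient from next to the pin, shrinking the inductive loop to a few mm.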
What kind of connection to use between two circuit boards that move with respect to each other? I need to connect two circuit boards - one with a microcontroller, power circuitry, display, etc to another with several sensors on. The connection will require somewhere between 4-10 lines - 5V power, ground, I2C data and clock, possibly plus some interrupts. The circuit boards will be mounted at right angles with respect to each other. The crucial problem is that the board with the sensors on will need to move constantly with respect to the board with the microcontroller on. The degree of movement will be small - only a few centimetres, and it will be mainly in one axis (perpendicular to the sensor board, parallel to the microcontroller board). The movement will be irregular in nature - the movement will respond to the sensor input. The movement will be driven by a powerful servo, so the speed and acceleration of the movement may be quite jerky / fast. So what kind of connection can I use between the boards that will be reliable for this kind of movement? I've seen various kinds of flat flex connection used in consumer products. Is that kind of connection reliable over years of many millions of movements every day? What kind of connection would be used in more reliable applications like automotive or aerospace? <Q> Automotive has this problem for the ABS wheel sensors. <S> Their solution is a long loop of thin, stranded copper wire in a thick rubber hose. <S> That way most of the force is applied to the rubber instead of the conductors. <S> I would cut down the actual conductors to the power ones and do all else through radio. <S> Or use a scheme where you can use the power wires for communication, too. <A> You have a few options: <S> FPC, more specifically, polyimide (Kapton) FPC. <S> It provides the flexibility and reliability for such applications and the connectors can be very small or none at all. <S> You can see such circuits in printers, phones and laptops.
<S> Standard Ribbon cable - larger connectors, larger wires, cheaper to buy. <S> Connectors are usually crimped to the cable. <S> Discrete wires - this is the least reliable but most flexible option, as the movement will apply some forces on the solder joints. <A> If you need frequent motion you should specify FPC designed for dynamic applications. <S> The treatment of the copper is different from ordinary FPCs to keep it from cracking and there are constraints on the copper thickness and there are other constraints (single-sided is best, gentle radius that spreads out the flexing, and so on). <S> Think of consumer applications such as moving print heads or optical read heads that float in suspensions. <S> I've done this for a spacecraft instrument that needed frequent motion between a sensor and the electronics package- <S> pretty straightforward <S> once you understand the parameters. <A> All of the major cable manufacturers, and some specialty cable manufacturers have products specifically designed for such applications. <S> You need to consult with them with your specific requirements in hand. <S> I once had an application that needed to connect to a GPS receiver mounted on a moving platform, and I was able to find a super-flexible round cable with a silicone jacket that met my needs. <S> It wasn't cheap, however! <S> The cable is from Misumi Corp., and the connectors are a mix of Glenair (larger) and Hirose (smaller) circular twist-lock units. <S> The two upper connectors are on the moving platform, while the lower one is part of the fixed assembly. <S> The actual strain relief at either end of the part of the cable that flexes is handled by simple zip ties; once that's taken care of, the actual connectors used don't matter that much. <S> I needed 30 cm (12") of motion in a very rugged package — you probably don't need anything quite this elaborate. 
<A> One application where this is commonly used - admittedly for millimetres rather than centimetres of travel - is the connection to loudspeaker voice coils. <S> This demonstrates that a high cycle life is possible.
Another variant on stranded copper wire is braided copper wire. As Janka suggests, you may adopt/adapt automotive practice (flexible tubes) for insulation and design to minimise the number of conductors (encoding over the power conductors, or "phantom power") Most good FPC makers will have a detailed design guide that can assist in your design.
I am measuring a 9W LED with a clamp-on ammeter. Why does it only draw 7.26W? I bought a UNI-T UT210E true RMS multimeter. I measured the current of a 9 W LED lamp. It shows 0.033A. For the power I get \$0.033 A \cdot 220 V = 7.26 W\$ only. But the bulb was 9W. Why does this difference arise? Actually I am new to electrics. I also measured a running 5hp three-phase water pump at 240 VAC; it gives I1 = 7.35 A, I2 = 6.75 A, I3 = 6.15 A. <Q> That user manual (which you should link to in your question) shows the following: Figure 1. <S> Just because it's digital, doesn't mean it's accurate. <S> You are measuring at the bottom end of the range and if it were an analog meter you would be squinting at it to try to make out the reading. <S> Figure 2. <S> Reading position on an analog scale: on a 0 to 2.0 A scale, the 0.033 A reading sits barely off the zero mark. <S> The manual shows that that accuracy is only for readings > 5% of full scale, 100 mA on the 2 A range. <S> I have no idea what is meant by "<20 residue readings". <S> The manual doesn't make any claims about true RMS. <S> Figure 3. <S> The crest factor of an AC current waveform is the ratio of the waveform's peak value to its RMS value. <S> Source: Ametek. <S> Your LED lamp will probably have a high crest factor (peak current to RMS value) due to the rectification action of the diodes. <S> The meter doesn't handle this well, with a further 7% error possible. <S> Multiplying V RMS by I RMS <S> gives you the VA and not the watts. <S> To calculate the power consumed is more difficult and involves integration of the power curve. <S> Digital power meters sample the voltage and current waveform many times per cycle, multiply the instantaneous readings together to get the instantaneous power, sum them (integration) and average the readings to give the average power. <S> In short, it's the wrong meter for a true power calculation.
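As an illustration of the sampling approach the answer describes, here is a small numeric sketch computing VA versus true watts. The 60°-lagging sinusoidal current is an assumed stand-in for a reactive load; a real LED driver's current waveform is spikier, which makes the gap worse, not better:

```python
import math

# Sketch of how a digital power meter separates true power from apparent
# power (VA). Illustrative values: 220 V RMS mains, 33 mA RMS load current
# lagging the voltage by 60 degrees.
N = 10000                      # samples per mains cycle
v_pk = 220 * math.sqrt(2)      # peak voltage
i_pk = 0.033 * math.sqrt(2)    # peak current
phase = math.radians(60)

v = [v_pk * math.sin(2 * math.pi * n / N) for n in range(N)]
i = [i_pk * math.sin(2 * math.pi * n / N - phase) for n in range(N)]

v_rms = math.sqrt(sum(x * x for x in v) / N)
i_rms = math.sqrt(sum(x * x for x in i) / N)
apparent = v_rms * i_rms                            # what V_RMS x I_RMS gives
true_power = sum(a * b for a, b in zip(v, i)) / N   # integrate v(t)*i(t)

print(f"VA = {apparent:.2f}")     # ~7.26 VA
print(f"W  = {true_power:.2f}")   # ~3.63 W, since cos(60 deg) = 0.5
```

The clamp meter can only ever report the first number; the second requires simultaneous voltage and current samples.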
<A> One more thing to consider - unless it has been measured with an accurate wattmeter, you don't know how much power your '9W' LED lamp actually draws. <S> LEDBenchmark measured the characteristics of many LED lamps with high quality test instruments. <S> Their wattmeter has a claimed accuracy of 0.2% with a voltage and current sample rate of 4800/sec. <S> Some example test results: a '9W' bulb draws 7.6W at 246V; a '9W' warm white GU10 only draws 3.9W at 245V; a "9 Watt, Operating Voltage 80-240 Volt AC" bulb only draws 2.2W at 123V! <A> Transistor's answer is correct, but I'm going to expand on it. <S> You must have had the meter set on 2 amps. <S> From the user manual, the accuracy is +/-(3% + 10), where the 10 means "ten counts". <S> The resolution is 1 mA, so each count is 1 mA. <S> Then the accuracy at, for instance, 40 mA (0.040 A) will be +/-((40 x 0.03) + 10) mA, or basically +/-10 mA. <S> So your reading of 33 mA means the real current could be as high as 43 mA, or as low as 23 mA. 43 mA times 220 volts equals 9.46 W. 23 mA times 220 volts equals 5.06 W. <S> It's also not unreasonable that your 220 VAC varies by as much as 5% (you didn't actually measure it, remember), so your real power could be in the range of 9.96 to 4.8 watts. <S> This does not include the crest factor problems which plague simple power measurements. <S> Finally, if you don't connect anything at all, the meter can have a reading of as much as 20 mA (that, I think, is what the "20 residue" means). <S> Since you are reading less than the rated minimum current (100 mA), you might have an error of as much as 20 mA, which means that your current could actually be as low as 13 mA, and the meter would still be reading within spec. <S> As Transistor says, you need a different meter. <S> Specifically, you need a dedicated power meter which will sample both voltage and current at a fairly high rate, then multiply corresponding samples and do the math.
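The ±(3% of reading + 10 counts) arithmetic above can be wrapped in a tiny helper. Note the answer rounds the result to ±10 mA; the exact figure at a 33 mA reading is slightly larger:

```python
# Reproducing the UT210E-style accuracy spec: +/-(3% of reading + 10 counts),
# with 1 mA per count on the 2 A range. Values are illustrative.
def uncertainty_mA(reading_mA, pct=0.03, counts=10, count_mA=1.0):
    """Worst-case meter error in mA for a given reading."""
    return reading_mA * pct + counts * count_mA

reading = 33.0                    # mA shown on the display
err = uncertainty_mA(reading)     # 33*0.03 + 10 = 10.99 mA
lo, hi = reading - err, reading + err
print(f"true current: {lo:.1f} .. {hi:.1f} mA")
print(f"power at 220 V: {lo*0.220:.2f} .. {hi*0.220:.2f} W")
```

The spread alone (roughly 4.8 W to 9.7 W at 220 V) already brackets both the displayed 7.26 W and the rated 9 W, before crest-factor error is even considered.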
<S> LED bulbs are not like incandescents, which behave like simple resistors. <S> They are non-linear devices which need special attention if you're going to try measuring them. <A> You can't determine if the meter is good or not, just with a calculation of supposed bulb power. <S> You should measure the current with an additional more accurate ammeter in series. <A> Many LED lamps are designed in such a way that they take more than rated power given part of each AC cycle, but then give some of the power back during other parts. <S> A clamp-type current meter without a voltage connection will have no way of distinguishing which way power is flowing during different parts of a cycle, and will thus have no way of subtracting the power which is returned to the mains from power which is taken from them. <S> Instead, both kinds of power will be added together. <S> Incidentally, transformers are rated in units of "VA" rather than "watts" for a reason similar to this: the amount of energy lost in a transformer will be proportional to the magnitude of the voltage times current regardless of which way the power is flowing. <S> If a transformer which is 90% efficient is used to power a device which takes a certain amount of energy from the mains during part of each cycle, and returns it all during the other half, the device being powered wouldn't use any energy, but the transformer itself would waste twice as much as if the device took all the energy it received and simply dissipated it without returning it.
Thus, clamp meters may be good for estimating energy dissipation in a transformer even if they're not good at estimating total energy consumption for some kinds of loads.
GND arriving to the wrong pin of this chip on my PCB I had originally done a PCB for another chip. All pins I'm using (1, 2, 3, 17, 18, 19, 20) are fortunately correct for the ATtiny4313 I'm now using, except GND that arrives on pin #4 of my PCB ... instead of pin #10! An option for my prototype PCB (I don't want to throw them away) would be to use a small wire between GND and pin #10. Question: just for learning purposes, would you see another clever option that wouldn't require a wire? Would something like doing digitalWrite(4, LOW); or analogWrite(4, 0); work and internally wire pin #4 to GND i.e. pin #10? Or wouldn't it work because the ATtiny won't boot first if no GND is connected? <Q> Just make sure that in your code you always leave pin 4 as an input (preferably with pull-up disabled to save power). <S> I wouldn't do this on a production board, but for a prototype it will be fine. <S> Regarding your last point, no, you can't just set the pin to be output low to make the connection, because all pins are high-z by default, so the chip likely won't turn on. <A> It is a hack, but as long as you are not sinking too much current you can use any GPIO pin on this chip as ground. <S> Really. <S> How does it work? <S> Every GPIO pin has a built in protection diode that connects it to ground... <S> Notice that if the voltage on the GPIO pin is lower than the internal ground bus on the chip, that diode will be forward biased and current will flow out the pin. <S> As long as the total voltage between the external Vcc connection and the GPIO-to-ground connection is high enough that the voltage inside the chip is enough for it to operate, then it will run. <S> Note that you have to account for the voltage drop across that diode (likely about 0.6V) and you can not exceed the rated current capacity of that diode (likely single digits of mA). <S> Wire up this circuit... 
<S> simulate this circuit – <S> Schematic created using CircuitLab ...and then program the chip fuses to run at 1MHz and write a tiny test program to set pin P2 to output mode and toggle it at, say, 1Hz, <S> and you should see blinkness. <S> Note that... <S> The power supply voltage must be high enough that the chip sees a good internal voltage even with the drop across the protection diode. 5V should be more than enough, especially with the chip running at 1MHz where it only needs an internal voltage of 1.8V. <S> The power supply voltage must be high enough to be bigger than the forward voltage of the LED including the protection diode drop. <S> Again, 5V is more than enough. <S> The LED should be set up to SOURCE through the chip rather than sink into it. <S> This limits the current flowing through the protection diode on the ersatz ground pin and instead uses the Vcc pin to drive the LED current. <S> NB: <S> For the haters out there who say that connecting the LED directly to the GPIO will make the world explode, I say try it. <S> If you are really worried about destroying humanity in the process, then connect the circuit to a variable supply and start with a low voltage and work your way up. <A> I would cut the track to pin 4, leaving that pin not connected to anything, then connect pin 10 to any handy ground point. <S> This would leave pin 4 free for other use, should you eventually need it, and would ensure that there would be no problem if your program sets it as a high output for any reason. <S> You might want to connect a 0.1 uF capacitor between pins 10 and 20 to ensure you have a good power supply bypass on the chip.
Given you likely aren't running at high speed (100+MHz), a simple "bodge wire" to connect pins 4 and 10 would work reasonably well.
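The voltage margins in the protection-diode trick above can be sanity-checked numerically. The diode drop, minimum operating voltage, and current figures below are assumed typical values, not measured ones; check them against your part's datasheet:

```python
# Sanity check for running an AVR with a GPIO pin as an ersatz ground.
# Assumed typical values: ~0.6 V protection-diode drop, 1.8 V minimum
# internal supply for an ATtiny at 1 MHz.
V_SUPPLY = 5.0       # external supply, V
V_DIODE = 0.6        # forward drop of the pin's protection diode, V
V_MIN_CORE = 1.8     # minimum operating voltage at 1 MHz, V
I_DIODE_MAX_mA = 5   # conservative limit for the protection diode, mA

v_internal = V_SUPPLY - V_DIODE          # what the die actually sees
margin = v_internal - V_MIN_CORE
print(f"internal supply ~{v_internal:.1f} V, margin {margin:.1f} V")
assert v_internal > V_MIN_CORE, "chip would not run"

# Budget the ground-return current: everything except the LED (which is
# sourced from Vcc) must fit through that one protection diode.
i_quiescent_mA = 1.0                     # assumed MCU draw at 1 MHz
assert i_quiescent_mA < I_DIODE_MAX_mA, "too much current through the diode"
```

With 2.6 V of headroom over the 1.8 V minimum, the hack has plenty of margin at 5 V and 1 MHz, which is why the answer can be so cavalier about it.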
Is there a more efficient alternative to pull down resistors? I am building a LED spinner circuit and I am at the point of optimizing it. The whole circuit itself only draws about 10-20mA max. I was today looking at this part of the circuit: Now as you can see, when my switch is at position 5, it turns the circuit off. But, now when my circuit is off, there is still current flowing through the pull down resistor, draining the battery. I know this is a very small current, but I was wondering if there was a way to make this switch so that it does not draw any current when switched off. Edit: I should have maybe put the whole circuit in. <Q> Note that the current is wasted regardless of whether the circuit is "on" or "off" — when it is "on" the voltage drop across R11 is only slightly less than when it is "off". <S> Using a PMOS transistor instead of <S> the PNP would mean that the pulldown resistor could be on the order of megohms, reducing the "leakage" current to microamps. <S> Or you could use a different strategy altogether, eliminating the off-state current entirely: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> Better still, combine both ideas and get minimal wasted current in the on-state, too: simulate this circuit <A> You could use a PMOS FET in place of Q1. <S> Then R11 could be 50k or 100k instead of 10k, reducing leakage in the off position. <S> You could use a separate "off" switch, or a special rotary switch with an "off" position that disconnects VCC from the transistor altogether. <A> Place the anodes at switch pins 1, 2 and 4, with the cathodes tied together to feed the main circuit. <S> Disconnect pin 5 so it becomes "true off". <S> The combined feed to the main circuit will be about 0.25V lower than Vcc. <A> You could replace all of the parts in this design except for the switch, battery, and LEDs with a microcontroller and it would have lower off power, lower running power, and likely even lower cost.
<S> The off power savings are thanks to the fact that a modern microcontroller (like an AVR) can use as little as 0.1uA <S> while sleeping, and can wake on a change on one of its input pins. <S> You connect the micro directly to the power source and then attach the active switch contacts to IO pins. <S> You can enable internal pull-ups on these pins and then use a pin change interrupt to wake from low power sleep. <S> The "off" position need not be connected to any pin - the MCU knows that if none of the other pins are active for more than a certain timeout, the switch is in the off position, and it goes to sleep until the switch is moved. <S> The pull-ups do not use any power when the switch is in the off position. <S> That is the basic idea. <S> Note also that you can directly drive the LEDs from the MCU pins using PWM. <S> This avoids the resistors and also gives you the opportunity to overdrive the LEDs for more brightness, which could make sense for a fidget spinner since you are likely going to have less than 100% duty cycle on those LEDs.
There are also refinements you can add like having the off switch attached to a pin with a pull-up so you can instantly detect it - but then the software disables the pull-up on that pin before going to sleep so again no power drain. You could use three Schottky rectifiers in place of the transistor and pull-down.
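To put numbers on the sleeping-MCU suggestion above, here is a quick off-state drain comparison. The 3 V supply, 225 mAh cell, and 0.1 µA sleep current are assumed illustrative values:

```python
# Off-state battery drain: a 10k pull-down vs a sleeping microcontroller.
# Assumed values: 3 V supply, 225 mAh coin-cell-class battery.
V = 3.0
CAPACITY_mAh = 225.0

i_pulldown_mA = V / 10_000 * 1000   # 0.3 mA continuously through a 10k resistor
i_sleep_mA = 0.1e-3                 # 0.1 uA AVR power-down sleep

life_pulldown_h = CAPACITY_mAh / i_pulldown_mA
life_sleep_h = CAPACITY_mAh / i_sleep_mA

print(f"pull-down drain: {i_pulldown_mA:.1f} mA -> ~{life_pulldown_h/24:.0f} days")
print(f"sleeping MCU:    {i_sleep_mA*1000:.1f} uA -> ~{life_sleep_h/24/365:.0f} years")
```

In practice battery self-discharge dominates long before the multi-century figure, but the point stands: the sleeping MCU is effectively "true off" while a hard-wired pull-down is not.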
Can I measure torque accurately by an electrical or electronic method? I am a B.Tech. student from India. We have been given a project in which we have to design a micro-torsion testing device. We are planning to clamp the specimen and rotate one end by fixed angles using a stepper motor while measuring the torque for a given deflection. The shear modulus and the angle/torque at which the specimen fails have to be found out. The most accurate way to do this seems to be using torque sensors, but since our budget is small (Rs. 10k - around 150 dollars) we can't use this. Another way we had in mind was to measure the current drawn by the motor and find out the corresponding torque. But this seems to be prone to errors. Can someone help me out with this? <Q> Fix a lever of a precise known length to the specimen and add calibrated masses. <S> Measure the deflection with a dial gauge. <S> Perpendicularity might be a concern and could be addressed with geometry... <S> However, the error that introduces is probably small compared to other sources. <S> The sources of errors may well be worth checking. <S> I know that for the dynamometers we used on engines, pumps etc there was no correction, but the beam was usually corrected back to a given calibrated zero point. <A> I think measuring the motor current is not a bad idea. <S> I wouldn't use, however, a strong stepper motor directly but a weaker one in combination with a reduction gear. <S> That way you get many rotations at the motor axis even if the angle of torsion at the specimen is only very small. <S> So the current measurement can be averaged over several turns. <S> I assume that accuracy will then be more than enough for your application (<<20% error). <S> Also measuring the torsion angle will be much easier if done at the motor axis (before the reduction gear); e.g. using an incremental angle encoder, or, if a stepper motor is used, simply by counting the steps.
<A> Make a torsional pendulum and measure its frequency against a known inertia. <S> Orient the test piece vertically, clamp the top end, add a known disk inertia to the bottom end. <S> Now you have a torsional pendulum. <S> Twist the bottom disk a small angle and measure the frequency of torsional oscillation. <S> Just add a contrasting dot to the disk if using a reflective sensor. <S> Or a pair of holes near the disk's edge (for symmetry) if using a slotted optical sensor. <S> Hope this helps! <A> Fixing your sample to the end of a rod that will elastically deform within the range of torques you are interested in measuring, then affixing strain gauges to the rod, would allow the torque to be measured electronically. <S> The resistance of the strain gauge(s) would have to be measured and calibrated against some known torques (which could be applied as suggested in other answers, with a beam and a known mass or force). <A> I think it might be easier to apply a known torque and measure the deflection.
Applying a known torque can be as simple as using your stepper motor to control the position of a weight on a lever, and you can use, for example, optical interferometry to measure the deflection accurately. Optical measurement is easy, and the sensors are inexpensive.
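The torsional-pendulum suggestion above reduces to two textbook formulas: stiffness k = I(2πf)² from the measured frequency, and G = kL/J from the specimen geometry (J = πd⁴/32 for a round wire). A sketch with assumed example dimensions, not real test data:

```python
import math

# Shear modulus from a torsional pendulum (all values below are assumed
# examples): k = I * (2*pi*f)^2, then G = k * L / J with J = pi*d^4/32.
d = 1.0e-3        # specimen diameter, m
L = 100e-3        # specimen length, m
I_disk = 2.0e-5   # known inertia of the added disk, kg*m^2
f = 2.5           # measured oscillation frequency, Hz

k = I_disk * (2 * math.pi * f) ** 2   # torsional stiffness, N*m/rad
J = math.pi * d**4 / 32               # polar second moment of area, m^4
G = k * L / J                         # shear modulus, Pa

print(f"k = {k:.4e} N*m/rad")
print(f"G = {G/1e9:.1f} GPa")
```

Everything the electronics must deliver is one number, f, which is exactly what a cheap optical sensor and a timer can measure well.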
Inductor: how to choose toroidal ferrite core material I want a 2.2mH toroid inductor working at a 200kHz switching frequency, able to carry 2A without saturating. I found a ferrite core provider with these possibilities: Core FT50-61 (12.7x7.15x4.9) with AL=69 needs 178.6 turns; Core FT50-75 (12.7x7.15x4.9) with AL=4200 needs 22.9 turns. So which core should I choose to make a 2.2mH toroidal inductor with 2A saturation current? I suppose that the AL parameter is the key. Can you explain to me why? <Q> With an \$A_L\$ of 69 nH/turn \$^2\$ , getting 2200 \$\mu H\$ requires 178.6 turns, i.e. turns squared x 69 nH = 2201 \$\mu H\$ . <S> With an \$A_L\$ of 4200 nH/turn \$^2\$ , getting 2200 \$\mu H\$ requires 22.9 turns, i.e. turns squared x 4200 nH = 2203 \$\mu H\$ . <S> Both achieve the same inductance but the scenario with the fewest turns will saturate the core to a much greater extent. <S> This is because of the relationship between B (flux density) and H (magnetic field strength). <S> The higher the ratio between B and H, the higher is the core permeability. <S> When \$A_L\$ is a low value the permeability is naturally lower so, for a given H field, B is smaller and hence core saturation problems are not as big. <S> Notice that 178.6 turns is 7.8x as many turns as 22.9 BUT, notice that 4200 nH/turn \$^2\$ is nearly 61 times higher than 69 nH/turn \$^2\$ . <S> This makes all the difference. <S> On the other hand, more turns means more \$I^2R\$ <S> loss, <S> so you need to make choices and optimize what you want/need. <A> Actually, everything matters. <S> It's why I just buy coils... <S> The two biggest things that matter for the specifications you've given are \$A_L\$ , which determines the inductance per turns \$^2\$ , and the point at which the core saturates. <S> If I'm remembering all this stuff correctly, the core will saturate at a certain magnetic field strength, depending on the material.
<S> The field strength depends on the coil current times turns, divided by the average magnetic path through the core. <S> So lots of turns on an itty-bitty core will get you the inductance you want, but the core may saturate. <S> You need to choose a core large enough to get you the inductance you want without saturating the core. <S> While you're stressing out over that, you also need to consider that the core is also lossy, both because it's (possibly) conductive and because reversals in the magnetic field dissipate energy in the core (if you look at the B-H curve of a material, the area inside that curve is proportional to the energy lost each cycle). <S> And, finally, the wire you use is also lossy, and you can only go so big (and in high-frequency applications, it works better to use individually insulated multiple strands, to get a sorta-litz wire effect). <S> If you check out the Fair-rite website, they have a ton of instructional videos. <S> Your cores are probably Fair-rite, or Fair-rite knockoffs, so the videos should be helpful even down to the material designations. <A> As Tim said, everything matters! <S> There are even a few points that the other two answers have missed. <S> Permeability is not a constant. <S> Switching frequency, DC bias, and even temperature affect the permeability of a core. <S> If you're using a company such as Mag Inc, Ferrox, TDK EPCOS or the like, they will have plenty of graphs showing you these property curves. <S> Not all materials are created equally. <S> Some materials have an excellent DC bias and you can put tons of amps through them and only lose ~20% of your initial perm. <S> Others will fall off like a rock! <S> You may have 2.2mH with 23 turns at first, but when you lose 20-30% of that perm (look at the graphs, some cores can lose over 50-60%!) <S> you will not have enough inductance. <S> In this instance, you may do 50 turns for 4.4mH <S> so that when you get to full DC bias you are still above the 2.2mH goal. 
<S> At a maximum of 2A, DC bias probably won't affect you much, but it is something to look out for. <S> I think your primary concern is going to be switching frequency, so just make sure the Permeability vs. Frequency graph from the core datasheet is fairly flat. <S> You always want to calculate your required inductance for the worst case scenario (full 2A, switching frequency, a little margin). <S> Do yourself a favor and check out <S> Magnetics Inc. I use their cores all the time and they have step by step guides for designing inductors and such.
Cores with the same dimensions but having lower permeability (either due to material differences or gapping) will naturally saturate to a lesser extent for the same winding inductance.
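The turns arithmetic in the answers above, N = sqrt(L/A_L), leads directly to a flux-density comparison via B = L·I/(N·A_e). The core cross-section below is an assumed value for an FT50-size toroid, not a datasheet figure:

```python
import math

# Peak flux density for a 2.2 mH / 2 A toroid on each candidate core:
# N = sqrt(L / A_L), then B = L*I / (N*Ae).
# Ae is an assumed ~0.133 cm^2 cross-section for an FT50-size core.
L = 2.2e-3      # H
I = 2.0         # A
Ae = 0.133e-4   # m^2, assumed core cross-sectional area

for name, A_L in [("FT50-61", 69e-9), ("FT50-75", 4200e-9)]:
    N = math.sqrt(L / A_L)
    B = L * I / (N * Ae)   # ignores roll-off of A_L with bias
    print(f"{name}: N = {N:.1f} turns, B = {B:.2f} T")
```

The few-turns high-A_L core sees about 7.8x the flux density, matching the answer's claim. Both figures also come out far above the few-tenths-of-a-tesla where typical ferrites saturate, which suggests an ungapped FT50-size ferrite is simply too small for 2.2 mH at 2 A; a gapped or powdered-iron core would likely be a better fit.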
Determining when a serial connection is opened I have a PIC24FJ MCU with an FT232RL USB to UART controller. My design has end users opening a serial connection to my MCU to set parameters and get debug information. My question is: How can I tell when the end user has opened screen / PuTTY / etc. to my device's COM port? From what I have read, the DTR signal is not a good way to go about this as the OS decides when to raise DTR. My use case is that I would like to display a banner with help commands as soon as the COM port is opened. <Q> You are correct that DTR can be overridden by the OS, but in practice, it is actually a pretty good indicator. <S> And in Linux, it is pretty hard to disable [0]. <S> If you have USB support in your chip, you can implement a CDC class serial device natively. <S> This usually does not require any drivers, and gives you full information, including when the port is open. <S> One very nice property of using an on-chip CDC implementation (as opposed to an external FTDI chip) is that you can ignore baudrate completely -- so the user will be able to talk to your device no matter which speed they choose in PuTTY settings. <S> [0] https://unix.stackexchange.com/questions/446088/how-to-prevent-dtr-on-open-for-cdc-acm <A> I don't think you can rely on the handshaking lines to signal that a user is connected to a serial port. <S> I don't think I've ever made a serial cable with more than RX, TX and GND. <S> You will probably have to wait for the user to hit a key, then respond to that with your banner. <A> All regular terminal software will raise DTR. <S> It's true that the OS (and even application software) can manipulate the DTR signal, but unless there is some vulnerability exposed by having them connected while DTR is not asserted there will be no motivation to de-assert DTR, and no problem caused by that ability. <S> Many terminal programs can briefly disable DTR by pressing a hot-key (traditionally alt-h for "hang-up").
Use DTR; that's what it's for!
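On the firmware side, the DTR approach is just rising-edge detection with a one-shot, sketched here as Python pseudologic. In a real PIC24 design the boolean would come from polling an input pin wired to the FT232RL's DTR# output (active low in hardware; assume it is already converted to an active-high boolean here):

```python
# One-shot banner logic keyed to DTR: fire once per port-open (DTR rising
# edge), not on every poll while the port stays open.
class BannerGate:
    def __init__(self):
        self.last_dtr = False

    def poll(self, dtr: bool) -> bool:
        """Return True exactly once per DTR rising edge."""
        fire = dtr and not self.last_dtr
        self.last_dtr = dtr
        return fire

gate = BannerGate()
# Simulated poll samples: closed, closed, opened, held, held, closed, reopened
events = [False, False, True, True, True, False, True]
banners = [gate.poll(d) for d in events]
print(banners)   # [False, False, True, False, False, False, True]
```

The same structure handles the terminal-hang-up-and-reconnect case mentioned above for free: each reconnect produces a fresh rising edge and a fresh banner.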
CAN bus electrical characteristics I know that the CAN bus has two states: the dominant state, where CANH goes to 3.5V and CANL goes to 1.5V, and the recessive state, where both CANH and CANL go to 2.5V. My question is, how much current does it draw in the dominant and recessive states? <Q> According to this article from Texas Instruments, a typical CAN bus driver output stage will look something like this: <S> Thus, we can expect the bus to draw significantly more current in the dominant state. <S> Unless we know the termination (load) resistance, differential voltage (CANH-CANL), and maximum output current of the driver IC, we can't determine the exact current in the dominant state. <S> In the recessive state, the current will be nearly zero since nothing is actively driving the bus. <A> The receiver interface circuits will be monitoring the bus, either to detect collisions during transmitting, or for external data packets; this requires an analog comparator with about 20 nanosecond delay time, with stable hysteresis, and with input voltage-dividers on both the Vin+ and Vin- pins of the comparator, to level-shift DOWN into 0/5 volt rails from a CAN bus that during one-sided failures (bus shorts) can be up to 40 volts. <S> For 10 nanosecond delay time, allowing another 10 nanoseconds for the analog comparator, and assuming 10pF of ESD and analog-differential-pair gate capacitance, you need 1Kohm resistance in the input voltage-divider, with 9Kohm also used. <S> The receiver circuit looks about like this: simulate this circuit – <S> Schematic created using CircuitLab <A> You may (1) find the information in the datasheet. <S> Some give the information directly, for example the TCAN330 from TI. <S> Other datasheets give it indirectly, for example the SN65HVD23x from TI. <S> Here, the current required for the IC itself is given by \$I_{CC}\$ Supply current. <S> Note the "No load" condition. <S> Typically, the bus load is 60 Ω.
<S> In the dominant state, the differential output voltage <S> \$V_{OD(D)}\$ <S> is typically 2V and at most 3V. <S> So, with a bus load of 60 Ω, there is a typical current drawn of 2V/60 Ω = 33 mA, and at most 50 mA. <S> (1) Unfortunately, not all datasheets are well documented.
Given the internal 2.5 volt biasing, and the voltage-divider inputs, the CAN bus is pulled toward 2.5 volt centering.
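For reference, the datasheet-based arithmetic above in one place (the 60 Ω load is the standard two-terminator bus; the max figure uses the 3 V limit quoted in the answer):

```python
# Dominant-state bus current for CAN: I = V_od / R_load.
# Two 120-ohm terminators in parallel give the standard 60-ohm load.
R_load = 120 / 2                 # ohms
V_od_typ, V_od_max = 2.0, 3.0    # differential output voltage, V

i_typ_mA = V_od_typ / R_load * 1000   # ~33 mA
i_max_mA = V_od_max / R_load * 1000   # ~50 mA
print(f"typical {i_typ_mA:.1f} mA, max {i_max_mA:.0f} mA (plus IC supply current)")
```

The recessive-state figure is near zero by the same logic: with no driver active there is no differential voltage across the 60 Ω load, so only the transceiver's own quiescent supply current remains.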
Can I use this analog switch for CAN bus signals? In the deutsch 9 pin obd interface for heavy trucks, some manufacturers use the pin F,G for J1939 protocol and some use it for J1708 protocol. I am trying to use a NLAS4684 analog switch with dual SPDT configuration to get the desired output based on the pins usage by manufacturers. I am worried about connecting an Analog switch next to the vehicle BUS pin. Is it safe to connect this switch to the diagnostic port to route signals? The Rd_on for this switch is low and the current per channel is around 300mA. <Q> It is apparently a UART-based bus using RS-485 transceivers. <S> Meaning you can't connect it to a CAN transceiver in the first place. <S> J1939 on the other hand is a standardized CAN-bus protocol, using CAN. <S> So, no you cannot use analog switches or dip-switches etc because you don't have two CAN buses. <S> Instead connect the J1708 through a RS-485 to the UART peripheral of the MCU, completely separated from CAN. <A> The datasheet you linked to shows the permitted voltage range on the analogue pins of this switch IC is from 0.5 V below GND to 0.5 V above Vcc, and the maximum recommended Vcc is 5.5 V. <S> This does not look adequate for the expected common mode voltage range of a CAN bus. <S> If you want to be on the safe side the most obvious suggestion would be to use relays, since your application doesn't need frequent switching. <S> This answer points out some issues with using relays to switch a live CAN bus but that doesn't sound like a problem for your application where you want to set up the configuration of a test system (presumably) before you connect it to the vehicle. <S> If you really want the switching to be solid-state you'll need to find out the maximum expected voltages on both bus types and find an appropriately specified part. <S> I suggest you get this information from the standards for each bus type and/or from the datasheets for transceivers for each bus type. <A> So... 
you are willing to risk signal degradation and a chance for voltage mismatch if switch is in the wrong position (with damaged electronics as a result), all for dubious convenience of having one connector and a switch instead of two connectors? <S> The components for this are different for two protocols, so you cannot put them before the switch. <S> And after the switch they don't make much sense. <S> My advice would be to use two clearly marked connectors, each routed to its own transceiver and equipped with suitable protection. <S> If nothing else, it would make PCB routing easier and traces shorter.
I have never used J1708, but according to Wikipedia, J1708 is not a CAN bus. Also, diagnostic equipment usually has heavy TVS protection on its inputs.
How can I float a pin that otherwise should be low? I want to enable/disable a TI TPS54302 buck regulator with a microcontroller. The enable (EN) pin should float to enable the device, or be tied low to disable it. I currently have the EN pin connected to a GPIO pin on the MCU. At startup, before the pin state can be set low, it will sometimes already be floating, so the regulator operates for a brief time before the pin is intentionally set low. I'd like to add an external pull-down resistor to ensure the regulator stays off until it is supposed to be on, but that would prevent floating the pin. I presume I could simply pull the pin high (instead of floating it) and achieve the desired result. The TPS54302 datasheet says: The EN pin has an internal pullup current source which allows the user to float the EN pin to enable the device. If an application requires control of the EN pin, use open-drain or open-collector output logic to interface with the pin. If I tie the EN pin to ground with a 10kΩ resistor, and pull the MCU pin high when I want the regulator to operate, is that a viable solution? I'm not concerned about small (~1mA) constant current usage as this is a line-powered device. <Q> Yes, you're fine with your approach. <S> The EN pull-up current is less than 2uA, so a 10K will allow you to remain below the threshold, and the microcontroller output will be able to pull it up to 5V (or at least greater than the threshold). <S> Just stay below 7V on the enable pin and you'll be fine. <S> As you pointed out, an open-drain output on your micro doesn't really solve the problem of keeping the device disabled while the uC boots up. <S> Note datasheet specs below: <S> [EDIT for more clarity:] <S> So you can't pull the enable to Vin or drive it with an open collector with a pull-up tied to Vin. <S> You don't HAVE to just float the pin or pull it low.
<S> Note the UVLO level modification circuit in the datasheet: <A> Here is a schematic that will achieve the stated goal: simulate this circuit – <S> Schematic created using CircuitLab <S> When the GPIO is floating (i.e. the microcontroller is off / booting), the transistor is turned on by R1 and EN is driven low. <S> When the GPIO is asserted low by the microcontroller, it turns off the transistor, and EN is floated. <S> I cannot conceive of a simpler way to satisfy the requirement. <S> The only two states experienced by the EN pin are low and floating. <S> A more conservative design would include a 100 Ohm series resistor between GPIO and the transistor gate. <A> An open drain buffer (like the NC7WZ07) would work. <S> When you pull the buffer high, the buffer goes to high impedance and enables the TPS54302. <S> To keep the buffer from operating during startup the pullup can be used before the buffer. <S> simulate this circuit – <S> Schematic created using CircuitLab Source: https://www.onsemi.com/pub/Collateral/NC7WZ07-D.PDF
Not necessary, but if GPIO is asserted high, it also turns on the transistor and drives EN low. The reason for the recommendation for the open drain approach is that this part allows Vin up to 28V, but the EN pin is only rated to 7V abs max. Nothing prohibits driving the pin from an open collector only, an open collector pulled up to (e.g.) 3.3V or 5V, or a push-pull output in the right voltage range.
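A quick numeric sanity check of the 10k pull-down + push-pull GPIO scheme discussed above. The ~1.2 V enable threshold is an assumed typical figure, not quoted in the thread (check the TPS54302 datasheet); the 2 uA pull-up current is the bound quoted in the answer.

```python
# Sanity check of the 10k pull-down + push-pull GPIO scheme.
I_PULLUP = 2e-6        # A, internal EN pull-up current (max, per the answer)
R_PULLDOWN = 10e3      # ohm, external pull-down
V_EN_THRESHOLD = 1.2   # V, assumed typical EN rising threshold
V_GPIO_HIGH = 5.0      # V, MCU push-pull high level

v_en_off = I_PULLUP * R_PULLDOWN    # EN voltage while GPIO floats or is low
i_waste = V_GPIO_HIGH / R_PULLDOWN  # extra current burned while enabled

print(f"EN while off: {v_en_off * 1000:.0f} mV, "
      f"wasted current while on: {i_waste * 1000:.2f} mA")
assert v_en_off < V_EN_THRESHOLD    # regulator stays disabled at boot
assert V_GPIO_HIGH < 7.0            # under the 7 V abs-max on EN
```

The 20 mV off-state level leaves a large margin below any plausible threshold, and the 0.5 mA burned through the pull-down while enabled is negligible for a line-powered device.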
Are there any free simulators for SystemVerilog? Are there any free simulators available for a hardware design coded in SystemVerilog? In particular, I need SystemVerilog's dynamic arrays. <Q> All the versions of Modelsim: Student Edition (SE), the FPGA simulation tools released with Intel Quartus (IE), MicroSemi Libero (ME), and Xilinx Vivado (XE), support all SystemVerilog constructs with the exception of randomize, covergroup, and assertions. <A> Select Standard and version 16.1 if you don't want to download 6GB. <S> Available for Linux and Windows. <S> You might need to create an account to be able to download the file. <S> You can create one for free. <A> Xilinx Vivado 2020.1 supports UVM 1.2 and many features of SystemVerilog. <S> It supports the same in the WebPack (freeware) version. <S> I developed a tool "tbengy" to generate a UVM TB and Makefile. <S> You can read the instructions on https://github.com/prasadp4009/tbengy
Try the free Modelsim-Intel FPGA edition: https://www.intel.com/content/www/us/en/software/programmable/quartus-prime/model-sim.html Works great and is closer to what you can find in professional environments.
Current booster for relay I have a DC power source which can provide up to 5V, 50mA (LabJack maximum output current/voltage, bus-powered) and I want to power my relay (SDT-S-105LMR2,000) with 5V, 100mA. I have the option to use an NPN transistor with a DC current gain of approximately 2, but the 0.7V voltage drop is not that nice. First question - is the circuit logic correct (more attention on the added current booster)? Second question - because I am building a 16-channel relay board, is there a more convenient option for boosting the current for each of the relays (preferably fewer components)? Third question - will the LabJack-supplied power be a good enough source for 16 relays (working at different times)? simulate this circuit – Schematic created using CircuitLab <Q> You can't, the way you are trying it. <S> Your source is 250mW, your relay requires 500mW. <S> You have not got enough energy. <S> A transistor can amplify current but not out of thin air. <S> It needs to come from somewhere. <S> It is often much lower than the 'attack' * or "pick" current. <S> In that case you might store energy in a capacitor and use that to get the relay to 'start', and then your 50mA may be enough to 'hold' it. <S> *I can't remember if that is the right term... <A> A slight modification to your design. <S> A mosfet can have a lower voltage drop than a BJT due to small Rds-on (10mΩ is common). <S> It's not about transistor current gain; you need to operate the transistor in saturation mode. <S> This is much easier to do with a mosfet. <S> No need for the upper transistor Q2. <S> I added C1 as a current reservoir, like @Oldfart alludes to, to get the pick current. <S> If you don't have a 1000uF cap laying around you can parallel up a few smaller values to get you close. <S> All solenoids/relays have a pick current and a hold current. <S> Although the manufacturer may not tell you both.
<S> The pick current is the amount of current needed to get the relay to move its armature. <S> R1 can be anything from 10Ω to a few kΩ. simulate this circuit – <S> Schematic created using CircuitLab <A> Q2 is not needed and R1 could be larger, say 10k. <S> I suggest you check out the ULNxxxx series of relay drivers. <S> There you have say 8 drivers in one DIL package. <S> Anyway, you will need an extra power source. <A> Thank you for the answers, it really made me think, and I find the solutions of @Aaron and @Oldfart well fitting for one relay. <S> Only, what I find disturbing is when I have 16 charging capacitors on only a 5V, 50mA power source, which means that the current will be divided. <S> My conclusion is to use an extra power source with a suitable power rating as @AndersG said (I did not want to use the ULNxxxx series of relay drivers, because I need I2C or SPI).
What you can try is to see what the relay hold current is.
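A back-of-envelope sizing of the reservoir capacitor suggested in the answers above. The 10 ms operate time is an assumed figure, not from the thread; check the relay datasheet.

```python
# Rough sizing of the reservoir capacitor for the pick/hold scheme.
I_PICK = 0.100      # A, pick current (5 V / 50 ohm coil)
I_SOURCE = 0.050    # A, LabJack supply limit
T_OPERATE = 0.010   # s, assumed relay operate (pick) time
C = 1000e-6         # F, proposed reservoir capacitor

deficit = I_PICK - I_SOURCE       # current the capacitor must make up
droop = deficit * T_OPERATE / C   # voltage sag while the armature moves
print(f"droop during pick: {droop:.2f} V")
```

A 0.5 V sag still leaves the coil well above any plausible hold voltage, which is why the 1000uF value quoted in the answer is a reasonable starting point.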
Can a single phase SMPS be converted to run from three phase? If you have a switch-mode power supply that runs off 240 VAC single phase, is it possible to convert it so that it runs off 3P+N (415V phase-to-phase) by only changing the bridge rectifier? I have been researching 3ph rectifier designs but most of them seem to focus on producing rectified output without using the neutral input, which would produce a voltage that is probably too high to feed into an SMPS designed for single-phase operation. Is there a 3ph rectifier design that can be connected to a 3P+N supply, that will produce the same peak voltage as the single phase rectifier, and will allow the SMPS load to be shared equally amongst the three phases? If so, what is its behaviour if one of the phases or the neutral is lost? <Q> This theoretical discussion assumes we are talking about a well designed, good quality SMPS, not a cheap no-name one from Ebay! <S> If the Phase to Phase voltage is 440V RMS then the Phase to Neutral will be 253V RMS, and this is what many domestic single phase supplies deliver, although nowadays it's specified as a nominal 230V. <S> So a single phase and neutral is essentially what you have now as the input to the SMPS. <S> Consider a full wave bridge rectifier; this is almost certainly what your SMPS currently has at its input. <S> This would deliver about 358V DC if supplied by 253V AC. <S> Now consider adding a second (and third) bridge rectifier to the other two phases and commoning their DC outputs. <S> The DC output voltage will remain the same, but will contain a lot less ripple. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> So, at least in theory, you could substitute a 3-phase full wave bridge rectifier for the existing single phase one. <S> BUT it doesn't end there. <S> The SMPS will have common mode filters and class-Y capacitors on the input before the rectifier and these should be duplicated on all 3 phases as well.
<S> Getting all these additional components safely into the original case would prove quite a challenge. <S> Smoothed rectified 3-phase is nominally the same as single phase (with less ripple), but would tend to be slightly higher as the extra phases will "fill in" the dips. <S> Again a good quality SMPS would have components following the rectifier of adequate rating so they should (no guarantees) be able to cope with the potentially higher DC voltage, but you would need to check very carefully first. <S> All in all it doesn't seem worth the effort. <S> I can't find any references to this particular circuit online; the nearest are either 3-phase half-wave rectifiers or circuits involving transformers with 3 centre-tapped secondaries with the centre taps commoned, giving 6 phases which get half-wave rectified. <S> There almost always appears to be a transformer involved and nobody is advocating direct rectification of a raw 3-phase supply. <S> I am sure that there are plenty of power engineers out there who will tell you exactly why you shouldn't do this! <A> Is there a 3ph rectifier design that can be connected to a 3P+N supply, that will produce the same peak voltage as the single phase rectifier, and will allow the SMPS load to be shared equally amongst the three phases? <S> Sure, just use half-wave rectification (i.e. a single diode) from each of the phase voltages and you get the same peak voltage after this rectification with respect to neutral. <S> This has the advantage over a conventional single-phase full-bridge rectifier in that ripple voltage is slightly less. <S> If so, what is its behaviour if one of the phases or the neutral is lost? <S> Now that's harder to predict without knowing the SMPS design; with one phase lost the ripple voltage will be "good" for two thirds of the overall period, then significantly worse for one third (the third that loses its phase voltage) of the overall period. <S> If you can get hold of a free sim tool, it can be easy to see.
<A> The answer to the question as stated is no. <S> The three-phase load would be balanced reasonably well. <S> The SMPS could be designed to tolerate the loss of a phase. <S> A neutral connection would not be required. <S> There are many SMPS on the market that accept a range of 100 Vac or so to 250 Vac or so. <S> Many if not most of those, including those that don't advertise the fact, also accept DC. <S> SMPS that accept voltages above 250 V are likely designed for specific industrial uses.
A SMPS could be designed to accept a wide range of unfiltered, rectified single or three-phase voltages.
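The "same peak voltage, less ripple" claim for per-phase half-wave rectification to neutral can be checked numerically. This sketch assumes ideal diodes and no smoothing capacitor.

```python
import math

# Compare single-phase full-wave vs 3-phase half-wave (one diode per phase
# to neutral) rectification. Ideal diodes, no smoothing capacitor.
V_RMS = 230.0
vp = V_RMS * math.sqrt(2)          # peak of each phase-to-neutral voltage
N = 10000                          # samples over one mains cycle

single = [abs(vp * math.sin(2 * math.pi * t / N)) for t in range(N)]

three = []
for t in range(N):
    phases = [vp * math.sin(2 * math.pi * t / N - k * 2 * math.pi / 3)
              for k in range(3)]
    three.append(max(0.0, max(phases)))  # only forward-biased diodes conduct

print(f"single-phase FW: peak {max(single):.0f} V, min {min(single):.0f} V")
print(f"3-phase HW     : peak {max(three):.0f} V, min {min(three):.0f} V")
```

Both waveforms reach the same peak, but the 3-phase half-wave output never falls below vp*sin(30 deg) = 0.5*vp, while the single-phase full-wave output dips to zero, which is the ripple advantage both answers mention.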
Designing a PCB with variable-sized component alternatives I'd like to get a little PCB manufactured with a basic circuit that I can populate with components of various sizes. E.g. have a place where any of a range of possible capacitor sizes could go. So I would have a chain of leads, like this, for example, where depending on size, I would use a closer or further-apart lead for the component in question: ---O-O-O O--- I'm finding it difficult to convince KiCad or Fritzing to let me do that. They seem to think that any component has exactly one size, and I can't find an "empty lead" part to add either. Any advice on how best to go about this? <Q> While I remember there being a few more flexible footprints (especially the "hand soldering" SMD footprints), this simply screams "you want something oddly specific, so go and design your own footprint". <S> It's surprisingly easy! <S> If you want, you can also do the nice FOSS thing and then upstream your new footprint to the kicad-library. <A> I make double and triple footprints in KiCAD quite regularly and it works out quite well. <S> You need to be careful about production constraints (avoid holes in SMD pads, etc), but KiCAD does allow you to do what you want. <S> There are several ways to go about it: <S> Superpose all the footprints one on top of the other. <S> You need to ignore the specific DRC errors about overlapping courtyards, and sometimes place holes very precisely in the same location. <S> In your example, you would give the same PAD index number to a group of PADs. <S> Once you do your PCB layout, the router will request you to add the traces between the PADs that have the same indexes. <S> So you still have some freedom in how you want to route this. <S> Here is an example where I superpose two different rectifying bridges.
<S> The center is slightly different so that the PADs do not superpose: <S> Here is an example where I can put one out of two fuse holders using a single footprint: <S> And another case where I superposed two RJ45 connectors - one SMD and one THT - the "holes" are superposed: <A> I generally would not suggest combining multiple components into one footprint/symbol. <S> This will hide a lot of information from a reader of the schematic. <S> It is much better to have one symbol per possible alternative in the schematic (connected in parallel). <S> This allows you to create a BOM for every variation as required. <S> (This could lead to false positive ERC errors/warnings depending on how the symbols are designed. <S> Sadly there is no "do not show this one message in the future" option in KiCad.) <S> You can then overlap the footprints as needed in pcbnew, but make sure not to overlap pads as this can create trouble with soldering. <S> There simply is a tradeoff between component selection flexibility and board size. <S> (This restriction also applies when making a single footprint for all possible components.) <S> Another benefit to having one symbol/footprint per possible combination is that you still get one centroid per possible part in the pos file, which is required for automated manufacturing. <S> Meaning combining different options within a footprint will likely restrict your options for automated manufacturing. <S> (You never know if you might not in some future want to go this route for some of your projects.) <A> You could do this in Fritzing using a 1 x n pin header component to add the n extra holes (a 1-pin header is your 'empty lead component'). <S> Place the header on the PCB layout first, place the copper tracks as you want them, then go back to the schematic view and tidy up there. <S> You'll probably want to hide the silkscreen for the header.
<S> This won't leave you with a fully professional looking schematic, but if you needed that you wouldn't be using Fritzing. <S> It should achieve what you need on the PCB though.
Create a specific footprint.
Adding different voltage battery to same circuit I have a 9 volt battery stepped down to 5 volts using a step-down converter. I want to enable the step-down converter using a timer IC (max input voltage 6 volts); I will use a 3.7 V battery for this. So my question is: can I join the grounds (negative poles) of the batteries in the same PCB or circuit? Will I have a problem with the signal voltages used to enable the step-down converter and power off the timer from the MCU's 5 volts? <Q> You should have no problem interconnecting all the grounds. <S> As a matter of fact, since you want a common "off" signal from the uC, connecting the grounds would be required anyway. <A> There's no problem with the circuit as drawn. <S> Remember that voltages are always relative. <S> Tying the negative poles of the two batteries together only ensures that those poles are at the same voltage, which I'm going to call "0V", just because I can. <S> You can call it whatever you want, but I'm going to use zero for simplicity. <S> The "3.7V" only indicates that the positive pole of the battery is 3.7V above the negative one, but doesn't say anything beyond that. <S> So, the timer will still see 3.7V across its power supply pin and ground pin, the step-down circuit will see 9V across its pins, and the MCU will see 5V across its pins. <A> You need a self-approving reply relying on incomplete schematics? <S> Try it, it might work. <S> You need a detailed proper answer? <S> It depends on the EN input of the regulator. <S> Some of them use an open-drain solution, with voltage capabilities between the supply of the buck and some logic level. <S> In that case, provided you are within the datasheet boundaries, it is OK to connect the grounds. <S> BUT a different circuit than an open drain requires analysis: is the EN input a logic input with well-determined maximum AND minimum LOGIC levels? <S> Are there bypass diodes in the chip? <S> But for this, we need to know the architecture of the EN input with respect to the whole buck regulator.
<S> And with that schematic, we don't.
Yes, you can join the grounds of different voltage supplies together. You might discover the logic signalling is incompatible and you need to put something more than just a wire to connect the EN input, ranging from a logic level converter down to just a resistor.
PCB design for 50MHz I'm designing a rather big PCB which will have 64 Nokia 5110 LCDs arranged in an 8x8 matrix, all controlled by a single microcontroller. Each screen will have an 8-bit shift register as a buffer, to compensate for the mismatch between the microcontroller's ~50MHz max SPI clock and the LCDs' 4MHz. Therefore, there will be 64 shift registers in series at 50MHz on an approximately 30cm x 30cm PCB. My question is then, what are the problems I should account for in such a design? Assuming I place each screen's shift register behind it, I would have really long (approximately 250cm) zig-zagging traces for the clock and data lines. This seems pretty ridiculous, but at the same time I've had no problem working with a breadboard and jumper wires at close to those frequencies, the only difference being the wire length. I could also clump all the shift registers near the microcontroller so the longest lines will be the ones leading into the screens at the lower frequency, but the traces would still be pretty long. I'd appreciate any help and suggestions to avoid having many failed PCB designs. <Q> At 50MHz, the wavelength in the PCB is about 20ns x 15cm/ns = 300cm. <S> The time of travel in a wire of 30cm is 2ns. <S> If you want to avoid having to treat your traces as transmission lines, you must keep them under 1/20th of the wavelength, which is 15cm. <S> And at that length you need to account for the 1ns signal delay when checking setup and hold times. <S> You also need to avoid crosstalk between parallel wires by keeping them 3x their width apart. <A> I would route the high frequency signals as controlled impedance. <S> I would also add a termination resistor to each signal to avoid reflections. <S> You can either use series termination at the source of each signal or parallel termination at the destination of each signal. <S> Also, the 50MHz signal period is much less likely to be a problem than the rise/fall time of the signals.
<S> If you can keep the rise/fall times slower it will help avoid reflections. <S> Assuming your shift registers only shift on a clock edge, the most important signal to consider will be the clock signal on the shift registers. <S> Even if you get reflections on the data lines it probably won't matter as long as the signals settle to their final value before the next clock edge. <S> But if you get a reflection on the clock then your shift register may see two or three clocks and all of your data will be shifted over. <A> My question is then, what are the problems I should account for in such a design? <S> In my designs with an STM32F it can be difficult to reach 50MHz. <S> The first problem will be GPIO capacitance: the capacitance of the GPIOs themselves may be problematic even with short runs, and the gate capacitance of the receiving port must also be taken into consideration. <S> The other problem is needing a very fast microprocessor if bit-banging. <S> If not bit-banging then you will need to use SPI hardware ports. <S> If the design works on a breadboard, the parasitics are much smaller on a PCB, so theoretically you should have minimal problems if you move it over to a PCB. <S> Use a 4-layer design with the clock on the top, not running through vias, and ground on the layer directly below.
I have a hard time believing that you can achieve 50MHz on a breadboard, but it depends on the setup (I'd like to see a scope trace and pic of the setup).
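The rules of thumb quoted above (wavelength, the 1/20 critical length, board-crossing delay) reduce to a few lines of arithmetic; 15 cm/ns is the approximate propagation speed in FR-4 used in the answer.

```python
# Rule-of-thumb transmission-line numbers for the 50 MHz clock.
v = 15.0                          # cm/ns, approx. propagation speed in FR-4
f = 50e6                          # Hz, SPI clock
period_ns = 1e9 / f               # clock period
wavelength_cm = v * period_ns     # one wavelength on the board
critical_cm = wavelength_cm / 20  # 1/20 rule: max "lumped" trace length
delay_30cm_ns = 30.0 / v          # delay crossing the 30 cm board

print(f"period {period_ns:.0f} ns, wavelength {wavelength_cm:.0f} cm, "
      f"critical length {critical_cm:.0f} cm, 30 cm delay {delay_30cm_ns:.0f} ns")
```

This reproduces the answer's figures: a 20 ns period, 300 cm wavelength, 15 cm critical length, and 2 ns across the board, which is why a 250 cm zig-zag trace must be treated as a transmission line.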
Why exactly does maximum power transfer happen at 50 ohms (matched impedance)? As the title mentioned - I am not sure why exactly the maximum power delivered to the load will be max when R_L is 50 ohms. If I guess why, it's because if the resistance was greater than 50 ohms then the current will be lower, but if it was less than 50 ohms (e.g. 25 ohms), then the constant 50 ohm resistor would deliver the majority of the power instead of it going to the load. Why does the maximum power transfer happen at 50 ohms? <Q> The power delivered to the load is from the Joule heating effect: <S> \begin{equation}P=\dfrac{\Big(\dfrac{R_L}{R_L + 50}G\,V_{IN}\Big)^{2}}{R_L}\end{equation} <S> From differential calculus we know that a function reaches its maximum or minimum value where its derivative equals zero. <S> In this case we'll have a maximum value, thus: \begin{equation}\dfrac{dP}{dR_L} = (G\,V_{IN})^2 \dfrac{(R_L+50)^2 - 2\cdot R_L\cdot(R_L + 50)}{(R_L+50)^4}\end{equation} <S> Finally, by setting \$\dfrac{dP}{dR_L} = 0\$ we only need to care about the numerator, since the denominator cannot make the equation equal to zero for any real value. <S> Thus we have: \begin{equation}(R_L+50)^2 - 2\cdot R_L\cdot(R_L + 50) = 0 \implies (R_L)^{2} = 2500 \implies R_L = 50\,\Omega\end{equation} <A> When you lower the load resistance, you are decreasing its share of the voltage (and thus power); but you are increasing the total current (and thus power). <S> So which direction power goes, up or down, depends on which effect is stronger. <S> And as it happens, they cross over at 50 ohms (that is, when load resistance is equal to source resistance). <A> As a rule, maximum power transfer into a load fed through a series source resistance always happens when the two resistances are equal. <S> Use this rule as a shortcut when designing anything from antennas to transmission lines.
<A> Another way to solve this, that doesn't involve actually doing any differentiation. <S> Let \$R_S\$ be the source resistance, \$R_L\$ be the load resistance, \$V_{RL}\$ be the voltage across the load, \$V_{RS}\$ be the voltage across the source resistance, \$P_{RL}\$ be the power delivered to the load resistance and \$I\$ be the current. <S> \$G\,V_{IN}\$ and \$R_S\$ are outside of our control. <S> We control \$R_L\$ and our goal is to maximise \$P_{RL}\$. <S> $$G\,V_{IN} = V_{RS} + V_{RL}$$ $$P_{RL} = I\,V_{RL} = \frac{V_{RS}}{R_S}V_{RL} = \frac{V_{RS}V_{RL}}{R_S}$$ <S> So to maximise the power to the load we need to maximise \$V_{RS}V_{RL}\$; since this is a quadratic it has exactly one turning point, and by symmetry that turning point must be at \$V_{RS} = V_{RL}\$, which in turn implies \$R_S=R_L\$.
Intuitively: when you raise the load resistance, you are increasing its share of the voltage (and thus power) versus the other resistance; but you are decreasing the total current (and thus power).
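As a cross-check on the calculus above, a brute-force sweep of \$R_L\$ finds the same optimum (assuming a 50 ohm source and a normalized 1 V open-circuit drive, i.e. \$G\,V_{IN}=1\$):

```python
# Brute-force cross-check: sweep R_L and maximise P = (V*R_L/(R_L+R_S))^2 / R_L.
R_S, V = 50.0, 1.0
p_max, r_best = max(
    ((V * rl / (rl + R_S)) ** 2 / rl, rl)
    for rl in (r / 10 for r in range(1, 2001))  # sweep 0.1 .. 200 ohm
)
print(f"maximum power {p_max * 1000:.2f} mW at R_L = {r_best:.1f} ohm")
```

The sweep peaks at exactly 50 ohm, matching both the derivative argument and the voltage/current trade-off intuition.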
How do I compute the closed-loop gain when using an op-amp with finite open-loop gain? My future fellow electrical engineers, I can't figure out how one gets -500 (b) as the open-loop gain. Using node analysis: \$i_1 = \frac{v_- - v_{in}}{100 Ohms}\$ \$i_2 = \frac{v_- - v_{out}}{100k Ohms}\$ and then substituting in \$v_- = v_+ - \frac{v_{out}}{A}\$, as the open-loop gain of an op-amp is not infinite. However, using this method, I got a wildly different answer (assuming I solved for \$\frac{v_{out}}{v_{in}}\$ correctly). <Q> The question is not asking for the open-loop gain. <S> The question is telling you the open-loop gain is 1000. <S> Let's assume Vout is 1V. <S> Then V- must be -0.001V (because of the open-loop gain). <S> Then the current through the 100k will be 1.001V/100k = 10.01uA. <S> The same current flows through the 100 Ohm resistor. <S> So Vin is -0.001V - 10.01uA * 100 Ohms = -0.002001V. <S> So the closed-loop gain is 1/(-0.002001), which is about -500. <S> In an ideal op-amp, the gain for this inverting configuration would be Gideal = -R2/R1 = -100k/100 = -1000. <S> There is also a general formula for op-amps when open-loop gain is not infinite. <S> The formula is: Gain, G = Gideal * (A / (A + 1 + R2/R1)), where R2 is the feedback resistor, R1 is the other resistor, and A is the open-loop gain. <S> This also holds true for non-inverting op-amps. <S> If that formula looks familiar, maybe you were supposed to memorize it for this class.
<A> You have all the necessary equations: \$i_1+i_2 = \frac{v_--v_{in}}{100}+\frac{v_--v_{out}}{10^5}=0\$ <S> Multiply by \$10^5\$: \$10^3(v_--v_{in})+(v_--v_{out})=0\$ <S> Substitute \$v_-=-\frac{v_{out}}{A}=-\frac{v_{out}}{10^3}\$: \$10^3(-\frac{v_{out}}{10^3}-v_{in})+(-\frac{v_{out}}{10^3}-v_{out})=0\$ <S> But \$v_{out}>>\frac{v_{out}}{10^3}\$, hence \$(-v_{out}-10^3\,v_{in})-v_{out}=0\$, \$\therefore\; 2\,v_{out}=-10^3\,v_{in}\$, giving \$\frac{v_{out}}{v_{in}}=-500\$. <A> IMHO this is not the type of question where a "long" mathematical analysis is intended. <S> You can use the following approach to calculate the closed-loop gain: <S> This is an inverting amplifier, with ideal opamp inputs (no current). <S> We do not care about voltage limits; we do a linear analysis. <S> The output voltage is A times (1000 times) the difference on the inputs. <S> We imagine that the minus input is at -1V (this is an educated choice which I made because the + input is at GND and I want a simple positive input differential). <S> The +/- difference is 1V and the output is at A*1 or 1000V. <S> So the voltage across the 100k resistor is 1001V (1000V - (-1V)). <S> The voltage across the 100 Ohm resistor is one thousandth of that, so 1.001V (because the current flowing in both resistors is the same). <S> The voltage at \$V_{in}\$ is therefore (-1.001V-1V)=-2.001V. <S> \$V_{out}/V_{in} = 1000V/-2.001V \approx -500\$ <S> Conclusion: the gain is close to -500, answer (b). <S> The trick is that I "force" the voltage on the opamp input, and I calculate the other values from there. <S> To be safe, you should verify the result by checking that -2.001V on the input and 1000V on the output are correct by finding the voltage on the negative input and making sure that it is -1V. <S> If not, you made a mistake in the analysis.
You are supposed to calculate the closed loop gain, given that the open loop gain is 1000.
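The finite-gain formula quoted above can be checked by plugging in the question's values (R1 = 100 Ohm, R2 = 100k, A = 1000):

```python
# Check of the quoted finite-gain formula: G = Gideal * A / (A + 1 + R2/R1)
R1, R2, A = 100.0, 100e3, 1000.0
g_ideal = -R2 / R1                    # ideal inverting gain: -1000
g = g_ideal * A / (A + 1 + R2 / R1)   # correction for finite open-loop gain
print(f"closed-loop gain = {g:.2f}")  # -> closed-loop gain = -499.75
```

This agrees with both worked answers: the ideal -1000 is halved to roughly -500 when the open-loop gain is only 1000.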
How to determine direction to read resistor color codes This resistor can be read either way, so what is the correct value? Does the size of the line have any significance? <Q> The first thing to consider is that only certain colored bands are used to indicate tolerances; such bands are always read last. <S> The colors used to indicate tolerances are as follows: Brown ±1%, Red ±2%, Green ±0.5%, Blue ±0.25%, Violet ±0.1%, Gray ±0.05%, Gold ±5%, Silver ±10%. <S> Since this resistor has a white band at one end, you would start at that end, since white is not a valid band color to indicate tolerance. <S> Therefore this resistor has a value of 984 Ohms ±1%. <S> Hope this helps. <A> The correct value is whatever you measure it to be; color codes help but they are not the final say. <S> Your case seems quite peculiar in that it indeed seems to work both ways. <S> I see it as a 5-band code myself, so I would parse this as 984 ohms. <S> Is it possible for you to measure with a tester to confirm the value? <S> No matter what we all say here, that will be the answer. <A> While not always obvious, it appears that the gap between the black and brown is slightly larger than that between the two white lines. <S> This is even more important when at times you have a resistor with both a tolerance band and a temperature coefficient band. <S> Here are a couple of examples. <S> Image from post "When you 5 band resistor is not a 5 band resistor"
One item to also look at is the resistor band spacing.
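For illustration, a minimal 5-band decoder in the spirit of the rules above. The band sequence used here (white, gray, yellow, black multiplier, brown tolerance) is an assumption chosen to reproduce the 984 Ohm ±1% reading from the answer; the real bands are only visible in the photo.

```python
# Illustrative 5-band resistor color-code decoder (band order is assumed).
DIGITS = {"black": 0, "brown": 1, "red": 2, "orange": 3, "yellow": 4,
          "green": 5, "blue": 6, "violet": 7, "gray": 8, "white": 9}
TOLERANCE = {"brown": 1, "red": 2, "green": 0.5, "blue": 0.25, "violet": 0.1,
             "gray": 0.05, "gold": 5, "silver": 10}  # percent

def decode5(b1, b2, b3, mult, tol):
    """Three significant digits, a multiplier band, and a tolerance band."""
    value = (DIGITS[b1] * 100 + DIGITS[b2] * 10 + DIGITS[b3]) * 10 ** DIGITS[mult]
    return value, TOLERANCE[tol]

ohms, pct = decode5("white", "gray", "yellow", "black", "brown")
print(f"{ohms} ohms +/- {pct}%")  # -> 984 ohms +/- 1%
```

Note that the tolerance table doubles as the direction test described in the answer: if the band at one end is not a key of TOLERANCE (white, for example), that end must be the starting end.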
Simplest way to reduce voltage from ~48v to ~36v (I'm very much a beginner with electronics, so I apologise in advance.) I'm trying to use a 48v battery on my 36v ebike controller. The controller itself can deal with the higher voltage, but it has a hardcoded limit of 44v, which it reads from a wire coming from the display. So my aim is therefore to lower this voltage somehow. The parameters are: Input is 39-55v, output needs to be between 29-44v. Current on the wire is 80-120mA. It would be great if this could be done proportionally, or with a constant voltage drop, instead of just regulating to a set output voltage. A single-component solution would be best, since it would be easier to waterproof for riding in rain. Solutions I have researched so far: TVS diode: A guy here used one for this purpose (52v battery but otherwise the same situation) and he said it worked. But I've read that they're not supposed to be used continuously like this, only for spikes in voltage - does that mean it could fail if used like this? Also I have no idea how to choose one for a specific voltage drop - I have read many explanations and data sheets but don't understand the difference between clamping, breakdown and working voltage. The guy in the linked post seemed to choose based on breakdown voltage = desired voltage drop, but this doesn't seem to be what breakdown voltage should mean? Zener diode: To calculate the series resistor, I would need to know the resistance of the load, correct? I'm not sure how to find this - do I measure resistance across the voltage sense wire and ground while the display is running? Step-down converter: I have one of these (appears to use an LT3800). This has a constant output and is large, so not ideal. But I would also worry that it would draw too much current? I've tried to measure the current it draws with no output, but I doubt that is particularly useful information.
I tried to figure it out from the datasheet, but I don't really know what half the symbols mean. Resistor: Every mention of using a resistor for this purpose (that I've seen) has said not to do it, but not said why (although I assume this is because fluctuating current would change the voltage drop too much?). Also I wasn't sure how to select the resistor value for a desired voltage drop. Is the V in V=IR meant to be the drop, or the voltage of the circuit? Voltage divider: From what I've read, this doesn't work once you apply a load? Other ideas: Only just found these so haven't looked into them: using high power LEDs in series, or a zener + transistor circuit. My main reasons for asking instead of just trying these things are: Not wanting to exceed the mA level that I have witnessed on the line, because I don't want to break anything. Not knowing whether my lack of understanding might lead to something breaking. Need something reliable, i.e. that won't fail halfway up a hill. Sorry if I've missed anything obvious - I feel like I could have researched some of these things more, but I did try. And in fairness they are bloody confusing if you have no real prior knowledge! <Q> You are able to use a fixed voltage drop of about 11 Vdc at about 120 mA. <S> This is fairly easy. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The transistor is a Darlington device in a TO-220 package and has a reasonable gain of greater than 1000. <S> The Vbe drop is about 1.2V. <S> Choose the appropriate Zener diode for the desired voltage drop. <S> The total power dissipation is about (11V) * (0.12A), or about 1.3 Watts. <S> You will need a small heatsink to keep the temperature rise reasonable. <S> A small heatsink from an old computer motherboard or CPU is a good choice. <S> Note that the transistor tab is connected directly to the incoming supply voltage. <S> Don't let it touch Ground. <S> Note that there is NO current limit and no other protection.
<S> It's up to you to keep bad things from happening. <S> Do NOT allow the output to short to Ground. <S> [Edit] From the comments: 1) <S> First thought was to simply use a 5W Zener diode. <S> But those can be hard to come by these days. <S> - it's easy to get increased surface area (small heatsink) and thus keep the transistor cool. <S> 2) I'm simply mentioning that there isn't any current limit in this solution. <S> If there isn't any chance that you will "Oops" and touch the output to ground, then don't worry about it. <S> We can also substitute an LM317HV regulator for the transistor - this does have significant over-current and thermal protection. <S> But the transistor is less expensive and may be easier to get. <S> Mention in your comments if you want to explore using an LM317HV. <S> Do note that any of that transistor family will work: TIP120, TIP121, TIP122. <S> The transistor is dropping only about 11 Vdc. <A> Probably the simplest solution would be a zener diode in series with the load, chosen to drop just the amount of voltage you want dropped (not the voltage you want to get). <S> No resistor is required. <S> In your case, a zener of around 11V would do. <S> The zener will need an appropriate wattage rating, as it will be dissipating up to about 1.3W. simulate this circuit – <S> Schematic created using CircuitLab <A> We can design the heat-removal from the Zener. <S> Shall we do so? <S> We'll assume the leads of the Zener are copper, and are 1mm square. <S> Yes, they likely are round, but I'll let you insert a square-to-round correction factor. <S> Copper, in the default thickness of PCB foil, which at 1 ounce/square foot is 1.4 mils or 35 microns, has a thermal resistance of 70 degrees Centigrade per watt per square of foil. <S> This assumes the heat enters one of the 4 edges, flows laterally through the foil, and exits the opposite edge. <S> Thus if we place 30 squares end-to-end, the Rthermal will be 30*70 = 2,100 degrees Centigrade per watt.
<S> We want to avoid that much temperature rise. <S> Let's design for a 20 degree C rise. <S> And because we don't know how the Zener silicon die is attached to the 2 leads, we'll design this heat removal to be used on EACH lead. <S> If lucky, you'll end up with 20/2 = 10 degrees C and the Zener <S> should be very reliable. <S> How to do this design of heat removal? <S> We are going to think about a 1mm^3 piece of copper. <S> We've assumed the leads are 1,000 microns by 1,000 microns. <S> Which is about 30 layers of PCB foil. <S> With a square being 1mm by 1mm. <S> The thermal resistance of each 1mm piece of the leads is 70/35 or 2 (TWO) degrees Centigrade per watt. <S> [yes, I rounded 30 up to 35. <S> It's my math, and we should not carry along more precision, such as 2.2317 degrees C, than we deserve.] <S> The leads are our best way to remove heat, not air cooling, and not tiny pieces of foil soldered to short leads. <S> Again, the leads are the best way to remove heat, but ultimately we have to dump heat to AIR or to the PCB (air) or to metal regions of the chassis to move heat to the outside of the case. <S> Remember the leads are 2 degrees C per watt, whereas the foil is 70 degrees C per watt. <S> simulate this circuit – <S> Schematic created using CircuitLab
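The lead thermal-resistance estimate above can be checked numerically; the lead length and dissipation below are illustrative assumptions, not values from the answer's schematic:

```python
# Back-of-envelope check of the lead thermal-resistance figures above.
r_square_foil = 70.0   # degrees C per watt per square of 35 um (1 oz) copper foil
foil_thickness_um = 35.0
lead_side_um = 1000.0  # assumed 1 mm x 1 mm square lead

layers = lead_side_um / foil_thickness_um          # ~28.6 foil thicknesses stacked
r_per_mm = r_square_foil / layers                  # degrees C per watt for each 1 mm of lead

p_zener = 1.3          # W, roughly 11 V x 0.12 A as in the answer
lead_length_mm = 5.0   # assumed lead length to the board
rise = p_zener * (r_per_mm * lead_length_mm) / 2   # heat assumed split between the two leads

print(f"lead: {r_per_mm:.1f} degC/W per mm; est. rise over {lead_length_mm:.0f} mm leads: {rise:.0f} degC")
```

This lands close to the answer's "10 degrees C per lead" estimate, which is the point of the exercise.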
Next easiest is to use a smaller Zener diode coupled with a buffer transistor - the Zener handles only a few milliwatts and the bulk of the heat comes from the transistor
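The fixed-drop arithmetic from the answers above (roughly an 11 V drop at up to 120 mA) can be sketched in a few lines; the input and target voltages are assumptions loosely taken from the question, not measured values:

```python
# Rough sizing for the Zener + Darlington drop circuit described above.
v_in = 48.0          # battery voltage (V), assumed nominal
v_out_target = 37.0  # what the controller's sense line should see (V), assumed
i_load = 0.120       # worst-case sense-line current (A), from the question
v_be = 1.2           # Darlington base-emitter drop (V), typical for TIP120-class parts

v_drop = v_in - v_out_target       # total drop needed
v_zener = v_drop - v_be            # the Zener sets the rest of the drop
p_transistor = v_drop * i_load     # heat dissipated in the pass transistor

print(f"total drop:    {v_drop:.1f} V")
print(f"Zener voltage: {v_zener:.1f} V (pick the nearest standard value)")
print(f"dissipation:   {p_transistor:.2f} W -> small heatsink advised")
```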
Is it possible to desolder the bond wires by overheating a chip? I was trying to solder some VQFN packaged chips for the first time using a hot air gun, because that's all I have. I set the temperature to roughly < 350°C and started the process. Sometimes it wouldn't work out perfectly from the beginning, so I had to hold it for a bit longer (up to 30-50 seconds) or make some corrections using a soldering iron. A lot of these chips were not working properly. Later on I figured out how to do it faster and managed to get two of them to work. Now I started to wonder if I really destroyed these chips by overheating them. I really don't think the silicon or any other material was damaged by this. But what about the bond wires? Is it possible that they desoldered/detached? <Q> But what about the bond wires? <S> Is it possible that they desoldered/detached? <S> The bond wires generally aren't soldered to the chip or to the package. <S> They're pressure welded. <S> If the bond wires are embedded in the plastic package material (I believe they will be for most QFN package types), and you heated the plastic beyond its glass transition temperature, you could well have caused the plastic to pull the bond wires apart. <S> But you'd likely see burning of the outer surface of the plastic before you did this. <A> With some caveats, it is actually possible to test whether the bond wire is connected if you remove the IC from the PCB. <S> First of all take a known good IC. <S> Then use a voltmeter in diode mode. <S> Connect the black probe to IC GND and the red probe to one of the pins you want to test. <S> On the known good IC note the forward voltage shown in diode mode. <S> You may also want to measure in ohm mode to see what it says. <S> Now, test with the exact same setup on the suspect IC. <S> If you get a substantially different result, the IC is probably bad. <S> If you get an open-circuit reading on the pin, then the bond wire could actually be broken. <S> Confirm in ohm mode.
<S> This test is not perfect. <S> But by and large if the wire is not broken, you will get some kind of reading with the ohm meter either in ohm mode or diode mode. <S> For sure if there is any conductivity to GND, then the bond wire is likely not broken. <A> Now I started to wonder if I really destroyed these chips by overheating them. <S> I really don't think the silicon or any other material was damaged by this. <S> But what about the bond wires? <S> Is it possible that they desoldered/detached? <S> No, <S> but if you didn't follow the reflow profile, you could have damaged the chip. <S> There are no guarantees if the reflow profile is not followed. <S> Typically this is not the case; I've soldered VQFNs with hot air and not had issues. <S> If you are having issues with chip death during reflow, then make sure you do these things: Follow proper ESD procedure <S> Follow the reflow profile <S> If you do the above things, then your chips are guaranteed to work. <S> For the people that I've consulted with who were having problems, they were usually not following MSL or ESD procedures. <S> Once those were implemented, the problems went away.
Make sure chips are not exposed to moisture beyond their Moisture Sensitivity Level (MSL) rating before reflow (moisture can create steam and mechanically destroy chips, especially MEMS and optical parts)
Can a cheap transformer be used for twice the rated voltage? Can a step-up transformer designed to step up from 110v to 220v be used to step up from 230v to 460v? (I'm thinking one of those cheap Chinese auto transformers from ebay, in the 1kva-2kva range) I expect it will tolerate a small overvoltage, such as 10% or 20%, but could it do double (or more), or will it overheat pretty much instantly? The application is to step up single phase to input into a motor VFD to get the most out of it and drive a motor which can't be re-wired from Y to delta to run at 230v. I'm aware that this will only work with a single phase input VFD, or a three phase VFD de-rated appropriately. <Q> Only for about half a mains cycle. <S> i.e. <S> NO. <S> Even running a 60 Hz mains transformer on 50 Hz causes it to run hot (ask me how I know) due to increased magnetising current. <S> Power transformers are designed to use the core iron well (except in very special cases) and the magnetising current is arranged to flux the core to the point on the BH curve where the core is starting to saturate and go into a non-linear mode where current increases faster than the applied voltage or the flux increase. <S> Doubling the voltage will drive any normal transformer deep deep deep into saturation, massively increase the current with no positive effects whatsoever, and destroy the transformer almost instantly. <A> You will cause the transformer core to saturate and <S> the windings will overheat and the transformer will be destroyed if you try this. <S> No, it will not work. <S> Not with a cheap transformer, not with a high quality transformer. <S> You can go down in voltage (from rated), but not up, certainly not 2:1 (for a given mains frequency). <A> I tried that trick in years past. <S> One worked, one arced.
Answer, don't do it.
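A rough sketch of why the "deep saturation" claim holds: peak flux density follows the transformer EMF equation and scales directly with applied voltage at a fixed frequency. The turn count and core area below are made-up illustrative numbers, not from any real transformer datasheet:

```python
# Why doubling the voltage saturates the core: peak flux density scales as V/f.
def peak_flux_density(v_rms, f, turns, core_area_m2):
    """Transformer EMF equation: B_peak = V / (4.44 * f * N * A)."""
    return v_rms / (4.44 * f * turns * core_area_m2)

# Illustrative design point: a core fluxed near its saturation limit at rated voltage.
b_rated = peak_flux_density(110, 60, 200, 1.5e-3)
b_overdriven = peak_flux_density(230, 60, 200, 1.5e-3)

print(f"rated:   {b_rated:.2f} T (already near typical iron saturation)")
print(f"at 230V: {b_overdriven:.2f} T ({b_overdriven/b_rated:.1f}x -> deep saturation)")
```

Whatever the actual numbers, the ratio is fixed at 230/110, which is why no amount of transformer quality helps.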
How should I understand FPGA architecture? I've been given the task to make a 2-bit adder by programming an FPGA. The FPGA is seen below: However, I don't even know how to begin this task, because I don't understand what I am looking at. What are all those green lines supposed to do, and what about those green and red shapes? I hope someone can clarify this for me, since I really want to understand it. PS: I don't know if it is essential in understanding what's going on, but this figure was included in the problem sheet as well: <Q> The green boxes are IO pins, the blue lines are wires, the red boxes are configuration bits, and the grey boxes are logic blocks. <S> The red boxes can supply a constant logic 0 or logic 1 to whatever they're connected to. <S> Each logic block implements a 3-input, 1-output look-up table ( <S> the combination of the logic levels of the three inputs determines which of the eight configuration bits is selected) and has a bypassable flip-flop. <S> Your post also shows the truth table that the LUT implements, indicating which configuration bit is selected for each combination of s0, s1, and s2. <S> For example, the red boxes at the intersections of the blue wires are connected to pass gates between the wires. <S> Setting one of those to 1 will connect the horizontal and vertical wires together, setting it to 0 leaves the wires disconnected. <S> Looks like they want you to add {a1, a0}, {b1, b0}, and ci together. <S> Here's an example of how you can implement a 3-input OR gate: <S> All blank boxes are assumed to be logic 0. <S> This takes the 3 inputs a0, b0 and ci, computes the logical OR, and outputs the result on a free pin. <S> The main things to note are how the configuration bits control the pass gates to connect the three input signals to the three inputs on the logic block and the output to a free output pin, and how the logic block implements the OR functionality - 0 when all inputs are 0, otherwise 1, with the flip-flop bypassed.
<A> You posted your own explanation. <S> Take a closer look at your own image: <S> The red box is meant as a label box for you to write into with a value or signal, and represents the signal that controls the switch that connects a horizontal wire with a vertical wire (the green lines). <S> The horizontal wires and vertical wires are not connected at the junction when they cross unless the switch (a transistor controlled by the value in the red box) connects them. <A> The green lines are wires, the red boxes are connections; you can connect a green wire to a block with a switch. <S> The switch is in the red block and it can connect two wires together if enabled. <S> This is how many modern FPGAs work. <S> But instead of having to do this by hand, a hardware synthesizer figures it out for you. <S> Heck, by the time you finish this assignment, you could write your own basic hardware synthesizer!
What you need to do is write a 1 or a 0 in each red box so that the input signals in the green boxes at the top get sent through the logic blocks, which you'll need to configure to implement the necessary logic to perform the operation.
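The logic-block behaviour described above (three inputs selecting one of eight configuration bits) can be modelled in a few lines; the configuration pattern below follows the 3-input OR example in the answer:

```python
# Minimal model of one 3-input LUT logic block from the figure: the three
# inputs form an index that selects one of eight configuration bits.
def lut3(config_bits, s2, s1, s0):
    """config_bits: list of 8 ints (0/1); inputs select index s2 s1 s0."""
    index = (s2 << 2) | (s1 << 1) | s0
    return config_bits[index]

# OR-gate configuration: output 0 only when all inputs are 0, otherwise 1.
or_config = [0, 1, 1, 1, 1, 1, 1, 1]

for a in (0, 1):
    for b in (0, 1):
        for c in (0, 1):
            assert lut3(or_config, a, b, c) == (a | b | c)
print("LUT configured as 3-input OR: all 8 input combinations match")
```

Building the 2-bit adder is then a matter of choosing config patterns for sum and carry LUTs and routing them with the pass-gate bits, by hand.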
3 phase Motor connection, 230V delta or 400V star? I have a motor with specifications: star 400V, delta 230V, and the motor is of course 3 phase. Now the question is, how can I connect the motor to delta connection if I have only 400V line to line voltage? Where can I get 230V line to line? Will I damage the motor in delta connection? Is the delta configuration only meant to be used with a VFD in that case? <Q> Now the question is, how can I connect the motor to delta connection if I have only 400V line to line voltage? <S> Wire <S> the motor in star and connect it to your 400 V phase-to-phase supply. <S> Where can I get 230 V line to line? <S> You can't. <S> Will I damage the motor in delta connection? <S> Yes. <S> You would be applying 400 V to a winding rated for 230 V. <S> Is the delta configuration only meant to be used with a VFD in that case? <S> No. <S> It is meant for a 230 V phase-to-phase supply. <A> Now the question is, how can I connect the motor to delta connection if I have only 400V line to line voltage? <S> If you have 400 V, use the star connection. <S> There is no reason to use the delta connection, and the motor will draw too much current and overheat if you apply 400 volts to a connection designed for 230 V. <S> Where can I get 230V line to line? <S> There are places in the world where 230 V 3-phase is available and not terribly uncommon, but if you don't have it and have 400 V, there is no reason to find it. <S> Is the delta configuration only meant to be used with a VFD in that case? <S> The delta configuration is for people that have 230 V 3-phase. <S> However you could use it with a VFD if you want to operate above the rated frequency. <S> You could probably go 25% above rated frequency and voltage, but the motor bearings and rotor balance are probably not adequate for any speed higher than that. <A> If the motor is designed to run in star on a 380V 3-phase power supply, then it cannot be connected in delta on the 'same' supply.
<S> This is similar to applying 380 volts to 220 V windings, so clearly the motor would fail. <S> The solution is either to get a 3-phase step-down transformer to get 220 V 3-phase voltage, and you need to calculate the KVA rating of the transformer based on the load. <S> OR get an inverter <S> Hope <S> the answer is useful and clear
Or get an inverter: simply provide a 220V single phase (line and neutral of the 380V supply) to it and get 220V 3-phase.
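The star-vs-delta winding voltages above follow from the √3 relationship between line and phase voltage; a quick check of the numbers:

```python
# Per-winding voltage for each connection on a 400 V line-to-line supply.
import math

v_line = 400.0
v_winding_star = v_line / math.sqrt(3)   # star: each winding sees the phase voltage
v_winding_delta = v_line                 # delta: each winding sees full line voltage

print(f"star:  {v_winding_star:.0f} V per winding  (matches the 230 V rating)")
print(f"delta: {v_winding_delta:.0f} V per winding (far above the 230 V rating -> overheats)")
```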
How to connect an LED to an output device to show the output state without any voltage drop, so that it can be sensed by an MCU? I am using an NE555 timer to get output from one of my output devices. The basic function of the NE555 timer here is to toggle the output of my output device, which was successful, but I need to attach an LED to the output of the NE555 to show the output state. When I directly connected the LED in parallel to the OUTPUT pin of the NE555 and measured the voltage across the OUTPUT pin and GND, the voltage drops to 3.2 V, but I need it to be 5 volts so that I can sense the output of the NE555 using an MCU. Can anybody let me know how to connect an LED to the output of the NE555 (source) without any voltage drop? Regards, Mr. B <Q> From the NE555 datasheet: we can see that with a 5 V supply we can expect around 3.3 V at the output when loaded with 100 mA. <S> Your LED of course consumes less current but that will not increase the voltage by much. <S> So the behavior you see is to be expected. <S> I expect that even without the LED the voltage at the output will not reach 5 V! <S> That's a consequence of the design of the output stage in the NE555 chip. <S> You might not actually need a proper "5 V" logic-one signal, <S> when a uC is running on a 5 V supply the actual decision point is at 5 V / 2 = <S> 2.5 V, so 3 V might be enough. <S> That isn't a robust solution though, I would only rely on this for a prototype or hobby project, not some device which will be mass produced. <S> Solution 1: <S> That might raise the voltage enough. <S> Solution 2: <S> Use the CMOS version of the NE555 <S> , it is the ICM7555; that chip can pull its output close to the 5 V supply rail <S> provided you do not draw much current from it. <S> You will need to run the LED at less than 1 mA for that; with a modern LED, that will still be enough to see it light up.
<S> If you really need more current through the LED, use an NPN transistor (BC547, 2N2222 for example) or N-channel MOSFET (2N7000 for example) to switch on/off the LED. <S> Solution 3: <S> Use a transistor to level shift the voltage <S> , that will invert the signal though. <A> Remember that the "discharge" pin (pin 7) is essentially a copy of the output pin, except that it is open-collector — it can only sink current, not source it. <S> But you could use it to control your LED without affecting the rest of your circuit: <S> simulate this circuit – <S> Schematic created using CircuitLab <A> For a quick and simple solution, (if you don't mind an inversion of the LED function), you could just add a pull up resistor to +5V then connect the LED between that resistor and the 555's output pin. <S> For a more modest LED current use a resistor of 270 ohms or more.
Remove the LED (use a transistor to switch the LED if you still need the LED) and add a pull up resistor (try 1 kohm) to the NE555's output.
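A quick sizing sketch for the suggested pull-up / LED series resistor; the LED forward voltage and target current below are assumed typical values for a modern indicator LED, not datasheet figures:

```python
# Sizing the series resistor for the pull-up + LED arrangement suggested above.
v_supply = 5.0
v_f_led = 2.0   # assumed LED forward voltage (V)
i_led = 0.010   # assumed 10 mA target current (A)

r_series = (v_supply - v_f_led) / i_led
print(f"series resistor: {r_series:.0f} ohms (use the nearest standard value)")
```

With a more modest current target (a few mA), higher resistor values such as the 270 ohms-or-more mentioned in the answer work fine with modern LEDs.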
Why does an optical drive use laser light? Nowadays, lasers are cheap, but back in the day when CDs were first introduced, they were expensive. What is the reason for using a laser instead of a normal light source in optical drives? <Q> This may be a boring answer for electrical engineers, but it is all about the optics. <S> The optical system for a CD looks something like this. <S> source <S> As can be seen from the figure, the light traveling to the disc needs to go one way and the reflected light needs to be directed towards the detector. <S> This is achieved with the polarizing prism, which forms an optical isolator together with the quarter wave plate. <S> Here is a close up of this part of the optical system. <S> source <S> Because the polarization of the reflected beam has been altered, the light undergoes total internal reflection and all the power is directed towards the detector. <S> This has the added benefit of the reflection not interfering with the transmitted beam, transmission-line style. <S> It should be obvious that this strategy only works with light having a very narrow bandwidth. <S> If you use a different frequency, the quarter wave plate will no longer be a quarter wave plate, due to the different wavelength. <S> Hence using a laser is the obvious choice. <S> The other reason you need to use a laser is for the detection of the digital data itself. <S> The reflected light destructively interferes near the edges. <S> source <S> The distance from a pit to a land is a quarter wavelength. <S> Hence the total extra distance travelled by the laser beam is half a wavelength. <S> As you transition from a pit to a land or vice versa, the focused laser beam will illuminate both the pit and the land. <S> Since the path difference is half a wavelength when the beam reflects from both a land and a pit, the reflected waves destructively interfere with each other. <S> If the light reflects from only a land or a pit, there is no path difference. <S> P.S.
<S> I continue to claim that optics is electrical engineering, since light is, you know, an electromagnetic wave. <A> That means either a laser, or a simple lamp and an awful lot of optics. <S> Given that the thing needs to travel back and forth across the disc, and fit into a CD player, the laser is the simpler option. <A> TL;DR: The shortest answer to the "why laser" question is that a laser beam can be focused into a tight spot without significant loss of power. <S> The answer by @user110971 is very detailed (and interesting). <S> I thought I'd try giving a shorter and somewhat intuitive explanation. <S> A laser disk pattern can be imagined as a series of black dots and dashes on a white background (similar to a bar code). <S> We want to read them optically with a scanning device (as opposed to imaging, as done with smartphone barcode reading apps). <S> Scanning itself is provided by the rotational movement of the disk, <S> so all we need <S> is to illuminate a static point and detect reflected light. <S> To maximise contrast, the illuminated area should be as small as one black dot: you hit a black dot, you get no reflection; you hit the space in between and get reflected light back. <S> To store a lot of bits on the disk, we want the black points to be small. <S> So the illuminated area should be small as well. <S> Optical physics theory shows that the tightest focus can be achieved using the so-called Gaussian beams. <S> They can be obtained from conventional light sources (e.g. an incandescent lamp), but it would require severe spectral, spatial and polarisation filtering, throwing away most of the light power. <S> Not good for energy efficiency, thermal management and size. <S> Alternatively, one can use a laser, which happens to produce something very close to a Gaussian beam right out of the box. <S> BTW I wouldn't call lasers cheap. <S> Mass-produced consumer-grade laser diodes are. <S> But lasers in general are complex and expensive.
Because it needs to be bright enough to trigger a photodiode, and with a narrow enough beam to read each individual dot on a CD.
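The quarter-wave pit-depth argument above can be put in numbers. The 780 nm figure is the commonly quoted CD laser wavelength; this sketch ignores the shorter effective wavelength inside the disc's polycarbonate:

```python
# A pit depth of lambda/4 gives a round-trip path difference of lambda/2,
# i.e. a phase shift of pi -> destructive interference at a pit/land edge.
import math

wavelength_nm = 780.0
pit_depth_nm = wavelength_nm / 4
path_difference_nm = 2 * pit_depth_nm   # the light travels down into the pit and back
phase_shift = 2 * math.pi * path_difference_nm / wavelength_nm

print(f"pit depth: {pit_depth_nm:.0f} nm, phase shift: {phase_shift/math.pi:.1f} pi")
```

This only works because the laser's wavelength is narrow and known; a broadband source would smear the interference out.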
How much energy is stored in a AA battery when its voltage has dropped to 1.2V? The general question is how useful it is to use weak batteries with a boost converter - say, boosting 3 AA of 1V each to 5V? According to this image it seems that only 5% of the energy is left when the voltage has dropped to 1.1V, but about 50% when it's 1.2V and dead for most uses. Still, this usefulness of boosting depends on its efficiency. Is there a rule of thumb for when boosting is advisable, based on battery type, the remaining voltage and the required current? <Q> It is hard to answer your exact question, since you want information on a scenario that has no commercial relevance, so companies are not going to test almost drained batteries just to know how much energy can be harvested from them. <S> Those discharge curves they publish are used to design battery-powered products: the curves allow one to estimate how long a completely new battery can make the product work before the "low-batt" signal kicks in. <S> Although your question has no commercial/industrial relevance, it could have some environmental relevance: if your intent is to reuse almost dead batteries in lower power devices, yours is a good question, but a difficult one to answer. <S> There is no easy answer. <S> An alkaline drained battery cannot be characterized only by its voltage and internal resistance. <S> Its past discharge history can have some impact on the residual energy content, and so do its exact chemistry and internal construction, which depend on the manufacturer. <S> For what it is worth, if you have some environmentalist concerns, you could do like me. <S> I try to drain the most out of old batteries: when the new ones are drained I measure their voltage and if it is above 1.1V (single cells) <S> I keep them for low power devices, such as kitchen timers/clocks, wireless computer mice or the like.
<S> When they are too weak to work for these low power devices, I reuse them in a Joule thief circuit powering an LED as a night light. <S> Joule thieves are quite inefficient step-up converters, but they can be powered using cells with ~0.7V terminal voltage, and once powered they can drain the battery until it reaches something like 0.2V-0.3V <S> (YMMV, it depends on the actual circuit). <A> When the battery goes down, not only does its voltage decrease, its internal resistance increases as well, sometimes dramatically. <S> So the battery may show, say, 1.1V when measured with a multimeter (when almost no current is drawn from the battery), but can quickly collapse to less than 1.0V under a 200 mA load: <S> (Source: Battery University ) <A> Based strictly on the graph you provided: The curve intersects 1.2V at about 16 hours, by my reading, and hits zero at about 25.5 hours. <S> That's about 63% of the lifetime, or 63% of the coulombs. <S> Without trying to calculate the area under the curve exactly, we can observe that it's monotone decreasing, and therefore the joules available before 1.2V are more than 63% of the total. <S> My eyeball says it's probably about 75%. <S> Call it nitpicking, but this is a significant difference from "about 50%". <S> But as others have said, the real problem with using a boost converter on a battery, if your load needs any significant power, isn't necessarily efficiency. <S> Your graph is for a constant-current discharge, but the lower the battery voltage gets, the more current the booster will need to draw from it to provide a constant power. <S> The more current the booster draws, the further the battery voltage drops. <S> For any given power level, there is a point beyond which the battery won't deliver that much power at any voltage (consider the maximum power transfer theorem with the battery's internal resistance as the source resistance), and even a 100% efficient boost converter will be no help.
<S> And being imprecise, that point is really not very far past the 1.2V point, so your ability to work miracles is strictly limited.
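The "area under the curve" estimate above can be reproduced with a simple trapezoidal integration. The sample points below are made-up values loosely shaped like the described graph (1.2 V at ~16 h, zero at ~25.5 h), not data read from any datasheet:

```python
# For a constant-current discharge, energy is proportional to the
# integral of voltage over time, so a voltage-hours ratio gives the
# energy fraction directly.
hours = [0,    4,    8,    12,   16,   20,   23,   25.5]
volts = [1.55, 1.40, 1.32, 1.26, 1.20, 1.10, 0.95, 0.0]

def trapz(x, y):
    """Trapezoidal rule over sample points (x, y)."""
    return sum((x[i+1] - x[i]) * (y[i] + y[i+1]) / 2 for i in range(len(x) - 1))

total_vh = trapz(hours, volts)            # proportional to total joules
to_1v2_vh = trapz(hours[:5], volts[:5])   # delivered before hitting 1.2 V at 16 h

print(f"coulomb fraction before 1.2 V: {16/25.5:.0%}")
print(f"energy fraction before 1.2 V:  {to_1v2_vh/total_vh:.0%}")
```

With these assumed points the energy fraction lands around 70%, consistent with the answer's "more than 63%, probably about 75%" eyeball estimate.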
Generally using a nearly-flat AA/AAA battery with a boost converter is not going to give you much, especially if the thing you're powering requires substantial power.
Is the electric current delayed? simulate this circuit – Schematic created using CircuitLab -Suppose that the distance between "D1" and "D2" is too large. -Does "D2" give a delayed response compared to "D1"? <Q> In the case shown, with both diodes near the power supply but with a very long wire between them, there will be no delay. <S> However, in the case shown below, there will be a delay: <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The reason there's a delay in this case and not in the other is related to the fact that information must travel at the speed of light or slower, so <S> the information that the power supply has been turned on has to travel through the Extremely Long Cable™ at no faster than the speed of light. <S> For further research, I recommend websearching the term "transmission line". <A> (and in the ideal world wires are made out of superconductors) <S> For a real world diode there is a slight amount of parasitic capacitance and inductance that might cause a slight difference if you're looking at very small timescales; for most applications this would be negligible. <S> If you wanted to analyze this for a distance that is 'too far', then model it as a transmission line (a long line of RC or RLC filters) or insert the calculated resistance of the wire or trace between the diodes <A> Yes, D1 will emit light a very short time earlier than D2. <S> The current flowing through D1 will have to charge the very long cable's capacitance before the same current will flow through both LEDs.
No, for an ideal diode (or LED) there is no capacitance or inductance and the current through both is equal.
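For the "too far" case, the delay is simply the cable length divided by the propagation speed on the line; the velocity factor below is an assumed typical value (real cables run at roughly 0.6-0.8 of c):

```python
# Propagation delay down a long cable, per the transmission-line answer.
c = 299_792_458.0         # speed of light in vacuum, m/s
velocity_factor = 0.66    # assumed typical for coax/twisted pair
length_m = 300_000_000.0  # an "Extremely Long Cable" for illustration

delay_s = length_m / (c * velocity_factor)
print(f"delay before D2 responds: {delay_s:.2f} s")
```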
How to find an MCU with specific characteristics? I am trying to find an MCU in which there are some specific requirements for I/O peripherals (such as UARTs, SPI, I2C, etc). I also want to be able to search by the footprint size. Should I go to each popular MCU website and search from their selection list, or is there any other possible way to do that? <Q> The strategy would indeed be to go to the various manufacturer web sites and use their parametric search engines to narrow down the selection to 4 UARTs. <S> Each manufacturer will have a slightly different search facility so you have to adapt to that. <S> One benefit here is that the number of main MCU manufacturers has been reduced in the recent decade due to companies either joining forces or some exiting the market. <S> For small footprints it can often be useful to consider the SiLabs 1 offerings. <S> The Leopard Gecko series of ARM core MCUs offer five UARTs in QFN64 or QFP64 packages. <S> 1 <S> No affiliation with the company, but a user of their products. <A> There are a number of websites that do what you want; the ones I use most often (no affiliation) are Digi-Key and Mouser . <S> Both of these offer parametric search by both functions and footprint. <S> You can also use individual manufacturers' sites, which will likely have more parts available, but distributors like those linked above sell parts from many different manufacturers, and allow you to directly compare parts from different manufacturers as well. <A> I don't think there is such a website, at least one supported by major brands, since it would show both their advantages and their disadvantages compared to other MCUs. <S> One MCU that has 4 UARTs is the ATmega2560, known from the Arduino Mega 2560. <S> (see also the comment of Marcus Miller below). <S> You have to install it though (there is also an Android app which can do the same, with the same name).
ST.com also has many MCUs having (at least) 4 UARTs, and it has an easy search tool which can be found here: ST MCU Finder .
Heat-shrink tubing available as a roll like adhesive tape? Many people who have used heat-shrink tubing have probably experienced this: you forgot to insert the heat-shrink tubing before soldering, and now it's too late! Indeed, once the soldering is done: you can't pass a thin tube because of the connectors on both ends (example with a Macbook charger connector); you can only pass a large-diameter tube (because of the large connectors), but then the shrinking factor doesn't allow the heat-shrink tube to "fit" on the wire! Question: is there something available as a roll like adhesive/electrical tape (so that you can wrap some around the middle of a cable without having to "pass" it around the large connectors on both ends of the cable, see for example this video ) that would shrink with heat and be like traditional heat-shrink tubing at the end, i.e. a bit solidified/glued by heat? TL;DR: is there a mix between heat-shrink tubing and adhesive tape? <Q> If you are after rigidity and toughness then "silicone self adhesive tape", "self amalgamating tape", "self-fusing tape" (or some other name along those lines) will become hard. <S> It looks like a roll of silicone tape backed with a transparent separate plastic to stop it from sticking to itself. <S> It somewhat resembles something halfway between electrical tape and the white Teflon thread sealing tape. <S> You should just be able to find it in a hardware store. <S> Do a test run on scrap and let it sit a day before you do it on the real piece <S> so you know how it behaves while pliable and after it cures, and how to best work with it. <S> You do not want to have to go in and remove it if you mess up. <S> People who complain it won't stick flat on a surface are not using it as intended. <S> It is meant to be STRETCHED and wrapped. <S> I don't know what makes it hard <S> but it does, which surprised me too. <S> Try it. <S> I would rather remove heatshrink than this stuff.
<S> I have never been successful in excising it, whereas heatshrink is dead easy, hence my warning of a practice run. <S> Also, you realize your problem can be fixed using heatshrink with higher shrink ratios, right? <A> "Heat shrink tape" absolutely does exist. <A> Regarding your question about shrink ratios, the highest I've seen is 8:1 I think. <S> Most "high ratio" shrink tubes achieve that high number by being lined with hot melt glue, which gets squeezed down to a much smaller diameter than the actual tube's ID. <S> It makes for a waterproof and very robust seal that is seriously difficult to remove. <A> I've found Sugru to be useful in cases like this. <S> It's a silicone rubber that you can mold like putty. <S> Once formed, it will cure and develop a texture similar to rubber. <S> Mold <S> some into a flat strip, wrap it tightly around the joint where you'd apply heat shrink, and let it cure. <S> You can use different thicknesses to adjust the stiffness, texture, etc.
Shrinking with heat makes no sense if it's not a closed shape, since it would pull apart as the heat activates both the shrinking and the adhesive. Just do a search using that phrase, and you'll find many vendors. It might be the mass fusing together to be thicker.
Can commercial 9v batteries deliver 2 amps? Are these commonly used, cheapish batteries able to deliver 2 Amps of current continuously and safely? I know that assuming 200mAh capacity the battery will be dead in 6 minutes or so. Example batteries: https://www.hepsiburada.com/varta-power-accu-ready-2-use-9v-pil-e-200mah-56722101401-p-OFISVAR56722101401?magaza=infstore&wt_gl=cpc.6803.shop.nelk.ofis-ofis-teknolojileri&gclid=Cj0KCQjwuZDtBRDvARIsAPXFx3B_oZsxHNskuf7E82mbtejUbONWIwsSxC2bBX2XlHX82fS7oRs51XwaArz8EALw_wcB#reviews https://www.hepsiburada.com/varta-2022-superlife-9v-pil-shrink-p-HBV000002B6KL?magaza=pilstore&wt_gl=cpc.6803.shop.nelk.ofis-ofis-teknolojileri&gclid=Cj0KCQjwuZDtBRDvARIsAPXFx3BOo0VFOX4w69U08zstTlcjGltLX2qWlco37IklfYpOs4wDDT8L-QcaArdfEALw_wcB <Q> No. <S> A common 9V PP3 battery, whether alkaline or NiMH or zinc-carbon, cannot provide anywhere near this much current. <S> Get a different power supply. <A> I found some data here: http://www.learningaboutelectronics.com/Articles/Battery-internal-resistance <S> The maximum output current is the short-circuit current. <S> For example, for a 9-V alkaline, this maximum output current would be (9 V)/(2 Ohms) = 4.5 A. <S> But the actual delivered current will depend, of course, on your load. <S> You could force a 9-V alkaline to deliver 2A, but because of the internal resistance you would have a 4-volt drop across the internal resistance, and only 5 volts would be supplied to your load. <S> Edit: this is a theoretical answer. <S> In practice, the battery would get very hot and may even explode. <S> It would probably damage the battery and, more importantly, it could harm you. <A> 9-volt snap-terminal batteries are notoriously wimpy, both for the power they can supply and for the energy they can store for the volume. <S> I presume you want 2A at 9V, meaning that you really want 18W. 
<S> You probably aren't going to get that from an equivalent volume of dry-cell batteries at all <S> (1A is a lot to ask from a D cell). <S> If you can be flexible in the form factor and volume, I'd suggest you try to find some NiMH AA cells that'll deliver 10A, use a pair of them and step the voltage up to 9V. Or use a single- or two-cell LiPo pack, accept that charging and safety will be a concern, and build a circuit around that. <A> The only widely available batteries in the PP3 form factor that would come close to being able to deliver 2 A would be the primary lithium cells such as Energizer's L522 or UltraLife's U9VLJP. <S> These are only rated for 1 A max continuous discharge and have (different) internal overcurrent protection devices, but for short periods or maybe continuously at lower ambient temperatures you could probably extract 2 A. <S> The terminal voltage would drop to around 5 V. <A> For a few fractions of a second, probably. <S> But it depends a lot on battery state, and probably brand and quality of battery. <S> Basically it comes down to the internal resistance under a sustained near short circuit, which I would not like to make any bets on.
You ought to be able to find a suitable battery, not much bigger than a 9V, that will deliver the power you need.
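The internal-resistance arithmetic quoted in the answer above can be checked with a few lines (the 2 Ohm figure for a fresh 9 V alkaline is the value the answer takes from the linked article):

```python
# Terminal voltage left for the load when forcing a current through a
# battery's internal resistance (figures quoted in the answer above).
V_BATT = 9.0   # open-circuit voltage, volts
R_INT = 2.0    # internal resistance of a 9 V alkaline, ohms

def loaded_voltage(i_load):
    """Terminal voltage when the battery sources i_load amps."""
    return V_BATT - i_load * R_INT

short_circuit_current = V_BATT / R_INT   # absolute maximum current
print(short_circuit_current)             # 4.5 A
print(loaded_voltage(2.0))               # only 5.0 V left at 2 A
```

At 2 A, 4 of the 9 volts are dropped inside the battery, which is why the answer calls this theoretical: that's 8 W dissipated inside the cell.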
Can the output of one logic gate serve as input to more than one subsequent logic gate? NOT gates take one input, AND and OR gates take two inputs. But can any gate have more than one output? I've only ever seen diagrams where each logic gate has exactly one line coming out the right end. <Q> Some examples are: DS8921 <S> http://www.ti.com/product/DS8921# . <S> And AM26LS31 <S> http://www.ti.com/product/AM26LS31 <A> Yes, one output can usually drive multiple inputs. <S> The exact amount of how many inputs it can drive depends on the type of logic of the inputs (how much of a load it presents) and the output (how much load can it drive). <S> Sometimes these are stated directly in datasheets, e.g. "this output can drive X standard TTL unit loads" or "this input amounts to 0.5 standard TTL unit loads". <S> Sometimes they must be calculated from given values. <A> Yes it can. <S> However, the more gates you try to drive, the more time it takes to do so. <S> This is related to input capacitance and resistance. <A> CMOS gates have a controlled output switch resistance to each supply rail, for the Pch and Nch switches, with RdsOn dependent on Vdd. <S> In general the 74HC family @ 5V is in the 50 to 66 Ohm range, +/-50% nominal, while gate input impedance can be a million times higher, but with some pF of capacitance for traces and gate. <S> Thus the DC drive of a number of logic inputs is practically unlimited. <S> But rise time will be degraded by RdsOn * Cout = T, which is added to the small internal latency delay time. <S> This is a concern if synchronous logic is being used at high speed. <A> Yes. <S> This capability is called the fan-out of a logic gate. <S> For example, a gate with a fan-out of 8 can drive the inputs of 8 (same family) other gates. <S> In the case of CMOS logic, the fan-out is generally limited by the capacitance of the load gates' inputs. <S> In the case of TTL, the fan-out may be limited by the load gates' input current rather than by capacitance.
There are gates such as differential drivers that offer dual outputs.
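The RdsOn * Cout estimate from the answer above can be made concrete. This is only a rough RC sketch: the 50 Ohm output resistance is the 74HC @ 5 V figure quoted above, and the ~5 pF per driven input (gate plus trace) is an assumed round number, not a datasheet value:

```python
# Rough RC estimate of how fan-out slows a CMOS output's edges.
R_OUT = 50.0         # driver RdsOn, ohms (74HC @ 5 V, from the answer)
C_PER_INPUT = 5e-12  # load per driven input incl. trace, farads (assumed)

def rise_time_10_90(n_inputs):
    """10%-90% rise time of a simple RC edge: t = 2.2 * R * C."""
    return 2.2 * R_OUT * n_inputs * C_PER_INPUT

for n in (1, 8):
    print(n, rise_time_10_90(n))  # edge slows linearly with fan-out
```

With these assumed numbers, one load gives ~0.55 ns and eight loads ~4.4 ns, which is why fan-out matters mainly for high-speed synchronous logic, as the answer says.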
Are these 2 resistors in parallel? So I had a circuit to analyse and I needed to find the equivalent resistor, and then I arrived at a confusion. Are R1 and R3 in parallel? Here is the circuit. simulate this circuit – Schematic created using CircuitLab <Q> Redrawing schematics is a great way to analyze circuits, but also an exercise in why schematics are drawn in particular ways: to more clearly communicate to other engineers. <S> simulate this circuit – <S> Schematic created using CircuitLab <S> The rearrangement above should be a little more clear. <S> If you trace a path from one terminal of the battery to the other, you can hopefully see that there are two paths (the split occurs at the junctions on either side of R1). <S> Series means one-after-another current flow (like a series of events or a television serial). <S> Just as when you measure voltage, where the value depends on your reference point, components can be series or parallel depending on what you are comparing them to: <S> You could say: R2 is in series with the voltage source, or R2 forms <S> a series-parallel circuit with R1 and R3, or R1 and R3 are parallel with each other, or R1 and R3 <S> are parallel with a voltage source and some resistance R2 <A> I see a few answers already, but none provides the definition of "parallel": two two-pin components are in parallel when the voltage across them is the same. Conversely, with "series", you have: two two-pin components are in series when the current through them is the same. Bear in mind that "the same" literally means "the same" in this context. <S> If you have two resistors with two different voltage generators connected to them, and the generators provide the same voltage, then the voltage across the resistors will numerically be the same, but it won't be the same voltage. <S> The definitions above also solve the confusion if you only have two components, as highlighted in the comments to this answer. 
<S> In that case, the components are both in series and in parallel, since the voltage across them and the current through them are the same. <S> Being in series and in parallel with something else at the same time is not impossible; a very common example is a voltage generator and its load. <A> Yes, because there are two different current paths: V1 -> R2 -> R1 -> V1 and V1 -> R2 -> R3 -> V1. <A> R1 and R3 have their ends connected to the same nodes. <S> simulate this circuit – <S> Schematic created using CircuitLab Figure 1. <S> Remove R2 and V1 and it becomes very obvious that R1 is in parallel with R3. <A> Yes, R1 and R3 are in parallel. <S> Both their ends land at the same places. <S> As a mains electrical guy I'm not supposed to presume wires are zero ohms, but if I do, this becomes a fairly simple matter. <S> Conductance = 1/resistance. <S> R1 (50.5 ohms) has a conductance of 0.01980198 siemens. <S> R3 (55.83 ohms) has a conductance of 0.01791152 siemens. <S> In parallel, conductances simply add. <S> So 0.0377135 siemens. <S> Stated in resistance, R1/3 is 26.52 ohms. <S> R1/3 and R2 are in series. <S> In series, resistances simply add. <S> 26.52 + 1.54 = 28.06 ohms. <S> This shunts a 10V constant-voltage supply, so we can plug 10V into Ohm's Law and we're done.
V1, R2 and the combination (R1, R3) are in series, but R1 and R3 are in parallel. Parallel means that current flows through two or more components at the same time (divided in inverse proportion to the component resistances).
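The conductance arithmetic in the answer above can be verified directly (values as quoted there: R1 = 50.5, R3 = 55.83, R2 = 1.54, a 10 V source):

```python
# Check the "conductances add in parallel" arithmetic from the answer.
R1, R3 = 50.5, 55.83   # ohms, the two parallel resistors
R2 = 1.54              # ohms, in series with the pair
V = 10.0               # volts, constant-voltage supply

g_parallel = 1 / R1 + 1 / R3      # conductances add in parallel
r_parallel = 1 / g_parallel       # back to ohms: ~26.52
r_total = r_parallel + R2         # series resistances add: ~28.06
i_total = V / r_total             # Ohm's law for the source current

print(round(r_parallel, 2), round(r_total, 2), round(i_total, 3))
```

This reproduces the 26.52 and 28.06 Ohm figures and finishes the last step the answer leaves open: about 0.356 A drawn from the 10 V supply.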
Confusion in PID loop for the case of zero error? I am currently studying PID control and, to some extent, I have understood it, except for one main confusion. When the difference between reference input r(t) and current output value y(t) is zero, e(t) will be zero and hence u(t) will also be zero, so how will the plant act or work when its input u(t) is zero? <Q> It only implies that the output of the "P" process is zero. <S> Remember, <S> the "I" and "D" processes have memory: they depend on the past behavior of e(t). <S> u(t) is zero only if the sum of all three processes is zero. <A> There can be a couple of scenarios. <S> Consider a system \$\frac{1}{s (s+1)}\$ . <S> A PI controller is \$\frac{0.9 s+0.27}{s}\$ . <S> In steady state, after the controller has made \$y=r\$ , the error \$e\$ and its integral are both zero. <S> In this case, when \$e=0\$ then \$u=0\$ . <S> If there is a nonzero input going to this system, the output will keep on increasing. <S> For a system such as \$\frac{1}{s+1}\$ and PI controller \$\frac{1.0 s+2.0006}{s}\$ , the error goes to zero in steady state, but the integral of the error does not. <S> This gets multiplied by 2.0006 and is the control input that maintains the output at the reference value. <S> The computations below are done in Mathematica. <S> The plots below show the error signals. <S> Both go to zero. <S> However, the integral of the one on the left is also zero. <S> The integral of the one on the right is not zero, but around 0.5. <A> Take a motor control PID for example. <S> The motor (once running) will have small load perturbations that will cause it to overshoot or undershoot the zero error case, so then the system will react and cause the motor to slightly overshoot in the opposite direction. <S> If you were to zoom in on a graph of the error, it would be little zig zags across the zero error line. <S> Assuming that it's been tuned properly. <S> Also u(t) doesn't go to zero when the error is near zero. 
<S> u(t) goes to the value that makes the error near zero.
No, e(t) being zero does not imply that u(t) is also zero.
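A few lines of code make the point concrete: in a discrete PI loop the integrator holds its accumulated value, so u stays nonzero even when e has settled to zero. The first-order plant below is a hypothetical example chosen for simplicity, not one of the systems from the answer:

```python
# Minimal discrete PI loop driving a first-order lag plant y' = -y + u
# (hypothetical example).  At steady state e = 0 but u != 0: the control
# effort is held entirely by the integrator's memory.
KP, KI, DT = 0.9, 2.0, 0.001   # gains and Euler time step
r = 1.0                        # reference input
y, integral = 0.0, 0.0

for _ in range(100000):        # 100 s of simulated time, well settled
    e = r - y
    integral += e * DT
    u = KP * e + KI * integral  # u(t) stays nonzero when e(t) == 0
    y += (-y + u) * DT          # plant update (forward Euler)

print(round(e, 4), round(u, 4))  # e ~ 0, yet u ~ 1
```

At steady state the plant needs u = 1 to hold y = 1; the P term contributes nothing (e = 0), so the integrator alone supplies it, exactly as the answer describes.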
amplify/increase current from 9v battery I have a working circuit that generates a signal (a range of slow and fast tones) and sends it to a speaker. I tried to switch to a piezo driver and speaker and was successful using the MAX9788 as the driver; however, I was unable to make it loud enough. The MAX9788 requires 2.7V to 5.5V. If I supply it with 5V from a power supply and use a regular speaker, then I am able to create the desired volume, during which the power supply reads between 250mA and 850mA depending on the speed of the tone, i.e. steady and slow to very fast. My problem comes from when I supply my circuit with a 9V battery and use a linear DC-DC regulator to send 5V to the MAX9788 (I am using an L7805). I am able to hear the slower tones, but at a much lower volume, and never the faster ones. I have been able to increase the volume using a voltage follower with a Darlington pair; my current during this time has never exceeded 50mA. I believe my mistake was using a linear DC-DC regulator, and my next step is to use a buck converter to hopefully get more current to the MAX9788. TLDR - my system works if I supply the MAX9788 with a power supply but not with a 9V. My question is: will this work? Is there a better way to get the current I need to the driver using a 9V battery? <Q> If you use a buck switching regulator instead of a linear regulator like the 7805, you can get more current at 5 V than your battery puts in at 9 V. <S> But you can't get more power out than you put in. <S> So to get 850 mA at 5 V, you will need something like 500-550 mA at 9 V, which is more than it's reasonable to expect from a 9 V battery. <S> If you need to limit yourself to using a single 9 V battery, you'll have to re-design your circuit to use much lower power. <S> That probably means much quieter output. 
<A> In your case I would use a simple two-cell 18650 pack and a buck converter to keep a steady voltage of 5V. Using those cells is better, and they can deliver far more current to your circuit than a 9V battery. <S> regards <A> 8 Ohm speakers can be 4 Ohms at DC, <S> so a big cap to block DC reduces wasted power, as they show in the MAX9788 spec. <S> To generate 5.5V max, use only good batteries, e.g. <S> 3x C cells = 4.5V, or 4 x 1.2V = 4.8V NiMH, <S> or the DIY discrete design below. <S> The driver should be a push-pull FET bridge going direct to the battery. <S> A piezo speaker or something >32 Ohms would work better. <S> A fresh battery can supply >1A short-circuit, which at <9V means the source ESR is <9 Ohms. <S> So you can achieve max power transfer with an 8 Ohm speaker, but at 50% loss of energy (MPT theorem). <S> A 2Ah LiPo or Li-Ion string of 3 cells can supply 10A with a 10% drop from 11.2V, with an ESR of 0.1~0.2 <S> Ohm from the battery pack, resulting in less drop. <S> A simple CMOS Schmitt oscillator and biased FET half bridge can work. <S> A dual Nch FET bridge needs a boost cap-diode for the low-side switching, and the high-side Vboost <S> must exceed Vbat. <S> A dual-channel Pch+Nch bridge needs steering bias so that both FETs are off when the control voltage is at 50%, to prevent shoot-through (power short). <S> This is also called dead-time FET commutation control.
You could either use a big honking lantern battery, or switch to using a mains-powered supply (wall wart).
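The power-balance claim in the accepted answer (850 mA at 5 V needs roughly 500-550 mA at 9 V) is easy to check. The 90% converter efficiency below is an assumed typical figure for a small buck regulator, not a datasheet value:

```python
# Power balance for the proposed buck converter: what the 9 V battery
# must supply to deliver current on the 5 V rail.
EFF = 0.90            # assumed converter efficiency (typical, not spec'd)
V_IN, V_OUT = 9.0, 5.0

def input_current(i_out):
    """Battery current needed for i_out amps on the 5 V output."""
    return (V_OUT * i_out) / (EFF * V_IN)

print(round(input_current(0.850), 3))  # ~0.525 A from the 9 V battery
```

About 525 mA, squarely in the answer's 500-550 mA range, and far beyond what a PP3 can sustain.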
Impedance match Fishman guitar pre-amp to IRIG2HD I'm looking for some help in understanding impedance matching. I'm hoping one or more of you technically savvy electronics folks will jump in and give me a simple circuit recommendation. Specifically, I want to try to build a passive "pad" that goes between the output of a Fishman Prefix Plus built-in guitar pre-amp and the input of an iRig HD2 guitar-audio-to-usb converter. My issue is that, as it stands, I have to turn the volume control of the pre-amp as close as I can to its minimum value (like 0.01, which is iffy), and set the gain of the iRig input hard to its minimum value to get any kind of decent sound. Anything else results in lots of distortion. I'm trying to get a clean guitar sound. https://www.fishman.com/products/series/prefix/prefix-plus-t-onboard-preamp/ https://www.ikmultimedia.com/products/irighd2/index.php?p=specs I get the feeling that by using the extreme lowest setting on the pre-amp I am not in the sweet spot of the pre-amp design. I feel like it makes the sound tinny ... I'm always lacking bass when I use the device (compared to just plugging my guitar into the Yamaha Stagepas PA). It's just my noob idea that the amp was probably designed to be used in the middle, rather than at the extremes of its gain settings. So, I'm thinking about making a passive "pad" that basically consists of two resistors:

input  O------- R2 --------+----------------O output
                           |
                           R1
                           |
ground O-------------------+----------------O ground

My basic, very limited, understanding is that such a passive pad consists of a voltage divider created by resistors R1 and R2. I really have no idea what values to use, although I sort of think their ratios should be about 9:1, where the value of R2 is about 9 times greater than the value of the "shunt" resistor R1. Rather than just guess, I thought I would ask the community. 
The specs for the devices are copied from the user manuals. Here is what I see: FISHMAN Guitar Pre-amp: Nominal Input Level: -20 dBV; Input Impedance: 20 MOhm; Output Impedance: "Less than 3.5 kOhm"; Nominal Output Level: -12 dBV. IRIG HD2 audio-to-usb converter: Maximum Level: from 307 mVpp to 8.36 Vpp; Gain Control Range: 28.7 dB; Input Impedance (guess): 380 kOhm. I cannot find the actual input impedance specification for the IRIG HD2. There is a reference in the user manual that it is a "high-Z input". I am guessing it is similar to the input impedance for the IRIG2, which IS specified as 380 kOhm. So, given those specs, what should be the values of R1 and R2? Any other thoughts on what I'm trying to do here? I'd sure like to get a good sound and a feeling of control instead of a tinny sound and a pre-amp that is turned down to 0.01 .... <Q> The common approach is to use the matching T attenuator circuit. <S> The impedance is not critical, as it's a line system instead of a microphone. <S> The simple resistor network I would recommend for this: simulate this circuit – Schematic created using CircuitLab <S> This gives about 16 dB (15.5 or so) of attenuation, to get back to nominal guitar level. <S> You could use a passive DI w/pad and an XLR to 1/4 inch, but this is much less bulky. <A> What you are looking for is attenuation, not impedance matching. <S> To avoid loading down the source you want <S> the input impedance of your divider to be higher than it, and to maintain output level the attenuator's output impedance should be lower than the destination. <S> A simple voltage divider will work fine. <S> The guitar preamp has an output impedance of "Less than 3.5kOhm", so choose a value about 10 times higher for R2. <S> Make R1 1/10th of that <S> and you are good to go. <S> Or better yet, use a log-taper potentiometer of ~35k (e.g. 25k, 50k) <S> which divides by 10 at about half 'volume', and then you can easily adjust the attenuation. 
<A> Phrases like "nominal output level" are a bit ambiguous; what you really want to know is the "maximum" output level of the pre. <S> But let's assume it has about 20dB headroom above the nominal level; that means the max would be around +8dBV, which is about 2.5Vrms. <S> So you are never going to overload the input to the USB converter, which can handle 8V pk-pk, which is about 2.8Vrms. <S> (Presumably at minimum gain setting - the gain control is probably just an attenuator anyway). <S> The Fishman has output impedance less than 3k5 (probably to match with guitar amps, which usually have around 1M input impedance) and your guess of the input to the converter is 380k, which sounds reasonable. <S> I wouldn't expect less than about 47k. <S> So I don't think you should need a matching network. <S> Just connect them together, turn the Fishman to max, and use the input trim on the USB converter at minimum. <S> That should work out fine. <S> If needed, turn the volume of the Fishman down a touch, but you shouldn't need it all the way down. <S> I wonder why it isn't working. <S> Are you sure there isn't another issue? <S> A bad cable or something? <S> (Maybe the iRig people lie about their USB interface, or the Fishman nominal level <S> is a lot more than 20dB less than max. <S> If you have a scope, that would tell you.) <S> (Actually I have an acoustic with a Fishman and I <S> DI it into a Zoom recorder all the time with great results. <S> I think that this should work for you too.) <S> Perhaps a 47k log pot would be the easiest. <S> (ACW pin to the pre, wiper to the USB and CW pin to ground, of course.)
If you really do need a pad, a simple two resistor network with input resistance (i.e. the sum of the two resistors) at about 40-50K should work.
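The divider suggested in the answer (R2 about ten times the preamp's 3.5k output impedance, R1 a tenth of R2) can be sanity-checked numerically. The 380k load is the asker's guessed iRig input impedance, so the result is only as good as that guess:

```python
import math

# Attenuation of the suggested R2/R1 divider, loaded by the destination.
R2 = 35e3       # series element: ~10x the preamp's 3.5k output impedance
R1 = R2 / 10    # shunt element: 3.5k
R_LOAD = 380e3  # assumed iRig input impedance (asker's guess)

# The destination's input impedance appears in parallel with R1.
r1_loaded = (R1 * R_LOAD) / (R1 + R_LOAD)
ratio = r1_loaded / (R2 + r1_loaded)
att_db = 20 * math.log10(ratio)
print(round(att_db, 1))  # attenuation in dB (negative = loss)
```

This gives roughly -21 dB, i.e. the divide-by-ten the answer aims for, with the 380k load barely disturbing the ratio.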
how to read this relay diagram I am a little confused about how this W171DIP-7 relay works. I know 2 is pos 5V and 6 is neg 5V. It looks like the signal from 1/14 will go to 8/7 when closed. But I can't tell if 13 is tied to 14 or just overlapping. Same with 9/8/7, or is 2 going straight to 13? Any help would be great! <Q> It's a reed relay in a DIP package. <S> Each connection goes to two pins, because that makes the internal lead frame more robust. <S> Pins 1 and 14 are one of the reed contacts; <S> 6 and 9 are the negative end of the coil/diode; <S> pins 7 and 8 are the other reed contact. <A> Based on the diagram you provide, every connection on the relay is broken out to two pins. <S> Pin 1 is connected to 14, pin 2 <S> is connected to 13, pin 6 is connected to 9, pin 7 is connected to 8. <S> This gives you options on how you actually connect to the relay. <S> You can energize the relay from pins 2 and 6, or 13 and 9, or a combination of them. <S> When I search for the W171DIP-7, I find the following datasheet. <S> Page 10 shows the pinout, and it appears that pins 9 and 13 are not connected. <S> Maybe a different manufacturer? <S> W171DIP-7 Datasheet <A> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> Redrawn relay pinout. <S> Figure 2. <S> Datasheet schematic. <S> But I can't tell if 13 is tied to 14 or just overlapping. <S> The schematic is inconsistent. <S> There is a dot to indicate a connection on 1-14, <S> but it is missing on 7-8. <S> If 1 was connected to 2, then 6 would be connected to 7, and the relay would short-circuit the coil when energised.
pins 2 and 13 are the positive end of the coil and diode combo
External Memory IC which increments data on a clock pin I am searching, with no success, in multiple categories of the external memory IC market for a chip that can do the following: store 1MB of 16-bit data; this data is stored at specific addresses; when I put it into read mode it uses its 16 output pins to present its 16-bit values in memory, i.e. if the value at address 0x0000 was 0000 0000 0001 0010, this value would be on the pins as +V/0V; when I pulse a clock pin it will increment to address 0x0001 and present that value on its pins; preferably it has unlimited write cycles like SRAM; data can be volatile. I can find SRAM which matches the address on its input pins and then presents the value at that address on its output pins, but it won't clock automatically to increment, and requires specific reconfiguring of the address pins. An example that is close is the CY62256NLL-70ZRXIT, but it will not auto-increment its address with an external input on its display pins. Anyone got any suggestions? I suspect it's maybe a type of RAM that I am not aware of. <Q> For example, IDT has a line of chips of that type. <S> Unfortunately, the 1 MB parts (512k × 18 bits) have prices on the order of $200 in small quantities. <S> But the rest of your functional description is so vague that I can't be sure. <S> Also, if you have an FPGA in your system anyway, its on-chip block RAM can be configured to do the same thing (assuming that there's enough of it). <A> What you're describing doesn't exist as a single IC, as far as I know. <S> I have never seen such a device in the last few decades. <S> That function can readily be implemented with (a) a RAM chip and several 74xxx logic ICs or (b) a RAM chip and a cheap CPLD. <S> So there would be little incentive for an IC manufacturer to produce a dedicated chip with few and obscure applications that could be implemented with only two ICs. 
<A> I'd suggest this 10ns SRAM http://www.issi.com/WW/pdf/61-64WV102416DALL_BLL.pdf with a 20-bit counter in front of it. <S> Perhaps one made of loadable counters, so that you can set up the starting address from which you wish to start displaying data, such as 5 of these daisy-chained: <S> http://www.ti.com/lit/ds/symlink/sn74hc193.pdf or 3 of these much pricier 8-bit parts: http://www.ti.com/lit/ds/symlink/sn74as867.pdf <S> Similar parts are available in other logic families for different speeds. <S> It might also be less expensive to run two 1M x 8 parts in parallel.
You appear to be describing a FIFO chip, and contrary to the other answers, they do exist as COTS products.
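The counter-plus-SRAM arrangement suggested above can be sketched behaviourally in a few lines, just to show the intended operation (the class name and tiny memory are illustrative, not any real part):

```python
# Behavioural model of the suggested circuit: a loadable binary counter
# drives an SRAM's address bus; each clock pulse increments the address
# and the 16-bit word stored there appears on the output pins.
class CounterSram:
    def __init__(self, words):
        self.mem = list(words)  # contents, one 16-bit word per address
        self.addr = 0

    def load(self, start):
        """Loadable counter: preset the starting address."""
        self.addr = start

    def clock(self):
        """Rising clock edge: increment address, output the new word."""
        self.addr = (self.addr + 1) % len(self.mem)  # counter wraps
        return self.mem[self.addr]

dev = CounterSram([0x0012, 0xBEEF, 0xCAFE, 0x0000])
dev.load(0)
print(hex(dev.clock()), hex(dev.clock()))  # 0xbeef 0xcafe
```

In hardware the `load` step is the counter's parallel-load inputs and `clock` is its count input; the SRAM itself stays in read mode throughout.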
How to inflict ESD damage on a board? I have a mature product (designed by me) on a PCB. For science, I would like to induce Electrostatic Discharge (ESD) damage to the board so I can see how it behaves afterwards. This test would be purely academic. One idea that comes to mind is to apply the triboelectric effect using a vacuum cleaner. That seems like a plausible idea, since some users really use a vacuum to clean their board. What are other "average Joe" methods (for example: wrap the board in plastic) to inflict ESD damage? What are some exaggerated methods to inflict damage? (Like using a raygun or some other silly method) <Q> My go-to method is my ghetto ESD gun (a cheap electronic lighter), a la a long-neck lighter. Split it open, strip the wires and zap away :) <S> It should produce a few kilovolts. <A> Scotch tape being peeled was being investigated for defibrillators. <S> They also produce X-rays when peeled in a vacuum, apparently. <A> Bring your board to that room and repeat using the board instead of the doorknob. <A> If you have an old CRT TV or monitor gathering dust, there is a 2nd anode supply of 15-30kVDC typically. <S> Potentially lethal, so make sure you read up on the appropriate precautions. <S> Old-style (non-electronic) oil ignition transformers produce about 10kVAC at 10-30mA and are similarly potentially lethal. <S> They supply enough power to make a Jacob's ladder (the arc has to be of sufficient intensity to heat the air enough to drive it upwards for the ladder to work). <S> An old disposable camera (if you can still find one) typically has an electronic flash circuit that generates a few hundred volts and a storage capacitor that can deliver perhaps 5J for a small one (120uF charged to 300V), from a 1.5V battery. <S> The above goes well beyond what you would expect from static discharge, in terms of energy, also perhaps peak current and voltage. 
<S> If you want to do scientific studies of ESD, you should use a standard ESD test circuit (some resistors and a high voltage capacitor of something like 100pF). <S> You'll also need a DC high voltage supply to charge the capacitor, usually to some kV. <A> A targeted zapping with a "gun" as Sorenp mentioned is useful if you want to test something in particular, like a button that will be touched by a possibly charged-up user. <S> But if you want something more random/general, your typical office chair is a great generator of ESD and EMI, because of the fabric and foam interaction, plus good isolation from ground because of plastic wheels, plus a typically dry environment. <S> In fact, a good percentage of chairs keep snapping and cracking for a while after the sitter leaves - just listen! <S> So: if you are "lucky" to have such a chair, just sit, shuffle a bit, stand up and touch your circuit; or put it on the chair while connected. <S> More detail at http://www.emcesd.com/pdf/uesd99-w.pdf <A> You've never taken a sweater off in a dry room, then? <S> I'd think this is a pretty easy way to generate static charges without really moving much. <S> Also you can just rub wool on cotton, but the sweater method seems pretty tried and true. <A> Having had a lot of hassle with ESD-inflicted damage of circuitry, I'd like to add my 2 cents. <S> There are a lot of methods described in other answers which will work for sure to generate discharges which can damage electronics. <S> But what exactly is it you want to know? <S> If you cannot measure or reproduce the damaging event with some exactness, the gain of knowledge is going to be limited. <S> The damage depends on a lot of variables: voltage; charge; series resistance in the discharge circuit; behaviour of the surrounding circuit; <S> possible pre-damage of components due to other ESD events or different stresses. <S> Furthermore, it is very difficult to find out if you have successfully inflicted any damage. 
<S> Single MOSFETs may show a significant change of gate resistance upon a single ESD event. <S> But slightly more complex devices are very difficult to analyse and may hide an initial damage until they are completely broken by further events or subsequent deterioration. <S> E.g. by measuring your ESD source, or even probing the damage capability with a set of identical MOSFETs (not that expensive). <S> Make sure you have reliable means to detect damage. <S> If you do not take those measures, the outcome of your experiment will be mostly useless in my eyes, or the rationale might be reduced to "hey, they always were talking about this so-called ESD damage. <S> I tortured my board and I think perhaps there is such a thing as ESD damage. <S> Perhaps, probably, eeeh, perhaps."
Dry room + carpeting: find a room where you walk across the carpet, touch a doorknob and get a little zap. What I want to express is: before fiddling and trying to damage your board, clearly define what exactly you want to find out; design your experiment in a way you can repeat; make sure your device hasn't been damaged in advance (can be very, very difficult).
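The "standard ESD test circuit" mentioned above is essentially the human-body model: a capacitor of about 100 pF discharged through about 1.5 kOhm. The numbers below use those commonly cited HBM values to show the scale of a pulse:

```python
# Energy and initial current of a human-body-model (HBM) style ESD
# pulse: ~100 pF charged to some kV, discharged through ~1.5 kohm.
C = 100e-12  # HBM capacitance, farads
R = 1.5e3    # HBM series resistance, ohms

def pulse(v):
    """Stored energy (J) and initial discharge current (A) at charge v."""
    energy = 0.5 * C * v**2  # E = 1/2 * C * V^2
    i_peak = v / R           # current at the instant of discharge
    return energy, i_peak

e, i = pulse(8000.0)         # an 8 kV discharge
print(e, i)                  # a few mJ, a few amps peak
```

Even at 8 kV the stored energy is only ~3 mJ, which is why ESD damage is often latent rather than visibly destructive, as the last answer warns.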
Is it safe to use hot glue in electronics? I know the question seems too basic to ask, but I couldn't find any specific information anywhere. I want to use a hot glue gun to stick components and cables onto cardboard to prevent them from moving or breaking, but someone said hot glue is electrostatic and it'll cause problems. So can I use it, or should I switch to alternative methods? The circuits I'm talking about are MOSFET or IC circuits, and the hot glue will directly touch the pins. <Q> It's great for sticking things down when building prototypes, but I certainly wouldn't use it in any sort of production environment — there are far better choices. <S> You probably do want to make some effort to keep it off the pins of your devices anyway, because if you subsequently need to (re)solder those pins, the high temperature of the soldering iron will either melt or break down the glue, creating a bit of a mess. <A> Dave means it is both a good electrical and thermal insulator, so local heating will melt it, and it is not a great adhesive if running hot. <S> Use polyurethane if you need it. <S> Plastic will hold a charge, but will sink more current than air for Miller capacitance, though not as much as the semiconductor's internal Miller gate capacitance. Not useful in industry, <S> but if cold, OK for a prototype. <S> ONLY PU or polyurethane is used for structural THT. <S> Plastic has a relative dielectric constant of about 3 to 4x air <S> for crosstalk capacitance, but the electrode width/gap determines that value per unit length. <A> I used the stuff as a board coating in years past, even in production. <S> Better than anything I've ever seen. <S> Fast "drying", very low leakage, cheap, no noxious outgassing.
Hot-melt glue won't cause problems directly, but it also has no static-dissipative properties (good insulator), so it won't prevent problems, either.
Why do microwave ovens use magnetrons? With a lot of advancement in solid state electronics and signal manipulation, isn't it easier to simply take high-amplitude signals with frequencies near 1 MHz and multiply the signals using diodes and frequency filters (LC/RLC) than to use a magnetron? Since in frequency multiplication the amplitude is halved, we can take much higher amplitudes for the low original frequencies, which is easier to do than amplifying very high frequency signals. <Q> Magnetrons are cheap, reliable, pretty efficient (65% or so, and they tolerate high temperatures, so heat sinking is easy) and made with mature technology. <S> They are also reasonably tolerant of VSWR issues (if the user does not put a proper load in the oven, for example). <S> They don't really allow the frequency to change much without expensive mechanical tuning, which is not available on consumer ovens, <S> so standing waves tend to appear in the oven. <S> To get 1000W-ish of microwave power any other way would be more expensive and possibly more fragile. <S> It's possible today, but too expensive. <S> Of course the semiconductor makers are always looking for the next big market, but the oven market is going to have to wait more years, I think. <S> One of the few advantages they might have is to allow the frequency to be modulated, which could reduce or eliminate the need for turntables and stirrers. <S> However, that could have implications in other areas of the oven design, such as the door, which is designed to attenuate one particular frequency. <A> The domestic microwave oven needs high power to cook the meal and high frequency to excite the water molecules. <S> What is not needed is high stability, because the water energy absorption spectrum is broad. <S> ( 1 , 2 ) <S> The magnetron does this cheaply. <A> For a linear circuit, in the best case, you can transfer 50% of your input power to the wave, and the other 50% of the energy heats up your circuits. 
<S> https://www.microwaves101.com/encyclopedias/maximum-power-transfer-theorem <S> For high-power amplifiers (with some tricks), the power efficiency is about 70-80%, for example in a class B amplifier. <S> It varies with the impedance of the load. <S> In this example, changing the food's condition presents a new impedance to the circuit, and this changes the efficiency. <S> So a hard mechanical body can withstand the temperature when you transfer high energy. <S> It is a cheap technology and it has a longer history than many known circuits.
The low price and low duty cycle of the domestic microwave means that they should last for many years despite falling magnetron output.
why we need self-synchronization? I am new to electrical engineering. Just a question on the lack of synchronization problem. Below is a picture from my textbook: I am very confused, time is unique to everyone in the world, so if both sender and receiver agree that, for example, the pattern of 0.001s represents a bit, so we won't have any synchronization problem any more, isn't it? <Q> if both sender and receiver agree that, for example, the pattern of 0.001s represents a bit, so we won't have any synchronization problem any more, isn't it? <S> This would work in theory, however it requires both sender and receiver to have infinitely accurate clocks that will not drift relative to each other. <S> Real world clocks always have some inaccuracy and drift. <S> Quartz oscillators are pretty good, especially considering how cheap they are, but they are not perfect. <S> There is no perfectly accurate clock with zero drift. <S> Say your sender and receiver both use 1 MHz ±50 ppm clocks. <S> In the worst case, one clock will run at 1000050 Hz and the other at 999950 Hz, so you get 100 ppm drift between the two. <S> The only practical way to have two synchronized clocks is to actually synchronize them by slaving one clock to the other. <S> Also, time is not "unique to everyone in the world" as you say. <S> For example, relativity predicts that gravity influences time, so the frequency of a clock also depends on how far it is from Earth (i.e., altitude)... <S> If the sender and receiver are communicating via radio, and they are moving, then a Doppler shift will occur and transmission delay will change. <S> For example, if a cellphone transmits at 2 GHz from inside a car moving at 100 km/h away from the cell base station, then the frequency the receiver gets will be Doppler-shifted by about 185 Hz. <S> Also, the transmission path length will change over time, which changes propagation delay. <S> The receiver must account for this (among lots of other factors).
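The 185 Hz Doppler figure quoted above can be checked with a quick back-of-the-envelope sketch, using the non-relativistic approximation shift = f·v/c and the numbers from the answer:

```python
# Sketch: checking the Doppler-shift figure from the answer above.
# Non-relativistic approximation: shift = f * v / c.
C = 3.0e8  # speed of light, m/s

def doppler_shift_hz(carrier_hz, speed_mps):
    """Approximate Doppler shift for a transmitter moving at speed_mps."""
    return carrier_hz * speed_mps / C

# 2 GHz carrier, car receding at 100 km/h (= 100/3.6 m/s)
shift = doppler_shift_hz(2e9, 100 / 3.6)
print(round(shift, 1))  # ~185.2 Hz, matching the figure above
```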
<S> Even if you had two perfect clocks, propagation delay would still have to be accounted for, say when the user replaces a 1 meter HDMI cable with a 2 meter HDMI cable. <S> That extra meter would add about 4.3 ns delay (assuming 70% speed of light in the cable) <S> corresponding to about 15 bits (per lane) at 3.4 Gbps. <S> That's why the clock is usually transmitted with data (either using its own wires or embedded in the signal) to allow the receiver to synchronize its local clock. <A> You've touched on the answer in your question: "if both sender and receiver agree". <S> The problem is, how do you ensure that they both agree? <S> There are a few methods: <S> Use a global clock, shared between all devices. <S> Signals are then registered at the edge(s) of this clock. <S> The design must ensure that the signals are at settled values for a specified time either side of the clock edge (setup/hold time). <S> Agree a standard rate with some tolerance, with no clock shared between the devices. <S> If both the sender and receiver are at the same rate within tolerance, then data can be sent successfully. <S> There must be a signal that data transfer is commencing, and then the receiver can register signals at the agreed rates. <S> Embed the clock within the signal. <S> An example is Manchester coding, which you can find in IR remote controls. <A> That can be done by distributing the clock to the receiver (as in SPI communication ). <S> So long as the signal and clock paths are similar enough in length that the clock edge and data arrive at close to the same time this will work well. <S> There are also asynchronous communication schemes where the receiver effectively recreates the clock locally from the data. <S> Provided the receiver and sender have clocks that are close to each other in frequency (in the 1% range is good), a single 8-bit byte (perhaps 11 bits with parity and start/stop bits) can be reliably transmitted.
<S> For each succeeding byte the receiving side effectively re-synchronizes with the transmitter. <S> Otherwise there would be a limit on how many characters could be transmitted in a row before the clocks got too much of a bit time out of sync. <A> We actually do this from time to time. <S> However, the fundamental issue with "if you see this 0.001 second sample, it means a '1'" is that finding a sample is a lot harder than already knowing where to find it. <S> You may need to sample several times faster than the data rate in order to get a clear picture of that millisecond sample. <S> Meanwhile, a system which has achieved synchronization doesn't need this, so it can transmit data at a much higher rate. <S> Nowadays, it's very common for the transmitter to embed a clock signal in the data. <S> One of the more applicable methods used to do this is "comma codes" in 8b/10b encoding. <S> 8b/10b encoding is a way of encoding 8 bits of logical data in 10 bits worth of data sent on the wire. <S> It has some really nice properties, such as having no DC bias, which makes the physical hardware easy to build. <S> Comma codes are symbols that only appear at the end of a code word. <S> 8b/10b encoding uses comma symbols with 5 ones or zeros in a row -- data never has more than 4. <S> Thus, if the receiver ever sees a period of high or low which is too long to be 4 bits, then it knows this must have been a comma code. <S> It can then synchronize to it, and start reading data. <S> Of course, this signal is 5x slower than the data it is embedded in, which makes it easier to pick up without having to sample at an incredibly high rate. <S> Once that sequence is locked in, data is transmitted at the high rate.
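As a rough sketch of why per-byte resynchronization works, the ±50 ppm example above can be turned into an estimate of how many bit times fit before two free-running clocks slip by half a bit (a common rule-of-thumb failure point for asynchronous sampling):

```python
# Sketch: bit times before two free-running clocks slip by half a bit,
# using the worst-case 100 ppm combined drift from the example above.

def bits_until_slip(ppm_a, ppm_b, slip_fraction=0.5):
    """Bit times before the accumulated drift reaches slip_fraction of a bit."""
    worst_case_drift = (ppm_a + ppm_b) * 1e-6  # e.g. 50 + 50 -> 100 ppm
    return slip_fraction / worst_case_drift

# Two +/-50 ppm oscillators: half a bit of slip after ~5000 bits.
print(round(bits_until_slip(50, 50)))  # 5000
```

An 11-bit async frame therefore accumulates only about 0.1% of a bit of slip at 100 ppm, far inside the half-bit budget, which is why resynchronizing on every start bit is enough.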
To get a sender and receiver to agree so that the receiver can accept valid data, the receiver has to know when the received data is valid for a given bit.
What happens if we take out two diodes from diode bridge? In all textbooks diode bridge that transforms AC to DC is drawn as having 4 diodes like this simulate this circuit – Schematic created using CircuitLab I wonder, what would happen if we replace, for example, D1 and D4 with usual wires? In order for current to flow, the circuit has to be closed. However, the diodes D2 and D3 should stop the current from flowing through the lamp in the wrong direction. Nowhere could I find full explanation of why isn't that so and in fact all four diodes are needed. <Q> Some common rectifier circuits, and your proposal at number 4, simulate this circuit – Schematic created using CircuitLab <A> simulate this circuit – Schematic created using CircuitLab Figure 1. <S> D1 and D4 have been replaced with wires. <S> LAMP1 has been disconnected for clarity. <S> It should be obvious from the modified schematic <S> that (D4) and (D1) have short-circuited your AC supply. <S> The lamp will never light (even if connected) and if the transformer is inadequately fused it will overheat and, possibly, catch fire. <S> Nowhere could I find full explanation of why isn't that so and in fact all four diodes are needed. <S> Bookmark this page! <A> It is not unusual in measurement applications (eg voltmeters) to use a bridge arrangement with resistors replacing two of the diodes. <S> This has the advantage of reducing the diode drop to that of a single diode rather than two diode drops, so it improves linearity at low voltages.
If you replace D1 and D4 with wires you will have a direct path that will short the ac supply (V1).
In its former, pre-burnt-out life, was this thing a capacitor? I've a problem with a ventilation system that stopped working shortly after power-on. I've pulled out what I believe is the power supply board and I can see an obvious problem, but I'm not entirely sure what the component is near the center that set on fire. Physically it seems virtually identical to a blue disc-shaped, two-legged one on the far left of the board between U4 and U3. That component is labelled C15, and the burnt out one is labelled RV1. Is there any hope of identifying the damaged one or do I need a manufacturer schematic to know the specifics (E.g. capacitance)? And a closeup with flash: Edit: couple of additionals Edit2: my neighbour has an identical unit. I was able to photo his board without disturbing it, alas no markings: <Q> It almost certainly was an MOV, Metal Oxide Varistor, a type of transient suppressor. <S> All of this points to primary-side protection. <S> In the center photo, the signal path is from the input wires, left through the fuse, up to the first layer of capacitors (both line-to-line and line-to-earth), right through common mode choke L1, then through another layer of capacitors, then down through the bridge rectifier. <S> If the supply is a universal-input type, then the MOV is rated for something in the 270-300 Vac range. <S> The physical size determines how large a transient it can absorb. <S> You can get approximately the same protection by matching its diameter. <A> After looking at the creepage clearances in the layout, it's not up to code, or zero margin at best. <S> Scrap it. <S> A grid fault transient blew up your AC MOV clamp, with only a CM filter choke to limit current. <S> Better designs have differential-mode (DM) or higher-inductance chokes to limit current, <S> so for these 10 µs transients at >1 kV to 6 kV, the voltage is filtered and does not reach the unprotected circuitry, since the clamp kicks in across line or to earth.
<S> But if the current limiting of the choke is not enough, it breaks down, arcs across and burns up. <S> When the MOV breaks down it safely absorbs a certain number of joules, per its rating. <S> But if the voltage rises fast enough to exceed the breakdown rating, then the coil arcs, and the follow-on current of the grid until the <S> zero crossing can/will burn a hole in the board. <S> This design might have passed, but your incident level was worse. <S> It can be repaired with new MOVs, test parts and a diode bridge, but I doubt it will be done on the 1st try. <S> All bulged e-caps are dead, and maybe the ICs too. <S> Florida users see this with substandard PS products often. <A> "RV" probably refers to a varistor. <S> They are used to protect equipment from over voltage coming in through the power supply. <S> Yours is in the area of the connection to line voltage - the green, blue, and brown wires there close by. <S> It's hard to tell if it was successful or not. <S> Varistors are intended to catch transient events (short peaks of voltage beyond the normal line voltage). <S> C5 (the electrolytic capacitor by the blue wire) looks to have taken some damage as well. <S> The top should be flat, but it appears to be bulged in the picture. <S> Electrolytics that fail usually bulge their lids. <S> The first thing to do is to find out why there was such a prolonged high voltage event on the power line. <S> Once you find and fix that, you can remove RV1 and replace C5. <S> The circuit can operate normally without RV1, though it will be unprotected against the kind of thing that killed the first one. <A> My similar power supply burned up that component too. <S> My burned up, leftover disk measured 10 mm; a similar power supply that I opened for comparison had a 15 mm item in that location labeled NTC 5D-15, which is a 5 ohm thermistor rated for 6 amps. <S> I surmise the 10 mm zombies on my board <S> were 5D-10 (3 amp) versions of the same item, two of them, wired in parallel.
<S> NTC 5D thermistor and how to use it: <S> I learned from the above posting that this thermistor, in this circuit, is an in-rush current limiter that goes from 5 Ω to ~0 Ω in a few seconds. <S> It does indeed come first in the circuit (see the third pic below that shows the heat signature under the thermistors, but also shows that they were connected to the neutral 'line in' terminal). <S> My board was in a new installation powering LEDs with 12 VDC. <S> I had 30 meters of LEDs with sales info of 1 A/m, but that seemed high to me and I expected it to be less, although I hadn't measured it. <S> On my PS there was a hot smell in just a few minutes and it failed completely in a few hours (why I left it on, Nooo one Knoows!). <S> My board also showed heat under the bridge rectifier. <S> I do not think the load of the LEDs caused the heat in the bridge rectifier, because the 40 A power supply that I opened to find out what had failed (the thermistors) has that same bridge rectifier in the nominally same circuit, and it has been running the load for several weeks while repair parts were on order; so the bridge rectifier itself (rather than too many LEDs) may be the source of the overcurrent on the thermistors... or it may be something else. <S> I will put in the (larger) thermistors and replace the bridge rectifier, and will run it under very light load and see. <S> I'll get back to you, I hope.
I have received a pack of NTC 5D-15 and one of GBU-808, the bridge rectifier (in case it was the 'problem'). It seems your varistor has given up its life to protect your equipment. As burned as yours is, that "transient" seems to have been a rather long event - the rest of the circuit may have been damaged when the varistor burned through. It is on the AC side of a bridge rectifier, next to wires marked Line and Neutral, and next to an AC-rated film capacitor (the yellow box) that is part of the power line input noise filter. The Power supply was rated for 33 amps (400W).
Is there a switch that can be made when it is spun around a given axis, like a gyroscope solenoid? Given in the title, I'm looking to design a small embedded system that only turns on when the part it's inside is spinning. The electronics is completely enclosed. I've done some googling for rotary switch, or gyroscope solenoid but to no avail. Do these things exist? I'm looking for some kind of relay to make a battery connection. Needs to be miniature i.e. MEMs sized. At a guess, maybe 100RPM turn on speed, max 24,000RPM. Thanks in advance <Q> If I liked mechanics, I'd just have a weight attached to some spring, mounted so that the centrifugal force will pull it radially and hit a tactile switch. <A> I'd go with something with no moving parts if you can. <S> Maybe an optical reflector, with a reflective surface mounted on the spinning part. <S> You can use a retriggerable one-shot to keep the output high until the spinning stops, and you can use the output to drive whatever type of relay/switch/FET you need. <S> One thing to watch out for is if it stops with the mirror on top on the optoreflector. <S> The one-shot would need to be edge triggered. <A> Instead of a gyroscope, just get an accelerometer and mount (one of) its sensing axes <S> radially – being spun causes a centrifugal force, which is measurably an outwards acceleration. <S> Many accelerometer ICs come with an interrupt pin that you can use to wake up a microcontroller. <S> Gravity is not really a problem – if you can mount your accelerometer at a radius sufficiently sized that the centrifugal force outshines gravity. <A> You are looking for a centrifugal switch. <S> A centrifugal switch is an electric switch that operates using the centrifugal force created from a rotating shaft, most commonly that of an electric motor or gasoline engine. <S> The switch is designed to activate or de-activate as a function of the rotational speed of the shaft. 
<S> (Wikipedia) Just go to Amazon and have at it using that terminology. <S> Looks like they are pretty reasonably priced. <S> $9.00! <A> It is a cool feature that even turns off the system oscillator. <S> It would not turn on unless the PCB is spinning, and even if the PCB were stopped exactly on the permanent magnet, it still wouldn't turn on because there is no change in flux. <S> Basically a one-coil motor, and the output voltage from the coil turns on the MCU. <S> No mechanical switches or springs. <S> Super low energy consumption when not in use. <S> Very compact. <S> You'd have to size the inductor and magnet to generate the proper turn-on signal, but that's easy to do. <S> You'd probably use a 3.3 V MCU so the inductor/permanent magnet could be as small as possible. <S> Here is a quote I found: PIC microcontrollers' Sleep feature is an extremely useful mechanism to minimize power consumption in battery-powered applications. <S> In Sleep mode, the normal operation of a PIC microcontroller is suspended and the clock oscillator is switched off. <S> The power consumption is lowest in this state. <S> The device can be woken up by an external reset, a watch-dog timer reset, an interrupt on the INT0 pin, or a port-on-change interrupt. <A> A vibration sensor switch will make (close) when the external spring touches the internal rod. <S> If this was placed parallel to the rotational axis but non-concentric, a low rotation would produce enough force to make it. <S> I assume that gravity however is not enough to make the switch! <S> (Image source: Adafruit Fast Vibration Sensor Switch)
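To size the accelerometer approach suggested above, the centrifugal acceleration at the two speed limits can be estimated; the 2 cm mounting radius used here is my own illustrative assumption, not a figure from the question:

```python
import math

# Sketch: outward (centrifugal) acceleration seen by an accelerometer
# mounted off-axis.  The 2 cm radius is an assumed value for illustration.

def centrifugal_accel_g(rpm, radius_m):
    """Outward acceleration in g for a point at radius_m spinning at rpm."""
    omega = 2 * math.pi * rpm / 60.0  # angular velocity, rad/s
    return omega ** 2 * radius_m / 9.81

print(round(centrifugal_accel_g(100, 0.02), 2))  # ~0.22 g at the 100 RPM threshold
print(round(centrifugal_accel_g(24000, 0.02)))   # ~12878 g at 24,000 RPM
```

Note the enormous dynamic range: a low-g part easily detects the 100 RPM threshold, but at 24,000 RPM the same mounting point sees thousands of g, so a wake-on-threshold interrupt (rather than an accurately scaled reading) is the practical use.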
You would have a permanent magnet fixed on the outer casing and then when the circuit board spins, the inductor will sense a change of magnetic flux and create a voltage that gets sensed by the wakeup pin of the microcontroller. Another solution is to use a microcontroller's sleep mode.
Using two 12V batteries in series with both a 12V and 24V output I have a tornado shelter that I have wired for both 120V AC and 12V DC. All of the lighting is via 12V DC while the AC is used for recharging the batteries and running a 1500 VA UPS system that will power a small TV and radio. The UPS system normally uses two 12V SLA batteries in series. To increase the run time of the UPS I want to use auto batteries instead of the SLA batteries. In order to cut down on the cost of batteries I want to use only two auto batteries instead of three. One 100AH battery used for the main lighting and a 50AH battery in series with the 100AH battery to provide the 24 volts for the UPS system. The UPS system will only use the two batteries when the AC is out which is normally rare. Do you see any problems with this setup providing I install a diode in the UPS battery feed to prevent the UPS from recharging the batteries and use separate smart chargers constantly connected and recharging (as long as AC is available) each individual battery? I feel I have overlooked something but I can't seem to put my finger on it. <Q> They are not expensive and it simplifies things if someone else has to conduct repairs. <A> Verify that your charger outputs are isolated from ground. <S> You probably don't want to use generic automotive batteries -- you want to use deep discharge batteries, of the sort designed for RV lighting, golf cart driving, etc. <A> Two batteries in series with different loads on each will often lead to mismatches in aging and charging. <S> Different capacities = worse again. <S> But in your application this seems "reasonably well catered for". <S> This would usually be what you'd expect. <S> Chargers with a ground input lead MAY have noise filter capacitors from both inputs to AC ground and MAY connect this to DC negative out - but usually not. <S> A simple ohmmeter check will allow isolation to be checked. 
<S> The diode in the UPS lead will probably not cause a problem. <S> A P-channel MOSFET with low Rds(on) and appropriate connection will allow a low-loss diode to be formed <S> (drain to B+, source to output, gate to ground, as long as Vgs max >> 12 V). <S> If discharge is often much lower than that then a deep-cycle battery may be wise. <S> Even deep-cycle batteries should not be taken below say 60% SOC (higher is better, much higher much better) for longish life. <S> As Tim says - try to ensure that the chargers treat the batteries well. <S> Worst case, capacity can be much reduced, but this is unlikely. <S> More likely is lack of a properly applied topping/boost charge after use, and so a reduction in capacity with time, but usually not a vast one. <S> A VERY OCCASIONAL deep discharge test - maybe annually - will give you a reasonable indication of battery health. <S> Serious requirements end up with SOC monitoring and acid checking and so on, <S> but that is probably beyond what you need here.
I would avoid all the possible issues of isolated chargers etc. by having the 24 V battery bank and running the lights from a 24 V to 12 V converter. The two mains-based chargers need to have isolated outputs - i.e., not share a ground (or any other) connection between their output and input circuits. What you are proposing would not usually be deemed "good practice" BUT sounds reasonably workable in your case. If the lights never discharge the battery to more than say 90% of full charge then an automotive battery may last a long time. A Schottky diode will give less loss.
Practical considerations when using a large number of capacitors in parallel Capacitors can be wired in parallel for higher total capacitance and lower total ESR. What are some practical constraints/pointers for using many parallel capacitors (for example, 10, 25, 50, or 100)? Application example: using thirty 330 µF aluminum polymer capacitors to replace two 5000 µF wet electrolytic ones inside devices intended for long-life at low-variance operating temperatures (for example, illuminated ocean buoy). <Q> As DKNguyen pointed out it is important to "account for" or "manage" failures. <S> We put capacitor banks on the 28 VDC power bus of spacecraft all the time. <S> EVERY cap had its own fuse, so if the cap shorted out - which was the predominant failure mode - the fuse would blow and take the cap out of the circuit. <S> We were also concerned about partial shorts of a cap, so we broke the entire bank into 2, 3, or 4 sections and each section was fed with a relay so that we could disable that section of the cap bank if there was a partial short. <S> If you can, use a cap that has detailed specs on internal resistance and inductance and make a model on Spice (I use LT Spice) and check out the response across a wide frequency range - maybe up to 10 MHz. <S> Thanks!! <A> Fusing is good, as stated by xstack . <S> Ripple current sharing can be an issue when AC currents are high and tracks are long. <S> Equalising trace impedances is good practise here. <S> If, say, one capacitor out of a parallel group of eight was near some hot choke and its ESR fell then <S> it would hog the ripple current and die young. <A> Any capacitor turning into a dead short for any reason will have access to all the short circuit current of all the other capacitors combined. <S> Which can lead to something that could be described as either a self repair mechanism (the shorted capacitor will be demolished and neutralized quickly) or self destruct mechanism (so will anything near it).
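For the ideal, first-order numbers in the buoy example above, capacitance adds and ESR divides by the count; the 25 mΩ per-cap ESR used here is an assumed illustrative value, not from the question:

```python
# Sketch: ideal scaling for N identical capacitors in parallel.
# The 25 mOhm per-cap ESR is an assumed value for illustration.

def parallel_bank(c_each_f, esr_each_ohm, n):
    """Total capacitance and ESR of n identical capacitors in parallel."""
    return n * c_each_f, esr_each_ohm / n

c_total, esr_total = parallel_bank(330e-6, 0.025, 30)
print(round(c_total * 1e6))       # 9900 uF, close to the two 5000 uF caps
print(round(esr_total * 1e3, 2))  # 0.83 mOhm total ESR
```

In practice the sharing is only this clean when trace impedances are equalized; as the answer notes, an asymmetric layout lets one capacitor hog the ripple current.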
Cascade failure is theoretically possible (if another capacitor is shorted due to the damage). Thermal stability should also be considered: the capacitors should be placed in a part of the product enclosure with a relatively even temperature.
If a module is "arduino compatible", does that necessarily mean it isn't compatible with other microcontrollers? I'm trying to find a GSM module to pair with a PIC32 microcontroller for a project and I've found some nice ones. The one I'm really interested in is the SIM800C GSM module. Many of them say they are "arduino compatible." Does that mean they aren't compatible with other microcontrollers? As an example, take a look at this one . It says it's compatible with arduino and raspberry pi which is great but I'm going to be using the PIC32. Do you think it will work? <Q> The term "Arduino compatible" basically means nothing. <S> Many sellers use the term more as marketing ("You can make this work with your Arduino!") than anything else. <S> The Arduino is (usually) just an ATMega microcontroller on a board, and that ATMega is very similar to many other microcontrollers. <S> Nearly all of them work with a supply between 2 V and 5 V as does the Arduino. <S> If a peripheral also works within that supply voltage range (and almost all of them do) <S> then you could call that peripheral "Arduino Compatible". <S> But it will work with almost any other microcontroller just as well, including your PIC32. <A> To check if it will work with your specific platform, you can check whether its specifications are compatible with your platform. <S> For modules you usually check power supply range, interface type and interface voltage. <S> In your specific case these are: power supply range: 5 V - 18 V; interface type: UART TTL up to 115200 baud; interface voltage: up to 5 V. <S> So it will work with your PIC, and you can check the same for your next module. <A> It will certainly be compatible with any microcontroller.
<S> The keyword "Arduino compatible" generally means that some of the following are true (from most to least likely): <S> Module likely uses 5V signaling, or is at least 5V tolerant <S> It likely interfaces using one of the hardware interfaces available on Arduino (serial, I2C, or SPI) <S> There may be an Arduino library available <S> It may have pin layout that works as Arduino shield (but probably not, unless it's specifically advertised as a shield) <S> But, it may also be that none of the above are true. <S> Sellers oftentimes use "Arduino" as a catch all keyword to attract sales without any regard how well it works with Arduino. <S> As you can see, none of those features mean that it is proprietary to Arduino and won't work on any other microcontroller with appropriate hardware capabilities. <S> Whether there is an existing library for that microcontroller is another matter, you may have to implement one yourself. <S> But most popular micros will have libs for most common types of hardware. <S> Your specific item listed supports both 3.3V and 5V signaling <S> and it interfaces using TTL serial, which makes it hardware compatible with practically all general purpose MCUs.
When it says it is compatible with something, you know that it will work with that specific platform (if the seller doesn't lie).
Two different ultrasonic range finders to minimize interference Basically, I am building a robot that needs to be able to detect the distance from multiple sides. My plan is to use multiple ultrasonic range finders on each side, but I'm worried about interference between the sensors. Do there exist sensors that output and receive different frequencies to ensure that they don't interfere with each other? <Q> The usual approach is to trigger only one sensor at a time. <S> In other words, time-division multiplexing, rather than frequency-division multiplexing. <S> Otherwise, even if the sensors aren't pointing in the same general direction, coupling through the structure of the robot itself can create unnecessary interference. <S> Just to put some numbers with that: Ultrasound requires about 6 ms per meter of sensed distance. <S> If you're interested in distances up to 2 meters from your robot, then each sensor needs 12 ms. <S> If you have 8 sensors, that's a total of about 100 ms, which means that each one can take 10 measurements a second. <S> This should be plenty for a robot that can move at up to several meters/second, depending on how quickly it can stop, of course. <A> The transducers exist, but the ranging modules do not (for the hobbyist range at least). <S> Selection amongst ranging modules is very low. <S> The selection is pretty much limited to ~40 kHz for broad-beam long range detection and ~235 kHz for narrow beam short range detection. <S> That's it. <S> Frequency determines range and cone angle, so if you are looking the "same way" in multiple directions you will end up with similar frequencies, which a normal simple ranging sensor won't be able to differentiate. <S> Your most practical approach would be to do what the others suggest, which is to stagger your transmit-receives between your sensors.
<S> If you go this far then you might as well go all the way and get wideband transducers (probably electrostatic transducers) that are reasonably effective from 20 kHz to 100 kHz. <S> I would list the only manufacturer I know of that makes these available to hobbyists but that would be off-topic. <S> Using these ranging modules you can adjust the frequencies you transmit on the fly for long-range, wide beams or short-range, narrow beams. <S> If you get into more complex processing you can also do things like transmitting a swept frequency (chirping) to make the transmissions more unique so you can pick them out, or use phased arrays for beam forming. <S> Basically a lot of what fancy radar can do, but with sound. <S> Big project though. <S> More complicated than whatever project you have going on right now. <S> But I'm getting some red flags because you imply that you need to be able to measure the distance from two sides almost simultaneously. <S> The most common reason for that is close-range obstacle avoidance, which ultrasonics is not the best solution for. <S> But you never said why you needed to detect distance from multiple sides or what ranges you need, <S> so I cannot be more helpful. <A> Dave's answer is certainly one way to go. <S> Using the same part which operates at the same frequency, but then multiplexed within time, you can achieve data acquisition from both sides. <S> However, considering you have to wait a certain amount of time between pings, it may limit your overall sampling rate more than you would like. <S> How much this matters depends on various factors like the environment and how fast your bot is moving.
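The time-division budget described above is easy to sketch; these helpers assume a speed of sound of ~343 m/s at room temperature:

```python
# Sketch: time-division multiplexing budget for ultrasonic sensors.
SPEED_OF_SOUND = 343.0  # m/s at room temperature (assumed)

def slot_ms(max_range_m):
    """Round-trip time (ms) a sensor must wait for its furthest echo."""
    return 2 * max_range_m / SPEED_OF_SOUND * 1000

def per_sensor_rate_hz(n_sensors, max_range_m):
    """Measurement rate each sensor achieves when fired one at a time."""
    return 1000 / (n_sensors * slot_ms(max_range_m))

print(round(slot_ms(2.0), 1))                # ~11.7 ms slot for a 2 m range
print(round(per_sensor_rate_hz(8, 2.0), 1))  # ~10.7 Hz each with 8 sensors
```

These figures match the ~12 ms per sensor and ~10 measurements per second quoted in the first answer.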
In which case, there do exist sensors that operate at different frequencies so that you can achieve simultaneous operation and double your potential sampling rate. But if you have lots of time and patience then it is possible to go all out and build your own ranging module.
Why do some PCBs have the courtyard in the silkscreen layer? I've seen quite a lot of PCBs which have the passive components outlined all the way in their silkscreen. Something like this: Here there is an outline around each passive which goes (almost) all the way around, and seems to be sized exactly the same as the courtyard - the lines for components placed side by side overlap. This style is apparently very common in high-volume stuff designed and mass produced in China - in this case a Bluetooth module, but I've seen lots of other boards which are the same. Why is the silkscreen done in this way? Is there a functional reason for it? Is there some way it makes assembly or inspection easier at volume? <Q> Picture the workflow for initial prototypes on a long string of resistors like on that board. <S> If not for the outline, you'd be forced to start at one end and proceed to the other, or risk getting off by half a component. <A> I do this all the time, the main reason being it allows a nicer visual appearance. <S> The other nice thing about it is it enables a PCB designer to space components with minimum width between them. <S> Space can be a major factor in many PCB designs; using the minimum width can save space (and make the design smaller) <A> Or to identify missing components. <S> For example, I see two sets of pads with no components installed. <A> Silkscreen borders make the PCB layout easily readable by humans. <S> They are not useful for automated PCB population or AOI. <S> Modules for electronics enthusiasts need to have readable PCB layouts because a lot of eyes will be looking at those PCBs. <S> Mass-produced PCBs in finished products which are not intended to be serviceable often have little silkscreen, sometimes only identification codes. <A> The courtyard and the silkscreen are different things. <S> The silkscreen is visible on the board, the courtyard is a design concept and visible only in the PCB Design Application.
<S> Generally the silkscreen outline is slightly smaller than the courtyard outline. <S> Courtyards can touch each other, so if the silk outline were the same, two resistors would have a shared silk line if their courtyards touched. <S> So the silk line must be a bit on the inside of the courtyard to make sure that there is some space between the silk lines of two different components, as you see on boards. <S> The silk outlines and courtyards are suggested by the IPC-7351C document. <S> In IPC-7351B the courtyards were rectangular; they can now be "arbitrary" and more closely follow the component's outline. <S> The silk outlines for resistors, diodes and capacitors are not rectangular either. <S> Below you can see a detail of one of my boards. <S> I haven't updated the outlines of all my components yet - the lines in blue are the silkscreen lines, the lines in grey are the courtyards. <S> You can confirm that this is in line with my explanations above. <S> Ben Voigt's remark caused me to look in more detail. <S> The picture has some cases where the shared lines are larger (around the crystal) and other cases where the lines are smaller (between the columns on the far right). So it seems that the designer may have done one or more of the following: <S> Not used courtyards and only had silk lines, using them as some kind of courtyard; Not respected design rules if he did have actual courtyards; <S> Had overlapping "courtyards" (for the cases where the silk lines are smaller) <S> - and this resulted in the production files being automatically adjusted to avoid having silk on the pads (these adjustments may be made by the PCB design tool, and are in my experience also applied by the manufacturer).
Makes visual inspection of correct component placement easier to spot.
Why use a PMOS transistor instead of a diode for ESD protection? I am working on updating a PCB design by adding ESD protection to the power supply. I see that an experienced engineer at my company has placed a PMOS transistor rather than a series diode, and I'm not sure why. The PMOS has the drain connected to the power supply, gate connected to ground, and source connected to the load. What would be the benefits to doing one over the other (PMOS vs. diode)? Am I way off base thinking that they are interchangeable? <Q> The diode is simpler, but it is lossy as it will drop some voltage across it (0.4 V to 1.0 V, depending on the type). <S> The p-channel MOSFET should have a much lower voltage drop if chosen properly. <A> The MOSFET or diode are probably not being used for ESD protection. <S> ESD protection usually uses a diode ( TVS ) in a configuration like this: <S> Source: <S> https://www.semtech.com/technology/esd-protection <S> Both circuits provide reverse polarity protection. <S> The diode has more loss (the best Schottky diodes drop about 0.2 V, and silicon diodes about 0.7 V). <S> This means the loss is quite high, as it is the diode voltage drop multiplied by the current running through the diode. <S> In many applications this loss is unacceptable (and leads to diode heating), which means a MOSFET is better as it is very low loss (10s of mΩ or less). <A> An on-state FET can be modelled as a resistor, ideally a small one (a few mΩ is typical of power devices), while an on-state diode is more like a voltage source. <S> For instance, say your device draws 100 mA from the supply. <S> A diode with 0.7 V forward voltage at this current would waste 70 mW of power. <S> A FET with a 10 mΩ on-state resistance would waste 0.1 mW. Worth noting, also, is that this is reverse polarity protection, not ESD protection. <S> ESD protection involves different topologies entirely.
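The loss arithmetic in the last answer can be sketched in a few lines; the 100 mA load, 0.7 V forward drop, and 10 mΩ on-resistance are the answer's own illustrative figures:

```python
# Conduction-loss comparison for the two reverse-polarity protection
# options discussed above. Values are illustrative, from the answer.

def diode_loss(v_f, i_load):
    """Power dissipated in a series diode: P = Vf * I."""
    return v_f * i_load

def fet_loss(r_ds_on, i_load):
    """Power dissipated in an on-state FET: P = I^2 * Rds(on)."""
    return i_load ** 2 * r_ds_on

i = 0.100                        # 100 mA load current
p_d = diode_loss(0.7, i)         # silicon diode, ~0.7 V forward drop
p_f = fet_loss(0.010, i)         # 10 mOhm p-channel MOSFET

print(f"diode: {p_d*1000:.1f} mW")   # 70.0 mW
print(f"FET:   {p_f*1000:.1f} mW")   # 0.1 mW
```

The FET's loss also falls with the square of the current, so the advantage grows at light load.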
The purpose of the diode and MOSFET are for reverse polarity protection, not ESD. The main reason is just to obtain a lower voltage drop and thus lower power loss in the device.
Reversed polarity of CR2025 My bike light uses two CR2032 batteries, which I replaced with CR2025 and some padding. It stopped giving light, so I removed the batteries and measured voltage. One battery gave 2.8 V, and another -1 V. I checked with new batteries that the polarity was right and confirm that the battery had negative voltage. I replaced with a new CR2025 and the previous positive battery and the light worked again. The light was working before, so the "negative" battery had to be around 1.5V, and I did not notice it being backward inside the light when I removed it. How can a battery reverse its voltage like this? <Q> If the two cells are in series but one has significantly less capacity than the other, it can discharge first. <S> The other cell will then discharge the expired cell even below zero, as you have seen. <S> If cells are in series you should ensure they are in the same condition; do not pair a new cell with a partly used one. <S> This same effect can occur with rechargeable cells, and in that case the cell that reverses polarity can be damaged permanently. <S> You should always be careful about over-discharging a series battery. <A> Since the cells are in series, they both have the same discharge current. <S> When the cells are perfectly matched they both have an identical discharge curve. <S> If the cells are not identical, one will discharge completely before the other cell. <S> Consider the following basic simulation, where two capacitors with dissimilar capacitance are discharged in series. <A> How can a battery reverse its voltage like this? <S> The polarity of the batteries is not different, and they have the same nominal voltage; if the battery read a negative voltage, it was probably damaged.
<S> The difference between the batteries is only size and capacity, not voltage or polarity: <S> The 2025 battery is 2.5mm deep and holds a capacity of 160-165 mAh. <S> The 2032 battery is 3.2mm deep and holds a capacity of 225 mAh. <S> Source: https://www.abcdiamond.com/what-is-the-difference-between-cr2025-and-cr2032-batteries/
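The capacitor analogy from the second answer can be run as a minimal Euler simulation. The capacitances, starting voltages, and resistor are purely illustrative stand-ins for two mismatched cells, but the result reproduces the question's observation: the weaker "cell" is dragged below 0 V.

```python
# Two capacitors (standing in for mismatched series cells) discharged
# in series through a resistor. Same current flows through both, so
# the smaller one loses voltage faster and ends up reverse-charged.
C1, C2 = 1.0, 0.5        # farads -- C2 models the weaker cell
R = 1.0                  # ohms
v1, v2 = 3.0, 3.0        # both start at 3 V
dt = 0.001
for _ in range(100000):          # simulate 100 s, long enough to settle
    i = (v1 + v2) / R            # series: identical current in both
    v1 -= i * dt / C1
    v2 -= i * dt / C2            # smaller C loses voltage faster
print(round(v1, 2), round(v2, 2))   # v1 -> +1 V, v2 -> about -1 V
```

Charge conservation forces both to lose the same ΔQ, so when the total voltage reaches zero the larger capacitor still holds +1 V and the smaller one sits at -1 V, just like the -1 V cell in the question.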
Since the current through the cells is identical, one cell can end up reverse-charging the depleted cell. This is unusual (assuming that the batteries were measured in the same way) and outside of normal battery operation.
AC to DC rails sag excessively when loaded I'm just experimenting with rectifying a 120v to 12v dual winding transformer and smoothing it out to a dual rail power supply while I work towards attempting to build a clean power supply. I realized the 12v outputs were too low for my intended purpose, so I decided to just test it out with what I had on hand while I wait for a more appropriate transformer to come and other parts. I was trying to somewhat emulate the PSU circuit from here: https://www.diyaudio.com/forums/pass-labs/317803-whammy-pass-diy-headphone-amp-guide.html This is what I made with the parts on hand: With no load, the positive rail measures at ~+11.95v after the regulator. Okay, sounds good. The problem is, when I put any load on it more than a few mA, it immediately sags down to a measured ~+10.8v and varies a lot around that area. With just an LED and resistor set for 10mA current on the LED, the rail held as if nothing changed. But anything beyond that and it immediately drops out of regulation. The pre-regulator positive rail measures well over 14v (I believe almost 15v) and the regulator has a 2v dropout voltage, so I thought it'd be okay. What am I doing wrong or not understanding about this? I know to seasoned EE's (I'm just a hobbyist), the capacitor and resistor values probably seem random and not ideal (it was what I had on hand), so please note that I do know the values are probably not ideal. But, I thought it'd be plenty of smoothing through the CRCRC and work similar to the referenced circuit. Thanks! <Q> The resistors between your filter capacitors are counterproductive. <S> They cause extra voltage drop due to load current, and prevent the later capacitors from charging to peak rectified voltage. <S> Here's what happens to the unregulated output when a load of 100mA is applied (simulated in LTspice):- <S> Voltage is very smooth but has dropped to 14V, which is right on the dropout voltage of a 7812.
<S> Your regulators remove the ripple, so the filter capacitors just have to keep the unregulated voltage above the dropout voltage of the regulators. <S> The circuit will work better if you simply join all the filter capacitors together to make one larger capacitor. <S> Without resistors, and with the 2200uF capacitors paralleled to make 6600uF, the unregulated output looks like this:- <S> Now the voltage is staying above 16V so the regulator has plenty of headroom. <S> There is a small amount of ripple, but the regulator will remove it. <S> The pre-regulator positive rail measures well over 14v (I believe almost 15v) and the regulator has a 2v dropout voltage, so I thought it'd be okay. <S> If the transformer is really putting out 12VAC then you should be getting over 16V with no load. <S> Under load it will drop, but should still be good for up to ~100mA. <S> Transformers usually put out higher than rated voltage when unloaded to ensure that they meet their spec at rated load, so in practice it should be even higher. <S> Lower than expected unloaded voltage suggests either incorrect transformer winding or low mains voltage. <S> If the unregulated voltage is over 14V under load then the regulator should have enough to work on (at least at currents well below 1A). <S> A meter only reads average voltage so it won't tell you the lower ripple 'trough' voltage, but your circuit should have very low ripple if the capacitors are anywhere near the correct values. <S> So if the loaded regulator input voltage is almost 15V then something else is causing the low output voltage. <A> What am I doing wrong or not understanding about this? <S> You're using too low a voltage of transformer. <S> If the transformer ratings are typical, then each output winding is 12VRMS when the AC input is 120V. <S> The output is pulsed DC that does go to around 16V when it is unloaded, but the capacitors have to hold up the voltage between pulses.
<S> Even keeping low dropout regulators happy for a 12V output will require very short current pulses from the transformer. <S> You probably want a transformer rated for 15 or 16V per output winding (there's a rule of thumb out there somewhere, but I've forgotten it). <S> You're using high dropout regulators. <S> This just exacerbates your problems. <S> You're using filter resistors. <S> Unless you need a super-quiet supply, ditch the resistors. <S> If you do need a super-quiet supply, it's probably just for a portion of the circuit -- so tap off of the rectifier output (or use separate rectifiers) for a separate, super-small super-quiet supply. <A> I assume your transformer outputs 12 V a.c. <S> This means that after rectification you will get 12 multiplied by the square root of 2, minus the diode voltage drops. <S> Approximately 15 V, assuming about a 1 V drop per diode. <S> This is the peak voltage, and in theory it is enough to supply the linear 7812/7912 regulators and allow them to function properly (3 volts above output). <S> When a load is inserted in the circuit, the condition to have the input voltage at least 2 V higher than the output voltage of the regulator all of the time is no longer met. <S> Two reasons why this happens: the filter capacitors discharge faster; the voltage drop across the 10 ohm resistors increases. <S> Solutions: use a transformer with higher output voltage and/or use low-dropout regulators; remove those 10 ohm resistors (or use a transformer whose output voltage can compensate for the drop across the resistors).
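The numbers the answers work with can be checked with two textbook formulas: peak rectified voltage, and the standard full-wave ripple approximation ΔV ≈ I/(2·f·C). The 12 VAC winding, ~1 V per diode, 100 mA load, and 6600 µF come from the thread; 60 Hz mains is assumed (120 V territory).

```python
# Back-of-envelope check of the rectifier numbers discussed above.
import math

v_ac = 12.0              # transformer RMS output, from the question
v_diode = 1.0            # drop per bridge diode (two conduct at a time)
v_peak = v_ac * math.sqrt(2) - 2 * v_diode
print(f"peak after rectifier: {v_peak:.1f} V")    # ~15.0 V

# Full-wave ripple approximation: dV = I / (2 * f_mains * C)
i_load = 0.100           # 100 mA load
f_mains = 60.0           # assumed (120 V mains region)
c_filter = 6600e-6       # three 2200 uF caps paralleled
ripple = i_load / (2 * f_mains * c_filter)
print(f"ripple trough: {v_peak - ripple:.1f} V")  # stays above 14 V
```

This matches the thread: ~15 V peak is marginal for a 7812 with 2 V dropout even before the series resistors eat into it, which is why removing the resistors (and ideally using a higher-voltage winding) fixes the sag.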
This is great for smoothing out the AC, but terrible for causing voltage drop -- and you're causing voltage drop on top of the already inadequate AC voltage.
Driving relay directly from optocoupler, what is best? I'm designing a smart socket using an ESP-32 and a mechanical relay that can drive 5A at 240VAC. Following typical relay driving circuits, I would like to use an optocoupler between the MCU and relay. The relay that I'm using is a Panasonic APAN3105 that operates at a low coil power of 110mW (5V @ 22mA) based on the datasheet. Typical relay driver circuits use an optocoupler connected to a transistor which then drives the relay. Since I'm using a low coil power relay, can I drive the relay directly from the optocoupler, staying within the max collector current? Are there any downsides to this method? simulate this circuit – Schematic created using CircuitLab I have not selected the optocoupler yet but have narrowed down to a few which have generally the same properties (If = 20mA, Ic = 50mA)( EL817S(B)(TU)-F , PC817X2CSP9F , TLP785(GB-TP6,F(C ) What is the best way to drive a relay? <Q> Are there any downsides to this method? <S> For most relay/optoisolator combinations, this won't work because many relays need a current that is larger than the 50mA sinking current provided by most optoisolators. <S> Using the optoisolator to drive another NPN works well without much additional cost. <S> Source: https://howtomechatronics.com/tutorials/arduino/control-high-voltage-devices-arduino-relay-tutorial/ <A> As well as maximum Collector current you also have to consider the Current Transfer Ratio (CTR), maximum LED current, and the output drive capability of your MCU. <S> Fortunately the ESP-32 has a hefty 40mA output at maximum drive strength, which might be needed because standard optocouplers typically have a minimum CTR of only 50%. <S> To ensure low Collector-Emitter saturation voltage the optocoupler LED current should also be much higher than (e.g. double) IC/CTR. <S> The PC817XN for example has 50% minimum CTR and a 50mA Absolute Maximum LED current.
<S> That leaves no room for increasing drive current to keep the transistor in saturation. <S> As CTR has a wide tolerance of 50-600% you might get away with it on a prototype, but not in production. <S> For reliability you probably want to keep the LED current below 25mA, and then you want a minimum CTR of about 200%. <S> Therefore you should use the PC817X3 (rank mark C) or PC817X4 (rank mark D). <A> In general, you want to make sure that you can get enough current through the LED from the microcontroller (taking into account the CTR of the coupler), but I expect you are well within that limit. <S> Also, make sure that the maximum emitter voltage is greater than the coil voltage, but at 5V that should also be pretty easy. <A> If you are looking for a PUSH-PULL driver, sometimes it isn't very cheap. <S> Ways I know (I tested): <S> FOD8342 works without any other components (works up to 3MHz output). <S> TC4421/4422 with a cheap optocoupler (about 1.5MHz). <S> UDN2981 needs trigger resistors and a cheap optocoupler (a few kHz). <S> A tip: never use a uC (microcontroller) without isolation (power source + IO systems). <S> I hope it helps someone, best regards!
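The CTR margin arithmetic from the first part of that answer can be written out directly; the 22 mA coil figure is from the question, and the 2× margin and CTR rank values are those quoted in the answer:

```python
# Drive-margin check: LED current needed is roughly 2 x Ic / CTR
# to keep the output transistor well saturated.
def led_current_needed(i_coil, ctr_min, margin=2.0):
    """Worst-case LED current to saturate the optocoupler output."""
    return margin * i_coil / ctr_min

i_coil = 0.022           # 22 mA relay coil (APAN3105 figure from question)

# Standard rank with 50% minimum CTR:
i_led = led_current_needed(i_coil, 0.50)
print(f"50% CTR needs {i_led*1000:.0f} mA LED current")   # 88 mA: over the 50 mA abs max

# Rank C/D part with 200% minimum CTR:
i_led = led_current_needed(i_coil, 2.00)
print(f"200% CTR needs {i_led*1000:.0f} mA LED current")  # 22 mA: workable
```

This is why the answer steers toward the higher-CTR rank bins: the 50%-CTR part simply cannot be driven hard enough within its own LED ratings.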
Yes, as long as the relay coil current is less than the maximum collector current of the optocoupler then this should work.
Traces in connector's solder keep out area I'm designing with a Molex 87438-0443 connector whose PCB footprint includes a 'solder keep out area' between each pair of copper pads for each connector lead. It appears that the manufacturer wanted a discontinuity between the connector's signal side (i.e. the part of the pins jutting out) and its mechanical side (i.e. the bottom-side part of the pins) though I don't have a solid guess as to why. Is there any harm in running a ~10 mil, solder-masked trace connecting these two pads? <Q> The note says of the gap "TO PREVENT SOLDER LIFTING" not entirely clear, but I assume that means that one end can rise up of there's a lot of solder below it. <S> It's clearer in the 3D model <S> that there's a matching indent in the contact, so that there isn't a continuous gap for the solder to wick along. <S> The drawing notes that the resist should be level with the pad (by making the window in the resist slightly larger than the pad), so putting a trace between the pads would raise the resist above the pad. <S> If it lined up with the indent in the contact, that wouldn't matter, but it may not align perfectly, and hold the contact up from the pad. <A> There are a few reasons that the manufacturer could have in mind for a no copper zone: <S> A copper trace could change the way the solder flows during a reflow and cause different heating, this may cause some of the pins to be cooler or hotter than desired. <S> The trace is already connected through the connector conductor, so a trace between is redundant. <S> The trace could weaken the board if great force is applied to the connector. <S> Not sure if these are right, but just my experience with connectors. 
<S> I would follow the manufacturer's recommendations, because they've built and tested the best ways to apply their connector to a PCB. <A> I am fairly certain it is to relieve stress on the pin, since it's a long pin and gets worked pretty hard when the connector is being plugged or unplugged or when the PCB is flexed. <S> The simple way is just to say that it lets the pin flex in the middle independently of the PCB. <S> But if you study the way adhesives are peeled or sheared, you will find that when being peeled or sheared, the only part of the adhesive that is acting to hold the object down is the adhesive that is right at the edges. <S> Peeling strains the adhesive right at the peeling edge, but not the adhesive behind it. <S> Shearing strains the adhesive on all edges that run perpendicular to the direction of the shearing force. <S> That means that having "more edge", whether in the form of a wider edge or multiple edges, helps withstand shearing and peeling better. <S> I've never quite seen the case of multiple edges behind each other like in this connector, but I would think it would also apply.
The solder paste during reflow will run up the sides of the contacts, and will tend to pull the joint together, but if there were a continuous pad under the full length of the contact, there's a risk that the solder can accumulate at one end of the contact, from mechanical imbalance in the connector, air currents in the oven, initial placement etc., and not produce a uniform joint. It's to ensure that the connector is pulled down flat to the board.
How to detect IR pulse using phototransistor and voltage comparator (subtracting constant light) I want to detect an IR pulse with a phototransistor. I made the following schematic, using the voltage comparator LM339: The problem is that this will work in the dark, but if there is some ambient light, the OUT LED will be ON anyway. What I want is a way to automatically subtract the "constant" ambient light, and have the OUT LED ON just during a pulse of light to the phototransistor. Thanks a lot in advance for your help. <Q> The secret of successful IR transmission in ambient light, with its mix of many frequencies, is modulation. <S> The transmit source produces an on/off carrier frequency and then "information" is introduced on this carrier by switching the carrier itself on and off at some lower frequency with a recognizable pattern. <S> The receiver uses a band pass filter tuned to the transmitter carrier frequency that is then able to recognize the "information" modulation and output a waveform that represents that information. <S> The whole concept is very similar to how a radio channel can work despite the fact that the ether is full of thousands of radio signals. <S> The IR transmission scheme I describe is exactly how common TV remotes work. <S> Check out something like a TSOP1738 and other similar devices. <A> This design idea takes advantage of LM339 bias current to generate a DC offset via R1. <S> I have never tried this idea before... it's a design-by-spec-sheet. <S> R1=470k gives an offset voltage of about 10mV. <S> This offset is about twice the inherent +/-5mV offset of the comparator itself. <S> That's not much noise immunity, but may be sufficient. <S> You can increase the offset by raising the R1 value - sensitivity to light pulses will be poorer. <S> This circuit may be prone to oscillation with a poor layout (I'd expect trouble on a breadboard). <S> C1 can be changed to accept pulses of different durations.
<S> C1=500pF is about appropriate for IR pulses of 50 microseconds. <S> A really sensitive phototransistor may saturate in daylight and not detect light pulses... if so, R4 must be reduced in value. <S> Note that the comparator output idles at logic "high", and pulses to logic "low" when the photo-transistor sees a pulse. <S> simulate this circuit – Schematic created using CircuitLab Edit: Another simpler version. <S> R1 again creates about 10mV offset by means of LM339 input current. <S> C1 (below) creates an average reference voltage proportional to ambient light hitting the phototransistor. <S> An RC filter (C2,R5) is added to the DC supply voltage in an effort to keep supply noise from triggering an output. <S> A pulse of light pulls the comparator output down from +5V to ground. <S> The RC time-constant (R1*C1) determines sensitivity to ambient light changes. <S> Decrease R4 to decrease sensitivity to light pulses: simulate this circuit <A> AC couple the phototransistor with a series cap. <S> This won't work for DC (i.e. a long IR pulse), obviously. <S> Or add a second phototransistor anti-parallel to the current one, pointed away from the IR. <S> Or use it on the comparator inverting input so the reference voltage rises with ambient light. <S> It needs to match and track the first phototransistor though, which may not be workable. <S> You could also use a visible-blind phototransistor instead. <S> It will look opaque black. <A> Try this: simulate this circuit – Schematic created using CircuitLab
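The "R1=470k gives about 10mV" claim follows directly from the ~25 nA LM339 input bias current mentioned in the thread, and is easy to verify:

```python
# DC offset developed by the LM339 input bias current flowing in R1.
# The ~25 nA bias figure is the one quoted in this thread.
def bias_offset(i_bias, r1):
    """Offset voltage (V) generated by comparator bias current in R1."""
    return i_bias * r1

i_bias = 25e-9                       # LM339 typical input bias current
for r1 in (470e3, 1e6):
    mv = bias_offset(i_bias, r1) * 1e3
    print(f"R1 = {r1/1e3:.0f}k -> offset {mv:.1f} mV")
# 470k gives ~11.8 mV ('about 10 mV'); raising R1 raises the offset,
# at the cost of sensitivity to light pulses, as the answer notes.
```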
You could use an old TV remote as your IR source and then get an IR receiver module that has all the receiver logic built into a single chip. The LM339 is a comparator that has a somewhat unique characteristic: it uses a PNP input stage that has nearly constant 25nA bias current.
Batteries in series discharging unevenly Hi, I am quite new to electronics. I had a project powered by 3xAA batteries in series (~4.5 V) with an LDO dropping it to 3.3 V. It went flat quicker than expected, but what was very surprising to me was that the voltages of the three batteries were so different: ~0.5 V, ~0.6 V and ~1.5 V, total ~2.6 V. Is it normal for a battery pack to discharge so unevenly? Can someone explain why? If it is relevant, it sat most of its time in the μA range with hourly very short spikes to ~150 mA. <Q> Assuming alkaline / zinc chloride / similar chemistry: If you got e.g. 0.5, 0.6, 1.1 then it could be explicable by the fact that there is very little energy in the area under about 1V, so that if one cell had say 5% more energy content than the other two it would still be on the "final approach" to fully flat while the other two were essentially completely exhausted. <S> However, a cell at 1.5V has the majority of its energy capacity remaining - probably 90%+. <S> For one cell to be at 1.5V while the others are fully exhausted, they would have had only 5% - 10% of their new energy content at the start of discharge. <S> So this is not a batch variation - two of the batteries were very close to dead at the start of discharge OR something else has happened not mentioned in your question. <S> If the batteries are 'alkaline' they would retain a substantial portion of full charge for many years. <S> If they were Zinc Chloride or another similar chemistry then they may well have very little capacity after say 2 years of shelf life. <A> You say you have an LDO but don't give its part number or the amount of quiescent current. <S> Many/most beginners don't realize that voltage regulators continue to draw current even when the load doesn't require any. <S> This is the most likely cause of the rapid discharge of the batteries. <A> You should use 3 cells from the same manufacturer, same model and same chemistry, out of the same batch or the same package.
<S> Of course the cells should not have been used before. <S> If all cells have the same history and the manufacturing tolerances of capacity are small, such an uneven discharge should not happen. <S> But when the history of those 3 cells was different, uneven discharge is possible.
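The quiescent-current point can be put in numbers. The question never names the LDO, so the two quiescent currents below are assumed for illustration: a classic non-micropower regulator versus a micropower one.

```python
# Rough battery-life arithmetic for the quiescent-current explanation
# above. The LDO quiescent currents are assumptions, not from the
# question (the part number was never given).
def runtime_days(capacity_mah, i_avg_ma):
    """Days of runtime for a given capacity and average current draw."""
    return capacity_mah / i_avg_ma / 24.0

capacity = 2500.0            # typical alkaline AA, mAh
i_project = 0.010            # the project's own average draw, ~10 uA

# A classic regulator with ~5 mA quiescent current dominates the load:
print(f"~{runtime_days(capacity, i_project + 5.0):.0f} days")

# A micropower LDO (~1 uA quiescent) would leave the batteries
# limited mainly by shelf life instead (thousands of days on paper):
print(f"~{runtime_days(capacity, i_project + 0.001):.0f} days")
```

With a 5 mA quiescent current the pack is gone in about three weeks even though the project itself draws microamps, consistent with "it went flat quicker than expected."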
The different voltages on the batteries are a little unusual but not really important.
Non-inverting amplifier with negative supply rail simulate this circuit – Schematic created using CircuitLab Is the formula Vout = Vin * (1 + R2 / R1) not valid for this case? For example, I'm feeding 2.567V to the non-inverting input and getting 2.976V at the output! I'm certainly doing something wrong somewhere... How to calculate the output of a non-inverting amplifier with a negative supply rail? Edit: What I want from this configuration is to provide -1.25V to an LM338 linear regulator to get 0V at its output. James mentioned in his answer to remove R3 from the first schematic, and I did that with a minor gain modification: simulate this circuit At 1V the output is 0V and when non-inverting input is 0V the output of opamp is at 1/4 of Vref 1.250V. What's this configuration's name? Is it an inverting configuration? What's the equation to calculate the output? <Q> Remove R3 and it'll work as you require. <S> Edit <S> When input is 0V, the output is -1.25V. <S> Perhaps missing the minus sign off was just a typo. <S> Also assuming Vref is still +5V and that's a dash not a minus sign. <S> That is a non-inverting amplifier, the input voltage goes into the non-inverting input. <S> If the input is +ve with respect to Vref then the output is +ve with respect to Vref. <S> If the input is -ve with respect to Vref then the output is -ve with respect to Vref. <S> It's a more rarely seen non-inverting amplifier because Vref isn't at 0V. <S> You'd probably have trouble finding a 4k resistor. <S> I'd recommend R1=12k and R2=3k to get your 4X ratio (E24 series). <S> Equation is:- Vout = (Vin - Vref)(R2/R1) + Vin, where in your case Vref = +5V. <S> You seem to perhaps be designing a variable voltage power supply, are you using a PIC to control it? <A> Since this looks like a good homework problem I will only give you a general method. <S> Since it is variable in this case let's call it \$V_S\$.
<S> Second, you need to determine how much current flows from R1 and R3 toward the inverting input. <S> The inverting input must be at the same voltage as the non-inverting input, which is \$V_S\$. <S> So you have a simple circuit problem, with the 5V source, R1, R3, and a virtual short to \$V_S\$. <S> Analyze the circuit to find the current flowing toward the virtual short to \$V_S\$, expressed as a function of \$V_S\$ and the values of R1 and R3. <S> Third, you know that the current flowing toward the virtual short does not actually flow into the inverting input... there is no current into the inverting input for an ideal op amp. <S> So, that same exact current must flow to the output through R2. <S> Set up a simple KVL problem with the virtual source of \$V_S\$ at the inverting input, R2, and \$V_{OUT}\$. <S> The only unknown here is \$V_{OUT}\$, so you can use a little algebra to find the formula for \$V_{OUT}\$ as a function of \$V_S\$ and the resistor values. <A> Superposition is your friend here. <S> Instead of trying to analyze the circuit all at once (though that certainly is possible), break it into 2 familiar problems: <S> 1) A non-inverting amplifier with \$G = 1 + \frac{R_2}{R_1||R_3}\$ <S> 2) An inverting amplifier with \$G = -\frac{R_2}{R_1}\$ <S> Then the total output is the sum of the individual outputs. <S> The negative supply rail has nothing directly to do with the output, except that a negative rail is required to get a negative output. <S> However, if all you're trying to do is get -1.25V at the output, just remove R3, ground the non-inverting input, and apply 1.25V to the inverting input.
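The superposition method can be checked numerically. The values below come from the thread (R2/R1 = 1/4, e.g. R1 = 12k, R2 = 3k, Vref = +5V); removing R3 is modelled as R3 = infinity:

```python
# Superposition: output = Vin through the non-inverting gain, plus
# Vref (behind R1) through the inverting gain, summed.
def vout(vin, vref, r1, r2, r3=float('inf')):
    r13 = r1 if r3 == float('inf') else (r1 * r3) / (r1 + r3)  # R1 || R3
    g_noninv = 1 + r2 / r13        # gain seen by Vin (non-inverting path)
    g_inv = -r2 / r1               # gain seen by Vref (inverting path)
    return vin * g_noninv + vref * g_inv

# Reproduces the measurements in the question (second schematic, no R3):
print(vout(1.0, 5.0, 12e3, 3e3))   # 0.0   -> 1V in gives 0V out
print(vout(0.0, 5.0, 12e3, 3e3))   # -1.25 -> 0V in gives -1.25V out
```

With R3 removed the two gains collapse to 1 + R2/R1 and -R2/R1, which is exactly the "Vout = (Vin - Vref)(R2/R1) + Vin" equation from the first answer.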
First, you need to know the voltage at the non-inverting input.
What advantages do absolute encoders gain by employing Gray code transmission instead of binary code? I have read about the Gray code, but in practice I don't see what its advantage over binary code is. Some absolute encoder manufacturers offer both types, so one has to decide before buying one. I, for example, need to measure the rotation angle of a very slowly rotating rod that is often held fixed. In what circumstances or applications does Gray code output have an advantage? <Q> Imagine the encoder is a 12-bit encoder sitting right at the mid-scale transition between 0x7FF and 0x800. <S> If the inner workings of the encoder consist of something like a code wheel with 12 independent photodiodes, all 12 bits would have to change at once, for a negligible movement. <S> Since there are mechanical tolerances, some of the bits would change before others and there would be a lot of confusion at that particular angle. <S> The same problem exists at every other position where more than one bit changes at a time. <S> Using a Gray code for the code wheel completely eliminates that problem provided that the tolerances don't exceed a fraction of an LSB, since only one bit changes at a time, and worst case your negligible motion results in a change equal to the resolution of the encoder. <S> Modern high resolution encoders can use other methods (such as a very fast camera reading a coded strip) and take care of sampling properly so you always get a result that makes sense. <A> In Gray code, the transition between two adjacent values only changes a single bit. <S> This is a huge advantage in any sort of mechanical or optical encoder, because it's virtually impossible to make two or more bits change state at exactly the same time under all circumstances. <S> This becomes even more important if you're going to sample the data by, say, capturing it in a register.
<S> There's some chance that the data will be changing at the same time that you clock the register, leading to metastability in the bit(s) that are changing. <S> It's easy to translate from Gray to binary, and if an encoder offers a binary output, most likely they're simply doing the translation for you. <A> You've mentioned that you see the value fluctuating by 1. <S> Let's assume for now that this is due to physical limitations in the measurement. <S> For a binary code, you could get unlucky in some situations. <S> Say you are stopped almost exactly on the transition between values 15 and 16 (in binary, 01111 and 10000). <S> So it is switching between the two values. <S> However, the bits cannot all switch at exactly the same time, for a number of reasons (mechanical/optical/electrical). <S> The time that they switch can be made to be very close, but sometimes the next circuit will get a value with only some of the bits from each value. <S> This is like randomly choosing a bit value for each bit. <S> For example, it could read 01001, or a value of 9. <S> This is not even close to the desired values of 15 or 16. <S> On the other hand, using a Gray code, only a single bit will change between 15 and 16. <S> I don't know offhand what they would be, but for the sake of example take the two Gray coded values to be 01011 (15) and 11011 (16). <S> Now, for each bit, randomly choose between the two options, and you will see that the only possibilities are the two desired values. <A> As mentioned here above, Gray codes only allow for a single bit (which may or may not be the LSb) to change at a time. <S> This prevents value glitches with multiple bits changing at the same time, especially with combinatorial logic. <S> This was really important many years ago when I started designing motion control and sensing systems that were faster than the CPUs we had available.
<S> Much of the algorithms were implemented in hardware with discrete logic and, if we were lucky, PLAs (PALs, GALs, primitive FPGAs or FPLAs). <S> Another advantage is that crosstalk and line noise are greatly reduced. <S> There will always be the potential for jitter to occur. <S> If an encoder is stopped right on the edge of a transition between two stable points, a single bit may well bounce between a 0 and a 1. <S> The cause can be something as simple as vibration from a running motor performing a function that has nothing to do with the encoder. <S> The higher the resolution of the encoder, the more this is likely to happen. <S> It's just the nature of the beast. <S> A good example would be a stepper motor with 200 steps per revolution using a 10-bit encoder. <S> Motor steps and encoder steps will never, ever line up exactly.
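The single-bit-change property the answers describe can be demonstrated with the standard reflected-binary conversions:

```python
# Standard reflected-binary (Gray) code conversions.
def bin_to_gray(n):
    return n ^ (n >> 1)

def gray_to_bin(g):
    n = 0
    while g:              # successively fold the higher bits back in
        n ^= g
        g >>= 1
    return n

# Every adjacent pair of positions on a 12-bit encoder differs in
# exactly one bit of the Gray code:
for n in range(2**12 - 1):
    d = bin_to_gray(n) ^ bin_to_gray(n + 1)
    assert d != 0 and d & (d - 1) == 0    # exactly one bit set

# The mid-scale case from the first answer: 0x7FF -> 0x800 flips all
# 12 bits in plain binary, but only 1 bit in Gray code.
print(bin(0x7FF ^ 0x800).count('1'))                            # 12
print(bin(bin_to_gray(0x7FF) ^ bin_to_gray(0x800)).count('1'))  # 1
```

This also shows why offering a binary output is cheap for the manufacturer: `gray_to_bin` is a handful of gates or XORs, so the encoder can do the translation internally.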
In Gray code, since only one bit is changing at a time, the ambiguity is between two adjacent values and the absolute error is limited to ±1 count.
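The single-bit-change property discussed above is easy to verify in code. This is a minimal, hardware-independent sketch using the standard reflected-binary Gray code; the helper names are mine, not from any of the answers.

```c
#include <assert.h>
#include <stdint.h>

/* Convert binary to reflected-binary Gray code and back. */
static uint32_t bin_to_gray(uint32_t b) { return b ^ (b >> 1); }

static uint32_t gray_to_bin(uint32_t g) {
    uint32_t b = 0;
    for (; g; g >>= 1)
        b ^= g;          /* XOR-fold the shifted copies back together */
    return b;
}

/* Hamming distance: how many bits differ between two codes. */
static int hamming(uint32_t a, uint32_t b) {
    uint32_t x = a ^ b;
    int n = 0;
    for (; x; x >>= 1)
        n += (int)(x & 1u);
    return n;
}
```

For 15 and 16 this gives the Gray codes 01000 and 11000, whose Hamming distance is 1, whereas the plain binary codes 01111 and 10000 differ in all five bits — which is exactly why a misaligned sample of a binary encoder can produce a wildly wrong value while a Gray encoder is off by at most one count.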
Producers consumers balance in the grid In a large distribution grid, it is said that consumers and producers must be in balance; what that means physically, in a correct (not simplistic) mathematical formulation, is not clear: perfect balance doesn't exist in nature, and I'm fighting to understand how the out-of-balance system either stores or retrieves energy, and on what time scale the balancing is done. What are the short-term energy flows? When I turn on a light switch, power flows in the instant. Where is it taken from? How much energy is present in the distribution network itself at any given time? Does it fluctuate? Is there a good yet accessible description of the elasticity of the power system? Does it vibrate? <Q> Grid frequency is where the magic hides.... <S> There is energy storage in the inertia of all that spinning steel, and more on the other side of those throttle valves, in the potential energy of hot water trying to be steam. <S> The grid frequency is really the integral of the difference between generation and load, divided by the total mass moment of inertia in the system: <S> $$\omega=\frac{1}{k}\int{(\text{Generation} - \text{Demand})\, dt}$$ <S> You set the base-load generators to go to full output if the frequency drops below say 50.5Hz, the mid-cost stuff to throttle up at 50Hz, and the peaking plants (expensive to run) to load up if the frequency drops below say 49.8Hz (there are way more gradations than this). <S> The effect is that the base load runs at full power, the mid-cost stuff tracks the demand, and the peaking plants idle until the mid-cost stuff fails to meet demand, at which point they load up. <S> Reactive power flow controls the system voltage, and by controlling this you can control the load currents in the transmission network. <S> The dynamics are actually quite interesting, especially during fault conditions, and there are whole books written on that subject. <A> The grid can be imagined as - and in some cases is - a single generator. 
<S> The generator has a speed governor to maintain the frequency. <S> The governor has a certain reaction time, which means that if the load suddenly increases the frequency will drop, and if the load suddenly decreases the frequency will rise. <S> Figure 1. <S> A mechanical governor. <S> The vertical shaft is driven by the engine, and the faster it goes, the more the weights are thrown outward and upward (against gravity), causing the lever arm to reduce the throttle. <S> Source: Centrifugal governor . <S> I worked on one of these on a 1 MVA generator and, with the aid of a reed frequency meter, was able to set the frequency of the generator very close to 50 Hz. <S> In a large distribution grid, it is said that consumers and producers must be in balance; Correct. <S> In your basic grid network there is no storage. <S> The generators can only export if there is a load. <S> The generators may be spinning and producing voltage, but if there is no load then no current will flow. <S> The energy source (steam, diesel, hydro, etc.) will have to be reduced quickly to prevent the frequency from increasing. <S> perfect balance doesn't exist in nature <S> Yes it does. <S> The floor beneath me is providing an upthrust which exactly matches the force of gravity on my body. <S> ... <S> and I'm fighting to understand how the out of balance system either stores and retrieves energy, ... <S> It doesn't. <S> ... <S> and at what time scale the balancing done. <S> That depends on the physical governor. <S> What are the short term energy flows? <S> When I turn on a light switch, power flows in the instant. <S> Where is it taken from? <S> From the generator via the grid. <S> How much energy is present in the distribution network itself at any given time? <S> Does it fluctuate? <S> Is there a good yet accessible description of the elasticity of the power system? <S> Does it vibrate? <A> How does it store energy at electrical speeds? 
<S> It doesn't need to store an energy reserve if it can shed load. <S> Power is energy/time. <S> So if it can reduce power, that is as good. <S> Fortunately, power is voltage x current. <S> In a nominally constant-voltage system, the customer largely decides current, but the supplier decides voltage. <S> It can shed load by reducing voltage. <S> If voltage sags, generators are able to make proportionately more current, which is what the customer is really drawing. <S> Many customer loads, however, are resistive, or at least linear. <S> So this provides an insta-shed mechanism. <S> It can shed capacity by increasing voltage: the reverse of the above. <S> But it is also pushing power farther and farther out across the grid, and that consumes power two ways: transmission losses and phase disagreement with faraway generators. <S> Because of the speed of light. <S> Consider two cities 600km apart on the same grid; that's 2 milliseconds at the speed of light. <S> That is 36 degrees (at 50 Hz) or about 43 degrees (at 60 Hz) on the AC sine wave. <S> So if power abruptly changes direction due to load changes, that is going to cause a lot of wire heating.
Energy flow is determined by the load.
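The integral relation quoted in the first answer can be illustrated with a toy discrete model. This is my own sketch: the inertia constant `k` and the power figures are made-up illustration values, not real grid parameters.

```c
#include <assert.h>

/* Toy model of grid frequency as the integral of the generation/demand
 * imbalance divided by the system's rotational inertia constant k,
 * per the formula quoted above.  With a constant imbalance the
 * integral collapses to a simple product. */
static double frequency_after(double f0_hz, double k,
                              double imbalance_watts, double seconds) {
    /* df/dt = (P_generation - P_demand) / k */
    return f0_hz + imbalance_watts * seconds / k;
}
```

With balanced generation and demand the frequency holds; a sustained deficit drags it down until governors throttle plants up, which is exactly the droop-scheduling behaviour the answer describes.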
Linear regulator doesn't keep the right voltage when connected to ESP8266 I am trying to use a MCP1700-3302E LDO to regulate voltage from a single-cell LiPo battery to my ESP8266 (Wemos D1 package). I intend to use the LiPo down to 3.5V, and since the ESP8266 requires 3.3V, that leaves 0.2V headroom. The MCP1700 has a dropout voltage right under that requirement (178mV @ 250mA). For testing, I hooked it all up on a breadboard, powering from my PSU to monitor voltage and current. This is the exact schematic of how it is all hooked up: As you can see, I added two 1uF (105) ceramic capacitors to the input and output of the regulator, just like the datasheet suggests. Also, I use two 470uF capacitors (because I don't have a larger one at the moment) to handle the current spike during ESP8266 booting. The ESP8266 might spike up to 435mA, but the MCP1700 has a current limit of 250mA, so without these capacitors the ESP won't boot. After it boots, it runs a simple onboard LED blink sketch. Now, the problem is that after booting, the voltage on the ESP8266 3V3 pin drops to 3.1V. The PSU provides 3.5V, I double checked - no drop there. And the ESP8266 consumes around 70mA with this sketch, which is way below the 250mA limit of the MCP1700. According to the datasheet, the dropout voltage at 70mA draw should be around 45mV, but in reality it is more like 400mV (3.5V before the LDO, 3.1V after the LDO). I know the ESP8266 can function with slightly lower voltage, but I need a stable 3.3V supply for it because I'll be making some analog measurements, and the ESP needs a stable reference voltage for that. I cannot figure out why this is happening. I am using proper, self-made jumper cables, not the cheap stuff from China. And I am measuring voltage directly on the MCP1700 legs (like shown in the schematic), so the breadboard shouldn't be at fault either. I tried replacing all the components, including the regulator and the ESP8266 (I have plenty of both), but all of them show the same results. 
If I increase the supply voltage on my PSU to 3.7V, then I get the correct 3.3V after the LDO, but the whole point of this setup is to use a voltage as low as 3.5V, and according to the datasheet, this regulator should be able to provide that easily with such a small current. What am I missing here? <Q> The datasheet states that the minimum Vin must meet 2 conditions, one of them being: \$V_{in} > (V_R + 3\%) + V_{DROPOUT}\$ <S> which for a 3.3V regulator becomes \$V_{in} > (3.3V + 3\%) + V_{DROPOUT} \approx 3.4V + V_{DROPOUT}\$. <S> So, that leaves 100 mV to be "used" for dropout. <S> You cannot use Figure 2-12 and Figure 2-13 from the datasheet to determine the dropout voltage, because those graphs apply under the following conditions: "Unless otherwise indicated: VR = 1.8V, COUT = 1 μF ceramic (X7R), CIN = 1 μF ceramic (X7R), IL = 100 μA, TA = +25°C, VIN = VR + 1V." <S> And you do not apply Vin = 3.3V + 1.0V to the regulator. <S> Moreover, the values shown in the graphs are typical values. <S> You may happen to have an IC that deviates towards the maximum worst-case dropout voltage. <S> (For \$I_L\$ = 200mA, the worst-case/maximum value differs from the typical value by a factor of 2.3 (!!).) <S> I cannot (yet) find what dropout voltage applies to this situation, but I think an input voltage of 3.5V doesn't satisfy the condition mentioned first in this answer. <A> The max dropout is 350mV at 25 degrees C, but that is provided you stay within the 250mA limit, and it gets worse at high junction temperature. <S> Measurements made by others have noted almost 300mA typical peak draw during packet operations. <S> The ~1000uF of bulk capacitance only goes so far with that kind of draw. <S> The average current may be only 70mA, but that doesn't help here. <S> Bottom line: your regulator is inadequate; replace it with a 1A type, or at least 500mA. 
<S> Consider turning the radio off during ADC operations <S> (though the built-in ADC in that chip is very iffy accuracy-wise). <S> Edit: Also be sure the 1uF capacitors are very close to the regulator. <S> You cannot reliably use a solderless breadboard in many cases for this kind of circuit. <S> Resistance must be in the 1 \$\Omega\$ range or less, and inductance should be minimized. <A> I would take the ESP8266 out of the equation and characterize your LDO with a resistive load. <S> Try replacing the ESP8266 with 3 100 Ohm resistors in parallel on the regulated output of the LDO and measure the drop-out voltage. <S> Any single resistor will do as well, of course; 100 ohms was chosen only as a convenient junk-box value.
The ESP is probably drawing spikey current that the regulator has trouble supplying.
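The datasheet condition quoted above is easy to check numerically. This small helper is my own sketch; the 350 mV worst-case dropout used below is the figure mentioned in the answer, taken here as an assumption.

```c
#include <assert.h>
#include <math.h>

/* Minimum input voltage per the condition quoted above:
 * Vin must exceed (Vr + 3%) plus the dropout voltage. */
static double ldo_min_vin(double vr, double vdropout) {
    return vr * 1.03 + vdropout;
}
```

With Vr = 3.3 V the 3% margin alone already demands about 3.4 V, so only ~100 mV of the asker's 3.5 V budget is left for dropout; at the worst-case 350 mV dropout, 3.5 V in is clearly insufficient.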
Approach for receiving unknown length of data I'm using an STM32F407 and I would like to receive data of unknown length. My current approach uses a fixed known size to receive the data from UART5. How do I change it to receive data of unknown length?

/**
 * @brief This function handles UART5 global interrupt.
 */
void UART5_IRQHandler(void)
{
    if (!bytesAvailable(&_writeBuffer)) {
        __HAL_UART_DISABLE_IT(&huart5, UART_IT_TXE);
    }
    HAL_UART_IRQHandler(&huart5);
    /* USER CODE BEGIN UART5_IRQn 1 */
    /* USER CODE END UART5_IRQn 1 */
}

void HAL_UART_TxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart5) {
        tx_finished = 1;
        HAL_UART_Receive_IT(&huart5, _readBuffer.buffer, expected_replay_length);
    }
}

void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    if (huart == &huart5) {
        if (!isFull(&_readBuffer)) {
            uint16_t counter = 0;
            while (counter < expected_replay_length) {
                push_single(&_readBuffer, _readBuffer.buffer[counter++]);
            }
        }
    }
}

<Q> I'm not a professional, but I guess the only way is to receive 1 byte at a time, and copy it to a ring buffer or other buffer that can store multiple messages (or 1 if you can handle one message fast enough). <S> Then you have two possibilities: <S> If it is easy to find out whether the end of a message has been received (for example when it ends with a certain value, or if you store the expected number of bytes so you can check against that value), then verify this in the interrupt and set a Boolean. <S> This Boolean can be checked in the main (non-interrupt) code to process the message and clear it. <S> A ring buffer is ideal for this. <S> If it is not easy to find out the end of a message, then set a Boolean that a new byte has been received, and in the main code verify whether a complete message has been received; if so, execute it and delete it. 
<S> Pseudo code for possibility 1:

Globals:
volatile uint8_t ringBuffer[MAX_BUFFER];
volatile uint8_t …  // Additional variables to keep track of ring buffer space
volatile bool uartMessageCompleted = false;

Initially:
HAL_UART_Receive_IT(1 byte)

Callback:
void HAL_UART_RxCpltCallback(UART_HandleTypeDef *huart)
{
    Store byte in ring buffer
    HAL_UART_Receive_IT(1 byte)
    if (isCompleteUartMessageReceived()) {
        uartMessageCompleted = true;
    }
}

bool isCompleteUartMessageReceived()
{
    return true if a complete message is received
}

In main (or a function called from main):
void main()
{
    …
    if (uartMessageCompleted) {
        executeUartMessage();  // Implement yourself
        remove message from ring buffer
    }
    …
}

<A> A simple configuration would set the DMA to detect the maximum possible UART message length you expect to handle, which would trigger the UART Rx Complete callback. <S> In addition, you would enable the UART IDLE interrupt, and when that interrupt is triggered, you would force the same transfer-complete callback (on some STM32s this is achieved by disabling the associated DMA stream), <S> but this time checking the DMA's NDTR (Number of Data Register) to read the received number of bytes in the UART Rx Complete callback. 
<S>
/**
 * \brief Global interrupt handler for USART2
 */
void USART2_IRQHandler(void)
{
    /* Check for IDLE flag */
    if (USART2->SR & USART_FLAG_IDLE) {   /* We want the IDLE flag only */
        /* This part is important:
         * clear the IDLE flag by reading the status register first,
         * followed by reading the data register. */
        volatile uint32_t tmp;            /* Must be volatile to prevent optimizations */
        tmp = USART2->SR;                 /* Read status register */
        tmp = USART2->DR;                 /* Read data register */
        (void)tmp;                        /* Prevent compiler warnings */
        DMA1_Stream5->CR &= ~DMA_SxCR_EN; /* Disabling DMA forces a transfer-complete interrupt if enabled */
    }
}

<S> This blog post has a more detailed example and explanation. <S> The post by ST detailing the implementation seems to have been lost during their migration, but try this link and click on the attachment to see a code example. <S> We can use a very useful feature of the UART peripheral, called IDLE line detection. <S> An idle line is detected on the RX line when there is no received byte for more than 1 byte time. <S> So, if we receive 10 bytes one after another (no delay), IDLE line is detected after the 11th byte should have been received but was not. <S> We are able to force DMA to call the transfer-complete interrupt when we disable the DMA stream by hand, i.e. by clearing the enable bit in the stream control register. <S> In this case the DMA will raise an interrupt (if enabled) and we can read the number of bytes still to be received from the NDTR register of the DMA stream. <S> From there, we can calculate how many elements we have already received. <A> First of all, in embedded systems there is always a maximum data size -- the size you preallocate in the MCU RAM. <S> (Unless you process the data gradually, in other words use streaming data. <S> This is more advanced.) <S> The normal case can still be that the data doesn't fill all available space. 
<S> One way is to stop receiving data based on what has been received so far, for example looking for a \r\n pattern that indicates a newline. <S> To do this, you must examine each byte when it has arrived. <S> The other way is to use an inter-byte timeout. <S> Set a hardware or software timer after each received byte. <S> If a new byte arrives, set the timer again to postpone it. <S> When the timer triggers, reception is done. <S> In both cases, you also need to handle the case that your buffer fills up. <S> The action depends on your application; consider reception complete, overwrite the received data, throw away all data until a newline is received, or trigger some error handling.
The best way, and the way recommended by ST in a blog post on their old forum, is to use IDLE line detection linked to the DMA controller.
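The ring-buffer approach from the first answer can be sketched in a hardware-independent way. This is my own minimal illustration: the buffer size and the `'\n'` terminator are arbitrary choices, and in a real system `rb_push()` would be called from the RX interrupt while the main loop polls the `msg_complete` flag.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>

#define BUF_SIZE 64

typedef struct {
    uint8_t data[BUF_SIZE];
    volatile size_t head, tail;   /* head: next write, tail: next read */
    volatile bool msg_complete;   /* set when a terminator arrives */
} ringbuf_t;

static ringbuf_t uart_rb;         /* zero-initialized: empty buffer */

/* Called with each received byte (from the ISR in a real system). */
static bool rb_push(ringbuf_t *rb, uint8_t byte) {
    size_t next = (rb->head + 1) % BUF_SIZE;
    if (next == rb->tail)
        return false;             /* buffer full: caller decides the policy */
    rb->data[rb->head] = byte;
    rb->head = next;
    if (byte == '\n')             /* end-of-message marker (assumed) */
        rb->msg_complete = true;
    return true;
}

/* Called from the main loop; returns -1 when the buffer is empty. */
static int rb_pop(ringbuf_t *rb) {
    if (rb->tail == rb->head)
        return -1;
    uint8_t b = rb->data[rb->tail];
    rb->tail = (rb->tail + 1) % BUF_SIZE;
    return b;
}
```

Keeping `head`/`tail` as single-writer/single-reader indices means no locking is needed between the ISR (writer) and the main loop (reader) on this class of MCU.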
How to secure cables to a PCB What is a good way to secure cables to a PCB? There is some good advice in the following post, but the application is slightly different: Securing electrical cables to holes in enclosures? In this case, the strain relief/securing of the cables will be onto the PCB, not to a casing. (The PCB will be encapsulated or just given a layer of conformal coating, so there isn't any casing.) I'm considering running the cables to the edge of the board. Then at the edge, press the cables between the PCB and a metal bar. The metal bar is pressed against the PCB with bolts and nuts (nylock or thread lock). What might be the problems of this solution, and what other methods should I consider? <Q> Zip ties are a good way to provide strain relief for cables attached to a PCB. <S> You lay out a path for the cable on the PCB, and put adequately large holes on either side of this path for the zip tie. <S> I've done this many times on custom assemblies in which using a connector would have been counterproductive. <S> Here's one example. <S> It's a board that functions as a "hub" among several different devices in a cramped aerial photography pod. <S> Each device requires a USB connection, a power connection and/or a control connection. <S> The wiring for one of the devices is shown installed. <S> Adjacent zip ties share the same hole. <A> Cable ties are great. <S> Just put two through-holes in the PCB away from the connector. <S> There are also a variety of flat tie holders that can also facilitate zip ties. <S> These can be pushed in or fastened with fasteners. <S> Snap-in cable clamps (many different varieties) can be used in PCBs with one through-hole. 
<S> Source: Flat Cable Clamp - Snap-In, Low Profile (Essentra Components) <S> Source: Cable Clamps (Essentra Components) <A> One very simple technique used on spaceflight PCBs (and thus good against vibration) for a small number of cables is to insert the wire through one hole, and solder the stripped end at a distance from that hole corresponding to the minimum bending diameter of the wire (which may be assumed to be 5 times the outer diameter if you do not know it). <S> A spot of glue (epoxy, or hot glue if epoxy is not available) is then added to relieve the solder joint from mechanical stress. <S> The minimum bending diameter is not only chosen to reduce the footprint on the PCB, but also to ensure the wire has some curvature inside the glue: if the wire is straight, then since wires are often insulated with PTFE, which has very low adhesion to most glues, the wires would have very low resistance to pulling. <S> Note that for larger numbers of wires, we also use T-wraps (at least the shape of them; most plastics release too much gas in vacuum) such as presented by Dave Tweed, in addition to custom clamping parts: this yields very good resistance.
Stick-on cable clamps also work well on most surfaces:
What happens when the voltage applied to a fully charged capacitor becomes lower than the initial voltage? I have a 24 volt capacitor and I charged it fully using a 24 volt power supply. What happens if the power supply voltage becomes 20 volts while connected to the capacitor that is fully charged at 24 volts? A car's cigarette lighter socket is used as the source power supply. I am trying to use capacitors to provide more current at peak times of load. The power supply voltage can change from 12.6 volts to 14.4 volts. <Q> If the supply voltage is changed quickly enough, then the capacitor starts sourcing current, which flows backwards into the supply. <S> Bypass capacitors are used to regulate voltage, but mostly for short-term voltage drops from cable or trace inductance. <S> The capacitor can supply the load in the event the voltage drops. <A> If the source voltage (the car battery) becomes lower than the capacitor's voltage, then the capacitor will try to charge the battery. <S> It's important to note that the magnitude of the current, and therefore the time taken to equalize the voltages, depends on the resistance of the wiring between the battery and the capacitor and on the value of the capacitance. <A> As they are in parallel, ideally the capacitor would follow exactly the voltage that is applied to it. <S> If the battery voltage changes immediately, the voltage drop between the capacitor and the battery will generate a current with the value $$I=\frac{V}{R}$$ (V being the voltage drop and R the resistance of the cable that connects the capacitor to the battery). <S> But, in the real world, the formula for the current on a voltage change on a capacitor over time is actually $$i(t)=C\,\frac{du(t)}{dt}$$ (u(t) being the voltage over time and C the capacitance you are using). <S> In this formula, you can see that the current will be higher if the derivative of the voltage is higher. 
<S> What this tells us is that the current will be proportional to the rate of change of the voltage (a quicker change means a higher current, and vice versa). <S> This current will discharge the capacitor, decreasing its voltage over time. <S> When the capacitor voltage is the same as the battery's, the current will be 0, so it will stop discharging.
Current will flow from the capacitor to the battery until their voltages are once again equal.
Meaning of "Degrees of Freedom" in control system? I have been searching Google for "degrees of freedom," but it shows results relevant to statistics and physics. I am interested in answers in the context of electrical engineering, especially control systems. <Q> In principle there are the following obvious 6 degrees of freedom: movement along the x axis, movement along the y axis, movement along the z axis, rotation around the x axis, rotation around the y axis, and rotation around the z axis. <S> So in principle there can be 0 to 6 degrees of freedom (0 is a bit useless). <S> See also Jonk's comments below about non-electronic degrees of freedom, and other answers (which mention more than the 'obvious' ones above). <A> As Michel says, there are 6 degrees of freedom for an object in free space. <S> This results in robotic arms having up to 6 joints, to give all 6 degrees of freedom. <S> But do you need all 6 all the time? <S> Many machines out there have 3, for X, Y, Z, and then a fourth for a spindle head, such as a milling machine. <S> In that application 4 would be enough (while the spindle may be only on or off, it is still controlled). <S> Then again there are machines such as lathes, which would have the lathe spindle, an X axis for the position of the tool along the axis, and a Y axis for going into the item on the lathe. <S> So that would make a good machine with only 3 degrees of freedom. <S> However, in a theoretical machine, there may be reason for more DoF than 6. <S> If you look at the robotic arms used in some more sensitive, restrictive (you could say exciting) environments, there can be multiple ways to get to a fixed orientation and location in space. <S> For instance, some robotic arms used to handle dangerous materials have multiple "elbow" joints; I recall (but can't find it now) a reference to a setup with multiple arms, each with 8 degrees of freedom, to allow them to reach around obstructions. 
<S> You can also talk about getting to a fixed point in space, and then attempting to do something at that point. <S> For instance, you orientate a set of jaws to an orientation at a location (using 6 DoF), but the action of closing the jaws would be a 7th DoF. <S> Degrees of freedom also get complicated when talking about a walking robot. <S> For instance, this video talks about a 10-degrees-of-freedom machine, as there are 10 joints to worry about. <S> The way I think about degrees of freedom in a robotic application is how many axes you control. <S> More axes give more freedom and more cost. <A> "Degrees of freedom" is also used in many contexts to indicate how many independent measurements are available. <S> For example, in a navigation system, you might be able to measure: acceleration along three independent axes; rotation rate about three independent axes; magnetic field magnitude along three independent axes; and barometric pressure. <S> This would be referred to as a "10-DOF" measurement system.
Degrees of freedom (in an electrical context) is related to a motor which can move and rotate in different directions.
How to demonstrate the effect of baudrate in serial communication Regarding baudrate in serial communication, the only requirement is that both devices operate at the same rate. The common baud rates are 1200, 2400, 4800, 9600, 19200, 38400, 57600, 115200, etc. And especially what I see is that 9600 is the most popular. On the other hand, I also see that some C codes for microcontrollers use much higher baudrates, such as 115200. Now, in my experience, whenever I reduce the baudrate to 9600 or even lower, the system still works. Last time I tried with an encoder, and changing the baudrate did not change any functioning in practice. I simply lowered the original baudrate, which was 115200, to 9600 and there was no problem. Why they would set it to a high baudrate when a lower one works is a mystery to me. Since I didn't encounter any example in my micro environment and limited experience, I'm very curious to know when baudrate would start to affect a system. Some encountered examples would help a lot. I would even appreciate it if one could suggest a simple setup where I can spot an error or an issue due to low baudrate. I can use an Arduino and HyperTerminal to test such a communication. What I want to observe is this: let's say the Arduino is nonstop sending some fixed data to the serial port, and we read that with a program such as HyperTerminal at a baudrate of 115200. Let's say we start to lower the baudrate, and we observe that at a particular baudrate we get/read nonsense. What would be the idea to create such a test? I mean, I need to relate the data somehow to the transmission speed, aka baudrate. <Q> If you are using AT commands and typing them by hand, you won't see any difference, because the bottleneck is you. <S> If you are transmitting a larger amount of data, there will be a difference. <S> Try setting up a serial transceiver in loopback (connect RX and TX together) and then send 1kB of data over serial. 
<S> At first do it at, say, 1200 baud and then 115200. <S> You will notice the difference, because the first will take 1024/(1200/8) ≈ 6.8 seconds, and the other will go way under a second. <S> Usually people try to go for as high a baudrate as the system can tolerate without failing. <S> Only when transmission errors start appearing would you start reverting to a lower baud rate. <S> I have a system that has to transmit images (5MB) over a serial line; even at the max standard baud rate of 921k it is too slow (110kBps; a whole image takes 50s to transmit). <S> I have boosted the speed to a non-standard 8Mbaud, thus reducing the transmit time tenfold, at the expense of extra hardware and code - I have to have packet verification and error checking, hardware that supports these high baud rates, etc. <A> The baudrate determines the data rate of transmissions. <S> If you need a certain speed, you can estimate the minimum baudrate needed. <S> Whether data is lost or "just" delayed if the baudrate is too low depends on the operational conditions. <S> Higher baudrates need more bandwidth on the wire, so shorter wires or "better" technologies are needed. <S> Standards like RS232 and RS485 have some information on this matter. <S> Some background: <S> The baudrate defines how many steps per second are transferred over a serial channel. <S> The standard serial line we use can only transport 1 bit at a time, so the baudrate is equal to the bitrate. <S> Since you seem to use the common "asynchronous protocol", each transmission consists of: the start bit; 5 to 8 data bits; optionally a parity bit (mark, space, even, or odd); and 1, 1.5, or 2 stop bits. <S> The protocol is called "asynchronous" because there is no need to be synchronous to any kind of global clock. <S> Between a stop bit and the next start bit any time gap is allowed, beginning with 0 (zero) and ending in infinity. <S> A standard combination is "8-N-1", which means 8 data bits, no parity, and 1 stop bit. 
<S> This sums up to 10 bits per 8-bit byte to transfer. <S> It is a nice value because we can easily calculate the bytes per second for a given baudrate. <S> It happens to be the default of Arduino's class Serial. <S> With 9600 baud the transmission speed will be at maximum 9600 / 10 = 960 bytes per second; with 115200 baud it will be 11520 bytes per second. <S> The lowest standard baudrate I know of is 75 baud, giving seven-and-a-half bytes per second. <S> Now try this little sketch. <S> Note: since I don't have an Arduino at hand it is not tested and may contain errors.

int roundNo = 0;  // named roundNo because "round" collides with Arduino's round() macro

void setup() {
  Serial.begin(1200);
}

void loop() {
  roundNo++;
  Serial.print("This is round #");
  Serial.print(roundNo);
  Serial.println(". Can you follow the numbers?");
}

Try this with several different baudrates, and especially with low values you should see an effect. <S> Note: This only holds true if there is a true serial line involved. <S> Some devices use an internal USB unit to realize a virtual COM port. <S> You can set up any baudrate and it will have no effect, because the bytes are transferred at the USB's speed. <A> We have systems that consist of around 80 sensors which all talk to the host over RS-485 using MODBUS. <S> The host sends a 10-byte command to each sensor, which then responds with an 8-byte packet. <S> Running at 9600 bits per second (ignoring start/stop bits for the sake of simplicity), the "round-trip" communication time for the 80 sensors is around 3 seconds. <S> That is much too long for our application, as there is a substantial time difference between when the first sensor is read and when the last sensor is read. <S> Therefore we bumped our baud rate up to 115.2 kbps, which reduces the round-trip time to approximately 250 milliseconds. <S> This is still very slow, but it is an order of magnitude faster. 
<S> We would go to a higher baud rate to reduce the cycle time, but due to run lengths and cable restrictions, going faster than 115.2 kbps could have a negative impact on signal integrity. <S> Higher baud rate is much preferred for applications where timing is critical. <S> Timing is not as critical when you're simply viewing the results on a terminal because the human brain cannot really recognize small timing variations when reading text.
The answer is pretty obvious - baud rates define data transmission speed.
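The bytes-per-second arithmetic used in the answers above (10 bits on the wire per 8-N-1 data byte) can be put into two small helpers; the function names are mine, for illustration only.

```c
#include <assert.h>

/* 8-N-1 framing: start bit + 8 data bits + stop bit = 10 bits/byte. */
static unsigned long bytes_per_second(unsigned long baud) {
    return baud / 10UL;
}

/* Time to move nbytes payload bytes over the wire at a given baudrate. */
static double transfer_seconds(unsigned long nbytes, unsigned long baud) {
    return (double)nbytes * 10.0 / (double)baud;
}
```

This reproduces the figures quoted in the answers: 9600 baud moves at most 960 bytes per second, 115200 baud moves 11520 bytes per second, and a fixed payload takes 12 times longer at 9600 than at 115200 — exactly the effect the asker's terminal experiment would reveal.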
How should chips with pins on bottom be drawn? When you are drawing a schematic for an IC, and there are pins in the bottom center (in my case I am drawing for the BGM13S32F512GA-V2R), how should they be drawn? I have looked into some beginner tutorials, but everything I've found uses ICs with pins along the side only. I am using OrCAD Capture. I have also heard a debate about whether you draw the schematic based on the datasheet, or you draw it based on how the pins are used (GND on bottom, VCC on top, everything else on sides... or however). <Q> A schematic is not meant to be a drawing of a circuit, and shouldn't be used that way. <S> The "pins" in a schematic just show connections. <S> Put them wherever they need to go to make the drawing understandable, and don't worry about what they convey physically. <A> I'll start by answering your second question: draw the symbol based on how the pins are used, not on the package layout. <S> Usually this means power on the top, ground on the bottom, inputs on the left, and outputs on the right. <S> I generally place the pin for the bottom pad logically, based on its function. <S> 9 times out of 10 the pad will be for ground, so I generally place it at the bottom of my schematic component. <S> If it's used for the supply, I put it at the top. <S> Simply put, just place it in the most logical location that will make your schematic cleanest and easiest to follow (usually this means minimizing wire lengths and following the "signal flow direction"). <A> The style used for creating documentation depends upon the goal of the drawing. <S> When I create a "schematic", I try to make the drawing useful for troubleshooting and understanding the function; visual likeness to reality is intentionally not part of my thinking. <S> When drawing a "wiring diagram", more thought goes into making the physical reality clear, especially for enclosures, wire and cable routes. 
<S> For board level drawings, I use sort of a hybrid approach where the physical layout is a small factor in that the IC's are drawn as a box and kept as 1 unit in the drawing. <S> The approach I would use for your question would depend again on the type of drawing I was creating. <S> If it's a schematic, the simple approach would be to draw it as a box with no regard for Physical accuracy. <S> Then draw lines terminating at, or slightly within, the edge of the box. <S> Add numbering either just inside or just outside the box. <S> The lines need not be in the same order as they are on the IC. <S> Draw them whatever way is most useful. <A> The data sheet you referenced essentially answers the question, if you look at figures 3.1, 3.2, 3.3, and 7.2. <S> Organize your complete circuit diagram in a similar way, though (obviously) you don't need to draw the internal architecture of the chip. <S> This is analogous to the simpler case of drawing a transistor or FET. <S> You use a conventional symbol, without bothering about the physical layout of the pins of any particular device. <S> Different transistors are packaged with every possible permutation of three pins around a circle or in a straight line! <A> It may be tempting to put pins in the same order as they are on the package. <S> After all, it makes it easier to compare the schematic and the PCB. <S> But there is a significant danger of mistakes. <S> Voltage supply pins especially are usually spread on all sides of the package, and may not follow any clear order. <S> Then when you are connecting them in the schematic, mixing up or forgetting some is much easier than if all VSS pins are next to each other, and all VDD pins are next to each other. <S> I was chasing a short-circuit under a BGA chip for quite some time. <S> Then I realized it was a schematic error with a completely different chip, which had its symbol drawn in the order the pins are on the package:
Generally you should place pins in your schematic symbol so that it produces the neatest schematic possible. If it's a location diagram, then draw the IC as it appears in real life.
Is there a data transfer limit for virtual serial ports I want to record 5/10 seconds of audio using an electret microphone and an STM32 nucleo development board and send it to my computer in real-time to be processed. I'm a beginner so I'm sure I'm making mistakes, but what limits the data transfer speed for a virtual serial port? I want to sample the audio at 48 kHz, at 12 bits per sample, which by calculation means I need a data transfer rate of 576 kb/s. Is that possible via the virtual serial port? If not, is it possible at all with this board? I have the STM32 nucleo-144 development board, I'm using mbed ( https://os.mbed.com/platforms/ST-Nucleo-F746ZG/ ) and an electret microphone circuit ( https://docs-emea.rs-online.com/webdocs/00af/0900766b800affa3.pdf ) <Q> Both are enough for this task. <S> On an STM32F303 (FS USB) I am achieving a bit more than 1 MB/s (bytes this time) of continuous data throughput. <A> This is explicitly an "X-Y problem". <S> You have a typical audio application. <S> To implement the real-time data processing for the signal, you have two basic options. Implement the most trivial, common and easy USB class - CDC (aka "virtual COM port"). <S> To accomplish the overall data processing goal, you will need to invent a method/format to pack the electret mic ADC data into a UART-type stream, then use a common Windows/Linux COM-port driver to buffer and store the data, and then develop a proprietary application to deal with your proprietary format. <S> The plus of this approach is that you control the raw data format and don't need to dig through any specifications for the data stream. <S> The minus of this approach is that you will need to develop a lot of your own code. <S> Since this is an audio device, the formal solution is to implement the audio-class USB device within the Nucleo board. <S> As I understand, there are code examples from ST, see USB Audio device class on NUCLEO-F446RE and USB Audio device on Nucleo F446-RE with CubeMX .
<S> An advantage of this approach would be the well-established libraries in all known operating systems. <S> The minus side is that you need to dig into the audio specifications. <S> In all cases the bandwidth of USB (even in FS mode) is well sufficient for audio processing tasks. <A> This is a hard question; the theoretical limit would be the 480 Mbit/s of USB (because the STM32 Nucleo has a PHY on it), further reduced by packet overhead of a few percent. <S> The other problem is how fast the software can send the data, and that depends on what other code is running on the processor. <S> USB FTDI chips can send data at ~3 Mbaud, <S> so it's possible to send data at the speed that the design needs with a serial-over-USB link. <A> The easiest way to implement CDC: install CubeMX and create a project for your micro. <S> Set up the USB peripheral, select the CDC class under Middleware, and export the project. <S> Enjoy
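The required throughput from the question can be checked against the answers' figures with a back-of-envelope sketch (the ~1 MB/s CDC figure is the one reported in the first answer above):

```python
rate_bits = 48_000 * 12          # 48 kHz x 12-bit samples = 576,000 bit/s
rate_bytes = rate_bits // 8      # 72,000 B/s of payload (before framing overhead)
cdc_fs_bytes = 1_000_000         # ~1 MB/s reported achievable over an FS CDC link
print(rate_bytes, round(100 * rate_bytes / cdc_fs_bytes, 1))  # 72000 7.2 (% of budget)
```

So the audio stream uses well under a tenth of what a full-speed CDC link can reportedly sustain.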
Depending on the type of the USB peripheral it is limited to 12 Mbit/s (FS) or 480 Mbit/s (HS).
Full Bridge Rectifier - Voltage Drop Under Load I am trying to get 5V DC by rectifying the output of a microwave transformer I rewound (the transformer gives approx 4.2V AC on the secondary winding). I managed to get approx 5V open-circuit voltage on the DC side of the rectifier, but as soon as I connect a small load (tiny 6V test light which can be seen on the photos) the voltage drops from 5V to approx 4.3V (the AC voltage on the secondary winding of the transformer barely changes when connecting/disconnecting the light bulb). Also the voltage drop over each diode is approx 1.5V (which confuses me since I thought it should always be 0.7V). The measurements can be seen on the photos. I have also tested with a different load (18650 charge controller IC) which gives approximately the same results. I have tried using 10A10M10 diodes but that also gave the same results. I am using 1N4004 diodes. I am using a 25V 2200uF capacitor for smoothing out the DC output (I also tried with a 4700uF cap). The capacitor is in parallel with the load as can be seen in the schematic. Is this sort of voltage drop normal? Is there anything I can do to prevent it? Have I made a mistake somewhere in terms of circuitry or component choice? Any input is greatly appreciated. Thank you very much! <Q> That is pretty much normal. <S> The open-circuit voltage drop across the diodes will be very low, maybe 500mV, but more like 700mV when drawing more current, so that's 400mV of drop for the two series diodes from no-load to a moderate load. <S> Your transformer voltage will drop a bit as you've seen (hopefully you've removed the magnetic shunts to decrease the leakage inductance built into a typical MOT). <S> And the ripple voltage across the capacitor will increase, resulting in a lower average voltage for the same AC input.
<S> If you want a stable 5V output, you'd normally want to produce a higher voltage and regulate it down, either with a linear regulator or a switching buck regulator. <S> For something like a 7805 you'd want around 10VDC to start with, so you throw away half the power. <S> The regulator needs 3V to work, and the other 2V is to allow for ripple and low mains voltage. <S> For a switching regulator you are free to use a higher voltage without incurring the same cost in efficiency, so you could produce (say) 12V nominal and regulate it down. <A> You are trying to measure the DC voltage across a diode while giving the bridge an AC input. <S> What you are actually measuring is some kind of average voltage across the diode, alternating between the forward and reverse bias cases. <S> If your transformer provides 5V rms then you would have a peak voltage of about 7V with no load. <S> After the bridge rectifier a peak AC voltage with no load of about 5.5V would be reasonable. <S> However, as soon as you start to draw current the average voltage, the DC voltage, will drop. <S> That is simply the nature of the beast. <S> According to www.electronics-tutorials.ws, the equivalent DC voltage from a full-wave rectified AC voltage is just 0.9 times the rms value. <A> What is meant by diode forward voltage is that the diode's anode is more positive than its cathode when it conducts. <S> A capacitor connected with the wrong polarity might affect the readings as well. <S> Voltage dropping under load is nothing to be concerned about; transformer windings have losses. <S> Usually, when you buy a transformer, the rated voltage is what you get at the rated load current, and the voltage without any load is a bit higher than the rated voltage.
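The loaded output voltage can be estimated from the numbers in the question (a sketch; the ~0.9 V per-diode drop under load is an assumption consistent with the answers above, not a measured value):

```python
import math

v_rms = 4.2                          # transformer secondary voltage, from the question
v_peak = v_rms * math.sqrt(2)        # ~5.94 V peak
v_diode_loaded = 0.9                 # assumed per-diode drop at moderate current
v_out = v_peak - 2 * v_diode_loaded  # two diodes conduct at a time in a full bridge
print(round(v_out, 1))               # ~4.1 V, in the same ballpark as the measured 4.3 V
```

With capacitor ripple and transformer sag on top, a drop from ~5 V open-circuit to ~4.3 V under load is exactly what this arithmetic predicts.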
You are not measuring the diode forward voltage properly - when measuring rippling AC across a diode with the DC setting on a multimeter, on average the diode cathode is more positive than the diode anode. I don't see anything unusual here.
8 ohms and 4 ohms speakers wired in series - volume issues I have two identical sets of ceiling speakers connected to a cheap 2 channel Chinese micro amp - one 8 ohms and one 4 ohms, wired in series per channel. The 8 ohm speaker is too loud when the volume is set ok for the 4 ohm speaker. How can I reduce the volume of the 8 ohm speaker so they are more evenly matched? Would a resistor make sense here? If so, what rating? <Q> When wired in series the same current flows through both speakers. <S> Power = current² × resistance, so the 8 Ohm speaker will get twice as much power as the 4 Ohm speaker. <S> The human ear's response is logarithmic, so this will sound like the 8 Ohm speaker is about 40% louder. <S> To make them draw equal power you can put a resistor in parallel with the 8 Ohm speaker. <S> This will reduce voltage and current in the 8 Ohm speaker, and increase voltage and current in the 4 Ohm speaker. <S> (CircuitLab schematic) <S> The resistor value is tricky to calculate because it involves solving two simultaneous equations. <S> I cheated and used LTspice with a voltage-controlled resistance to simulate a resistance from 1 to 40 Ohms. <S> The speakers got equal power at ~19.3 Ohms. <S> A standard value of 18 or 22 Ohms should be close enough. <S> In the graph below, green is power in the 4 Ohm speaker, red is power in the 8 Ohm speaker, and blue is power in the resistor. <S> The horizontal axis represents the variable resistance. <S> The power dissipated by the resistor depends on the amplifier's rated output power and load impedance, but is about half the power that the speakers have to handle. <S> If the amplifier was rated for 3W into an 8 Ω load then (at maximum output) it would put ~1.1W into each speaker and 0.5W into the resistor. <S> For a different amplifier power rating (into 8 Ω) scale accordingly.
<S> For safety you should rate the resistor for about double the expected power dissipation, i.e. 1W in this example. <S> Note that real speakers are not perfect resistances and different speakers may have different impedance curves (depending on diameter, cone stiffness etc.), <S> so they may not sound equally loud at all frequencies. <S> The speakers may also have different efficiencies which could make one sound louder than the other, so you might need to experiment with different resistor values to get the best balance. <A> 12 ohms should be about right. <S> The resistor here will reduce the loudness of the 8 ohm and increase the 4 ohm, so you may need to turn the amplification down a little; lower resistances will have a greater effect. <S> (CircuitLab schematic) <A> By now you realize that it was a dumb thing to do — you should have used identical speakers to begin with. <S> And obviously, that would be one way to fix this with minimum fuss — just replace two of the speakers. <S> Another way to address this would be to put the two 4Ω speakers together on one channel of the amplifier and the two 8Ω speakers together on the other. <S> Then you'd be able to adjust their volume independently. <S> But if you want a workaround that doesn't involve so much rewiring, a speaker volume control is called an "L-pad". <S> Search using that term in order to find suitable products.
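The ~19.3 Ω value from the LTspice sweep can also be found in closed form: with one series current, equal power requires the 8 Ω branch (speaker plus parallel resistor) to present √(4·8) ≈ 5.66 Ω. A sketch:

```python
import math

r4, r8 = 4.0, 8.0
# Equal power at equal current: I²·r4 = (I·Rp)²/r8  =>  Rp = sqrt(r4·r8)
rp = math.sqrt(r4 * r8)          # ~5.657 ohm equivalent needed for the 8-ohm branch
r = rp * r8 / (r8 - rp)          # solve r8·R/(r8+R) = Rp for the parallel resistor R
print(round(r, 1))               # ~19.3 ohm, matching the simulated sweep
```

This confirms that 18 or 22 Ω standard values land close to the balance point.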
Sure, add a resistor parallel with the 8 ohm so that it sees the same power as the 4 ohm.
Phase of Scattering Parameters I do not understand the meaning of the phase of scattering parameters. Let's consider for instance a two-port network, and let's focus on S21. I know that its absolute value represents a ratio between power transmitted at port 2 and power sent in port 1. But what is the meaning of its phase? <Q> Therefore the S parameter can describe how much a signal is attenuated AND phase-shifted in time. <S> A positive phase means that the output signal is leading the input, while a negative phase results in a lagging (delayed) output signal. <A> The history of circuit theory tells you reactive components have a phase-shift relationship between voltage and current as a function of frequency. <S> As well, scattering parameters also have this complex value. <A> I think the crucial concept you are missing is the concept of a phasor. <S> A phasor represents a sinusoidal signal of a given (constant) frequency. <S> It is commonly represented by a complex number. <S> Its magnitude represents the amplitude (voltage or current) of the signal; its argument or phase represents its phase difference with respect to some reference signal <S> (here: the incident signal). <S> If the incident signal and the scattered signal are represented by phasors, the scattering coefficient, also represented by a complex number (though not a phasor), gives the ratio between those signals. <S> The magnitude of the scattering parameter tells you how much the signal will be amplified/attenuated; its argument (=phase) tells you how much the phase of the scattered signal will be shifted with respect to the phase of the incident signal (see multiplication of complex numbers in polar form: magnitudes are multiplied; arguments are added). <A> Consider a really simple two-port network, a short length of lossless matched cable. <S> S11 = S22 = 0, and the magnitudes of S12 and S21 are unity. <S> The delay through the length of cable will cause a phase shift between input and output.
<S> This will be represented in the phase of S21.
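The matched-cable example can be made concrete with a sketch (the frequency, cable length and propagation velocity below are assumptions chosen for illustration):

```python
import cmath, math

f = 100e6        # 100 MHz test tone (assumed)
length = 0.5     # 0.5 m of cable (assumed)
v = 2e8          # ~2c/3 propagation velocity, typical for solid-dielectric coax
s21 = cmath.exp(-1j * 2 * math.pi * f * length / v)  # lossless delay line
print(round(abs(s21), 3), round(math.degrees(cmath.phase(s21)), 1))  # 1.0 -90.0
```

The magnitude stays at unity (lossless), while the 2.5 ns delay shows up entirely as a negative (lagging) phase of S21, exactly as the answer describes.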
The phase describes how much the signal is delayed in time from the input to the output.
Relay inside oscilloscope (Relay vs other switches) When I press the "Auto" button on my scope, I can hear the sound of a relay switching (several relays, or a single relay switching many times). What are the relays doing? <Q> Probably used in the "front end" (the parts closest to the input section) for range switching/input attenuators. <S> When you switch your oscilloscope to auto it has no idea how big or small the signal might be (mV or tens of volts), so it will try to set the range such that the signal is easily visible on the screen without going beyond the boundaries. <S> Mechanical switches and relays are far more "ideal" than semiconductor switches in this kind of situation. <S> Lower resistance, less leakage, less stray capacitance, and the ability to handle high voltages. <S> They are not so good at fast switching or long life under load, but neither of those are requirements for range switching. <A> One can also use analog muxes (a PMOS and NMOS in parallel, usually), but the resistance is dependent on the voltage, and there is leakage and coupling, which present challenges. <S> A typical relay can have milliohms of contact resistance; an analog mux resistance curve looks like this: Source: https://www.analog.com/media/en/technical-documentation/data-sheets/ADG1611_1612_1613.pdf <S> Another thing is size and power. <S> Relays use more power and occupy more space, and are less desirable for portable applications. <A> When you hit "Auto" it scans through different V/div attenuation settings and makes a measurement to decide which V/div setting to use for each channel. <S> The different V/div settings are separated by relays in the oscilloscope front end. <S> To dig in more, you can check out this talk from one of our oscilloscope ADC designers: <S> https://www.youtube.com/playlist?list=PLzHyxysSubUmxGOMVpiKLxouweh2AAlG1 <S> If you want to get to the attenuator part go to vid 3.
Usually in analog electronics a relay is the best thing to use because the contact resistance is fixed and does not degrade the signal.
Does increasing the current make a thermistor more accurate? For example, if the resistance/temperature ranges were fixed to 7500 ohms (0C) and 400 ohms (70C): at 30mA, the voltage would be 225V (7500 ohm), and 12V at 400 ohms. If the current was increased to 50mA, it would be 20V at 400 ohms and 375V at 7500 ohms. So the range between the volts at 30mA is smaller than at 50mA. Does this increase in range increase the accuracy of a thermistor, if the analog signal is converted to a digital signal with a 10-bit ADC? If this doesn't increase accuracy, what will? <Q> Running a thermistor with a typical resistance of 1k (somewhere in your range) with 30mA will mean it's developing 30V, or dissipating 1 watt. <S> At 50mA, it will be 2.5 watts. <S> This means it will be waaaay hotter than the ambient you're trying to measure at either current, and much further from your ambient at the higher current. <S> Very inaccurate. <S> You only need a few volts swing. <A> No, a greater range of absolute voltage does not imply anything about the accuracy of the measurement. <S> The more important consideration is to maximize the range of the voltage that you're sending to the ADC, within its limits. <S> Thermistors are frequently used as one half of a voltage divider in order to create a voltage that an ADC can read. <S> It can be shown that for a given ADC range, using a larger resistor and a higher source voltage uses more of the ADC range. <S> Therefore the best configuration would be to use an infinite resistance, which means using a current source. <S> Let's say you want a range of 5V to match the input of your ADC. <S> That means you can put no more than 5 V / 7500 Ω = 0.667 mA through the thermistor. <S> At this current, you'll be dissipating about 3.3 mW in the thermistor, which should keep self-heating to a minimum.
<S> At minimum resistance, you'll be putting 400 Ω × 0.667 mA = 0.2667 V into the ADC, which means that you're using almost 95% of the range of the ADC, which is about as good as you're going to get. <S> And the self-heating drops to 0.18 mW. <S> I keep mentioning self-heating, because if you're using a thermistor for temperature measurement, you don't want self-heating to introduce any significant error. <S> Obviously, using currents as large as what you're talking about would create massive problems in this area. <A> Increasing the thermistor current will increase its self-heating. <S> This will affect the resulting temperature of the sensing element, so measurement accuracy will decrease correspondingly. <S> Thus, in theory, the thermistor current should be kept as low as possible. <S> On the other hand, decreasing the current makes the circuit more sensitive to external noise (picked up by long connection lines acting like a coreless transformer, for example). <S> One idea is to use a greater voltage with a greater pull-up resistor, which will firstly decrease the current deviation across the measurement range, and secondly dramatically increase the voltage-drop range, making the circuit more suitable for ADCs. <S> In this case you'll surely get better resulting accuracy. <A> However, doing this also increases self-heating of the thermistor, which will make the reading higher than ambient, so you should only apply the sensing current briefly to avoid excessive heating.
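The current-source numbers from the answer above can be reproduced with a short sketch:

```python
v_full, r_max, r_min = 5.0, 7500.0, 400.0
i = v_full / r_max                    # largest current that keeps the cold end at 5 V
v_min = r_min * i                     # voltage at the hot end (70 C)
span = (v_full - v_min) / v_full      # fraction of the ADC range actually swept
print(round(i * 1000, 3), round(v_min, 4), round(100 * span, 1))
# 0.667 0.2667 94.7  (~95% of the ADC range, as stated above)
```

With a 10-bit ADC this ~95% usage leaves the quantisation step, not the absolute voltage swing, as the limit on resolution.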
Increasing the current will increase the voltage produced across the thermistor; this will reduce the effect of noise relative to the signal voltage, giving greater accuracy.
USB-C connector purely for charging a battery I'm looking for an ultra slim charging connector to attach to a li-ion battery pack. All I need are positive and negative pins. I have read that USB C supports up to 5A current. If I get a product like this: https://www.globalsources.com/gsol/I/USB-C-connector/p/sm/1164613438.htm#1164613438 And something similar for the socket end, and if I solder positive and negative of the battery to VBUS and GND of the socket and positive and negative of the charger to the same pins of the plug... will I be able to charge my battery at about 2A? I know it seems stupid to try to use a data/power connector purely for charging, but I can't find a 2-way power connector close to its compactness. Thanks in advance! <Q> Variations of this question are asked very often here: to draw high currents over USB-C from a USB-C power supply, it's not enough for your device to have a USB-C plug. <S> It needs to talk the rather complicated USB-PD protocol with the charger. <S> So, it's not that easy. <S> If you really just need a cable to transport power, barrel connectors are cheaper and smaller (at least in one direction). <S> There are a lot of two- or four-contact rectangular connectors ("MOLEX pin header") that are pretty common, too. <A> Yes, Type-C connectors, used in the standard way, do support currents up to 5A. <S> In the standard pinout the connector uses 4 contacts for ground and 4 contacts for VBUS, making it 1.25A per contact. <S> You can use more contacts if you need more current, but I would advise against it, to avoid port destruction in case someone decides to plug your cable into their laptop/phone. <S> However, if you plan on productizing your charger (and device), a simple two-wire power connection will be technically illegal under the Type-C standard. <S> In the Type-C standard, VBUS must not be present on the wires until the cable is plugged into a Type-C receptacle.
<S> To achieve this you will need at least one (thin) signal wire in your charger cable. <S> This wire should be connected to one of the CC pins and have a 10k pull-up to +5V. Your device receptacle, in turn, must have a 5.1k resistor to GND on each of the CC1 and CC2 pins of your receptacle. <S> Your charger side should sense the voltage level on the CC wire, and turn VBUS on only when it senses the voltage drop on it (due to the 5.1k pull-down after connect). <S> A simple analog comparator with the right threshold and a high-side power switch (or just a power P-FET) will do the job. <S> In this arrangement your charger (with Type-C cable) will be fully compliant and won't harm any other Type-C user devices. <A> Just connect the positive voltage and common ground of the USB C connector to the charging IC. <S> But I would take it one step further and consider getting a Power Delivery module. <S> This allows you to supply power to the battery charger via a USB-A to USB-C cable, but also allows you to use a higher input voltage if you are using a USB-C to USB-C charger. <S> If your charger is some type of buck/buck-boost converter, then this would be beneficial since you will be able to charge your batteries faster (if they aren't already at the max charging current) at a higher voltage, usually ranging from 5V to 20V. Using this module, you will be able to get the 5A max current output. <S> You cannot get that with a normal USB C connector, so you would need some type of Power Delivery module.
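The CC sensing scheme described above can be sanity-checked numerically (a sketch using the resistor values given in the answer):

```python
v_pullup, rp, rd = 5.0, 10_000.0, 5_100.0
v_cc_open = v_pullup                       # nothing attached: CC sits at the pull-up rail
v_cc_attached = v_pullup * rd / (rp + rd)  # divider formed once the 5.1k Rd appears
print(round(v_cc_attached, 2))             # ~1.69 V, an easy comparator threshold
```

A comparator threshold anywhere between ~1.7 V and 5 V therefore cleanly distinguishes "device attached" from "cable dangling", which is what gates VBUS.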
With what you described, as long as the USB-C connector is connected to some type of battery charger, you should be able to use it. You just need to make sure that your design (on the PCB and on the cable side) uses all (4+4) pins.
Low ESR capacitors for switching pre-regulator (LM2596) The LM2596 switching regulator datasheet suggests using low-ESR capacitors. Since I'm going to use the LM2596 as a pre-regulator for a linear regulator, to reduce the voltage drop on the linear regulator, is it still necessary to use low-ESR capacitors? <Q> Even if you are regulating after the LM2596, you would still want to maintain proper stability. <S> Use the recommendation of the datasheet and go with the low-ESR cap. <A> Use low-ESR caps in the feedback loop and for the filter capacitor after the inductor. <S> The input capacitor is not as important, but the design might incur more loss and have slightly higher ripple with a higher ESR on the input capacitor. <S> I believe the regulator will help with regulation, especially with short traces. <S> The best way to find out if the design will be tolerant of a higher-ESR cap is to simulate it in SPICE. <S> Source: https://www.onsemi.com/pub/Collateral/LM2596-D.PDF <A> It is extremely important to use a capacitor that is within the allowed ESR limits, but as low as possible, because linear regulators can be bad at handling high-frequency ripple, so less ripple is better. <S> If you look at the LM338 datasheet, ripple rejection is very near 0dB at 150kHz and it can even amplify ripple at higher frequencies. <S> Consider adding extra filtering between regulators, or consider whether some other regulator is better.
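To see why ESR matters even ahead of a linear post-regulator, the buck output ripple can be estimated with the standard two-term approximation; a sketch with assumed component values (the ripple current and capacitance are illustrative, not from the datasheet):

```python
f_sw = 150e3     # LM2596 switching frequency
di = 0.3         # assumed peak-to-peak inductor ripple current, A
c = 220e-6       # assumed output capacitance, F
for esr in (0.03, 0.30):  # low-ESR part vs. garden-variety electrolytic, ohms
    v_ripple = di * esr + di / (8 * f_sw * c)   # ESR term + capacitive term
    print(esr, round(v_ripple * 1000, 1))       # mV: the ESR term dominates
```

With these numbers the ESR contributes roughly 9 mV vs 90 mV of ripple while the capacitive term is ~1 mV, which is exactly the 150 kHz ripple a linear regulator like the LM338 cannot attenuate.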
You will note that the recommendation for a low ESR capacitor is to maintain stability of the control loop.
Does a "frequency-modulated" signal mean that the frequency isn't consistent? Does a "frequency-modulated" signal mean that the frequency isn't consistent, and that it therefore has a specific bandwidth of frequencies it can cover? <Q> A frequency-modulated signal means that the frequency is shifted around a carrier frequency. <S> This can be done digitally or via analog multiplication. <S> What it looks like is a sine wave speeding up or slowing down. <S> Source: http://www.justscience.in/articles/what-are-the-applications-of-frequency-modulation/2017/06/02 <S> Yes, it normally has a specific bandwidth, one reason being that radio licenses and bandwidth are in short supply <S> (there are a limited number of frequency ranges). <S> With frequency modulation one can transmit more information in a given section of bandwidth. <S> Most digital wireless communication uses frequency modulation. <A> This instantaneous frequency will only be of a limited range. <S> If you provide more context we could help more. <S> therefore it has a specific bandwidth of frequencies it can cover <S> This is true of all signals. <S> All signals have a specific bandwidth (aka range of frequency components that make up the signal via fourier transforms). <S> Even amplitude modulated signals where the instantaneous frequency is constant (the carrier) have a bandwidth of frequency components that compose the signal. <A> Yes, it is a range of frequencies that are to be expected. <S> In other words, frequency is a function of the modulating signal (or "input signal" if you want).
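The "sine wave speeding up and slowing down" picture can be written down directly; a sketch with assumed example frequencies (carrier, modulating tone, and deviation are all illustrative):

```python
import math

fc, fm, dev = 1_000.0, 10.0, 100.0   # carrier, modulating frequency, peak deviation (Hz)

def inst_freq(t):
    # instantaneous frequency swings around the carrier, following the modulating signal
    return fc + dev * math.sin(2 * math.pi * fm * t)

print(inst_freq(0.0), inst_freq(0.025))  # 1000.0 at t=0, 1100.0 at the modulating peak
```

The instantaneous frequency sweeps between fc - dev and fc + dev, which is the "limited range" the answers refer to; the occupied bandwidth is somewhat wider than 2·dev (Carson's rule adds twice the modulating frequency).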
Frequency modulated means the instantaneous frequency of the signal is varied, with the instantaneous frequency carrying the information in the signal.
Solar charge controller with input from a laptop charger efficiency Instead of connecting a solar panel to my 12V 30A PWM solar charge controller, I want to connect a laptop charger to make a DIY UPS. Will the solar charge controller constantly draw the maximum output of my laptop charger @ 19V 4.74A (90W) and waste the excess power when the battery is full and the connected devices use less power? <Q> Just at a quick glance, 30A * 12V = 360W, so it seems like your charge controller will probably try to draw more power from your "charger" than it is capable of supplying. <S> The "charger" will probably go into some kind of over-current fault mode. <S> On the whole, not recommended. <S> There may be some type of charge controller out there that would work in this application. <S> Another option is to use an AC powered battery charger. <A> The problem with boost converters is not so much over-current, but rather under-voltage from the load impedance being much lower than the source on startup. <S> Always compare the maximum-power-transfer (MPT) impedance of source vs load. <S> You can do this in one of two ways: Vmp/Imp = Zmp, or the slope of the V-vs-I (PV) curve = Zmp, which for a PV current source happens to be the same at any MPT point for different solar inputs. <S> Your best bet is to choose a battery charger with MPPT control and run your 12V device off the battery. <S> Check the router design inside for DC-DC converters and specs. <S> Does it use 12.0Vdc for anything and depend on this being well regulated? <A> PWM solar charge controllers use a crude method of regulating charge voltage and current - they simply connect the solar panel directly to the battery, relying on the panel and wiring to limit peak current. <S> If/when voltage or current exceeds the controller's limits it disconnects the panel for a period, with the PWM on/off ratio determining average voltage or current.
<S> The battery acts like a large capacitor to smooth the voltage and supply load current during the PWM 'off' periods. <S> When your laptop power supply is connected to a 12V battery through the PWM controller it will go into over-current protection and may shut down completely. <S> To prevent this you could add some resistance in series. <S> The voltage the resistor needs to drop is 19V - 12V = 7V, and the maximum permitted current is 4.74A, so the resistance required is 7V / 4.74A = ~1.5 Ω or higher. <S> It could dissipate up to 7V × 4.74A = 33.2W, so it should be rated significantly higher, e.g. 50W. <S> A fully charged 12V lead-acid battery floats at ~13.2V. <S> At this voltage the power supply will only be able to deliver ~(19V - 13.2V) / 1.5 Ω = ~3.9A. <S> If you try to draw more than this the battery will start to discharge. <S> If the load draws less than this the charge controller will reduce its PWM ratio until the average power supply current equals the load current. <S> At lower load current the resistor drops the same voltage, but wastes less power. <S> With this scheme the resistor wastes ~30% of the power, so the efficiency is ~70%. <S> That's not as good as a proper UPS, but not horrible either. <A> Your assumptions re both chargers & solar controllers seem incorrect. <S> I'll assume you are using a lead-acid battery - although similar statements apply to e.g. LiIon batteries. <S> Essentially all equipment intended to charge lead-acid batteries will only take such input energy as is required to charge the battery, at any stage during charging and once fully charged. <S> During charging, low cost PWM controllers will usually supply whatever current the source will supply when directly connected to the battery, and MPPT controllers will attempt to maximise power transfer from the source.
<S> In all but very low-cost controllers, current can usually be limited to a preset maximum (which is chosen based on battery specifications). <S> Lower cost mains chargers tend to supply whatever the battery will take within their capability. <S> Chargers or controllers of more than basic spec will provide a "boost charge" at the end of the charging cycle. <S> However, once charged either device will float the battery and use only such energy as is required by the battery in 'float' mode plus whatever standby current the charger requires. <S> (LiIon batteries are not usually "floated" - charging is terminated when complete. <S> (Floating at lower than full voltage is permissible but this is not usually done.)) <S> No "proper" battery chargers or solar controllers consume high levels of energy once the battery is charged. <S> Controllers designed for wind turbine use will divert the input to a "dump load" once the battery is charged, due to the need to not allow a wind turbine to operate unloaded.
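The series-resistor workaround from the first detailed answer in this thread can be reproduced numerically (a sketch using the thread's own figures):

```python
v_supply, i_max = 19.0, 4.74    # laptop charger rating from the question
v_batt, v_float = 12.0, 13.2    # nominal and float voltages of a 12 V lead-acid battery

r = (v_supply - v_batt) / i_max          # ~1.48 -> use 1.5 ohm or higher
p_worst = (v_supply - v_batt) * i_max    # ~33.2 W worst-case resistor dissipation
i_float = (v_supply - v_float) / 1.5     # ~3.9 A available once the battery is full
eff = v_float / v_supply                 # fraction of supply power reaching the battery
print(round(r, 2), round(p_worst, 1), round(i_float, 1), round(100 * eff))
# 1.48 33.2 3.9 69  (i.e. ~70% efficiency, as the answer states)
```

This confirms the answer's sizing: a 1.5 Ω, 50 W resistor keeps the 90 W supply out of over-current while wasting roughly 30% of the power.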
It seems likely that a properly designed mains input battery charger will meet your need.
What are "sense" pins in 8-pin PCI Express power plug? I have an ATX PSU that has two 6-pin connectors for PCI Express power. Both have Gnd on the lock bar side, and +12V on the opposite side. I lost the modular cables that came with the PSU, and that made me start to dig. According to this answer on our sister site , in the 6-pin configuration pin 5 is Sense A. In 8-pin, pin 6 is Sense A (same position as Sense A in the 6-pin config) and pin 4 is Sense B. But what are these pins supposed to do? I know that most people just connect Gnd there and call it a day, and I could too, but I wanted to ask about the meaning behind it. It's called "Sense" for a reason, I suppose. So, what is that reason? <Q> Sense pins are connected to ground at the power supply or adapter cable. <S> They are not used by the power supply for "remote voltage sensing" to compensate for voltage drop over the wiring. <A> These are used by the power supply to compensate for voltage drop in the cables and connector. <S> These are low-current returns to the power supply. <S> The PS senses the actual voltages on the PCI board and adjusts the outputs so that the yellow pins are as close to 12 volts as possible and the black pins are as close to zero volts as possible. <S> For example, say that the +12v has a 0.25v drop because of oxidation on the connector pin producing resistance. <S> The green sense pin is not affected by oxidation due to the very low current flow, so the PS senses this and raises the +12v output by 0.25v so that the board receives 12.0v. <S> Similarly, the blue sense pin allows sensing voltage drop on the black wires, allowing the PS to drive the black wires slightly negative to compensate. <S> Looking at the location of the sense pins, I believe @justme is correct. <S> This allows the PCI card to detect what size connector is plugged in and so know how much power can be drawn safely. <A> The 6-pin PCIe power connector comes in a 6.25A (75W) version and a 12.5A (150W) version.
<S> The 6.25A version only requires that two 12V power connections be present, and only two ground connections need to carry current. <S> The 3rd middle ground connection can be used to sense that the connection is plugged in. <S> The 12.5A version uses 3 12V power wires and 3 ground wires to carry current. <S> An additional two groundable sense connections can be added to the side to tell the PCIe device that a 12.5A compliant power connector has been inserted. <S> Many power supplies that are 12.5A compliant have an additional 2 pin ground connector on the side of the 6 pin connector that can be used to make it into an 8 pin connector. <S> If a 12.5A 6-pin connector on an older power supply is connected to a newer 8-pin PCIe device, the PCIe device should not turn on because it doesn't know if a 12.5A compliant connection has been used. <S> If all 6 wires are present on the 6 pin connector and the power supply can handle the current, then it is safe to get an adapter that will convert the 6 pin connector into an 8 pin connector simply by grounding the two additional connections on the 8 pin connector. <S> The 2 sense connections on an 8 pin PCIe board could be permanently connected to the 6 pin ground connections with a little bit of solder and wire. <S> This would convert it into a 6 pin connection. <S> Just be careful not to use any 6 pin PCIe power connectors that only have 2 12V wires, or thin wires smaller than AWG 20. <S> If one fails to connect, all the current will go through the remaining wire and it could melt. <S> If this is bad advice, then online sellers that sell Molex to 8 pin PCIe adapters should stop selling those first. <S> That could easily end up running everything through one wire.
This allows the PCIe card to detect if a supply cable is connected or not, and whether a 6-pin supply is connected to an 8-pin socket, to indicate less power is available.
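The connector-detection behaviour described above can be sketched as a small decision function. This is only an illustrative sketch of the logic a card might apply to its grounded sense pins; the function name and return values are assumptions, not text from any PCIe specification.

```python
def allowed_aux_power_w(sense_a_grounded: bool, sense_b_grounded: bool) -> int:
    """Hypothetical sense-pin decision logic for a PCIe card.

    Grounded sense pins indicate which connector size is plugged in:
    both grounded -> 8-pin (150 W), only Sense A -> 6-pin (75 W),
    neither -> no auxiliary power connector present.
    """
    if sense_a_grounded and sense_b_grounded:
        return 150  # 8-pin (12.5 A compliant) connector detected
    if sense_a_grounded:
        return 75   # 6-pin connector detected
    return 0        # no auxiliary connector: card should not turn on

print(allowed_aux_power_w(True, True))   # 8-pin case
print(allowed_aux_power_w(True, False))  # 6-pin case
```

This matches the observation in the answer above that a 6-pin plug in an 8-pin socket leaves Sense B floating, so the card knows less power is available.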
Charging 2 super capacitors in series I have recently received my order of 2 2.7V 500F super capacitors from eBay. After much searching, my yet unanswered question is: Can I connect both of these capacitors in series and charge with 5V running absolutely no risk of explosion or damaging the caps? <Q> If you connect them in series you have to ensure the voltage is divided evenly at all times <S> so it never goes over 2.7V for each one. <S> Normally this is true just by regular series connection, but if one shorts out or something similar then you will end up applying 5V to the other capacitor. <S> simulate this circuit – <S> Schematic created using CircuitLab edit: I added the schematic of the circuit; the values of the R are not really well thought out, but if you can switch them out once the voltage of your capacitors is close to 2.2V <S> then their value does not matter much as long as they are equal; it is better if they are large so they don't load your source. <S> You only need them during charging, so you could add a switch to each resistor to cut them out of the circuit so they don't discharge your capacitors after you turn off your voltage supply. <A> Unfortunately you cannot just connect them in series because of two effects. <S> The first is the tolerance in the value of the capacitance. <S> A ±20% variance is normal in capacitors (it could be bigger or smaller depending on the specific model). <S> If one of your capacitors is 500*1.2=600F, and the other is 500*0.8=400F, then the voltage across the first will be 2V and the voltage across the second will be 3V, which will damage it and/or make it explode. <S> The other effect is the leakage current. <S> All capacitors have a leakage current, and on supercapacitors it can be quite large. <S> If one capacitor leaks more than the other, which is pretty much guaranteed to happen, then the voltage on the one that leaks less can go up, possibly going above 2.7V and damaging the capacitor.
<S> You could use resistors to deal with these issues, but the current required may be quite large and unacceptable (usually it should be at least 10X the maximum expected leakage current). <S> You could also implement some active circuit, and there is more info here <A> Depending on charging rate, overall power consumption (is this AC line powered or e.g. solar charged battery), active clamps might be appropriate. <S> Either two independent circuits that measure the voltage across a supercap and turn on an appropriate resistive load across that supercap if it gets close to 2.7V, or a single circuit that measures the difference in supercap voltages and can turn on a load across one or the other.
I don't know what your final design is, but if you can connect a large resistor in parallel to each capacitor, they would help to ensure the voltage stays somewhat even in case anything happens.
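The 600 F / 400 F worst-case example above can be checked numerically: series capacitors carry the same charge, so the smaller capacitance ends up with the larger share of the voltage. A minimal sketch (function name is illustrative):

```python
def series_cap_voltages(c1_f, c2_f, v_total):
    """Voltage split across two series capacitors.

    Series capacitors carry equal charge Q, so Q = C1*V1 = C2*V2
    and V1 + V2 = V_total; the smaller capacitor gets the larger voltage.
    """
    v1 = v_total * c2_f / (c1_f + c2_f)
    return v1, v_total - v1

# 500 F +/-20% worst case from the answer above, charged from 5 V:
v1, v2 = series_cap_voltages(600, 400, 5.0)
print(v1, v2)  # the 400 F part sees 3.0 V, well above its 2.7 V rating
```

This is exactly why balancing resistors or active clamps are needed: without them, ordinary capacitance tolerance alone can push one cell past its rated voltage.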
Will current usage increase with different length of wire? Given all other factors are the same, with only the wire length increased (for example, from 0.5 meters to 50 meters), will the current consumption/usage increase as well? Can anyone explain? <Q> Given all other factors are the same, with only the wire length increased (for example, from 0.5 meters to 50 meters), will the current consumption/usage increase as well? <S> Generally there are two types of load to consider. <S> Dumb loads <S> This includes things like lightbulbs, heaters and motors whose current draw varies with voltage. <S> As the voltage drops the current will fall too. <S> Adding extra cable will cause a voltage drop at the load so the current drawn will fall. <S> Smart loads <S> This includes things like computer / TV power supplies, motor speed controllers, LED lighting PSUs, etc. <S> These differ in that they regulate the output they provide, and when the voltage decreases they increase the current draw from the mains to provide the required output power. <A> Anything is possible, depending on the type of load: current may decrease, increase or stay the same. <S> Let \$R_{line}\$ be the resistance of the line, which increases with length; here are examples for each case: <S> If the load is a constant resistance load (e.g. a good resistor) or a constant voltage load (e.g. a Z-diode in reverse direction or a LED in forward direction (both idealized)), current will decrease if the resistance in the line increases: \$I=\frac{V}{R_{load}+R_{line}}\$ or \$I=\frac{V-V_{diode}}{R_{line}}\$ <S> If the load is a constant power load, e.g.
a DC/DC converter with constant load or an SMPS for a laptop computer that provides constant power over a voltage range (assuming efficiency stays about the same), current will increase if resistance in the line increases (of course only to a certain extent, until the minimum operating voltage at the load is reached): \$I=\frac{P}{V-V_{line\_drop}}=\frac{P}{V-IR_{line}}\$ <S> If the load is a constant current supply, e.g. for a LED array, of course current will stay the same if the line resistance increases (again only to a certain extent, until the minimum operating voltage at the load is reached): \$I=I_{const}\$ <A> Current flow will decrease. <S> Wire has resistance. <S> The longer the wire, the higher the resistance, the less current will flow. <S> If you connect a load right at the battery and measure the voltage at the load then the voltage will drop very little. <S> If you connect the load to the battery with very long wires and measure the voltage at the load, then you will measure a lower voltage. <S> If you simultaneously measure the voltage at the load and at the battery, then you will find the voltage at the battery to be higher than at the load. <S> This is due to the resistance of the wires. <A> The resistance of any conductor is given by the formula: R=rho*(L/A). <S> Rho is a property of the material. <S> L is the length of the conductor and A is the cross-sectional area of the conductor. <S> If everything else is kept constant, increasing the length increases the resistance of the wire. <S> Increased resistance means a decrease in current flow. <S> Ohm's Law!! <A> $$ R_F = \frac {\rho \ell} {A}$$ Resistance of the feeder will increase as length increases: \$R_F \propto \ell\$. <S> Regardless of the circuit (AC/DC, 3-phase, single-phase), material and wire size, voltage at the load will be less than the supply voltage (basic KVL where load and feeder form a series circuit).
<S> If a constant power load is connected and the feeder length increases, the feeder resistance/impedance increases and the current increases, because the voltage dropped across the feeder increases. <S> The impact depends on the relationship between feeder resistance and load. <S> In a lab, a short feeder is irrelevant when powering a 1 kΩ load. <S> But if the feeder is not sized correctly (cross-sectional area), say supplying a 10 hp motor at 250 ft, the load may be affected to an extent that it does not operate correctly. <S> Too much voltage is lost to the feeder. <S> The motor may start at no load, but fail under load. <A> A longer wire has a proportionally larger resistance, so the current usage will not increase. <S> Rather, it would decrease as the total load increases, but for most circuits the resistance of the connected wire is negligible compared to the "load" itself, so you won't see any change in power consumption or drawn current.
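The load cases above can be sketched numerically. This is an illustrative calculation, assuming a copper resistivity of about 1.68×10⁻⁸ Ω·m; the constant-power case solves P = I·(V − I·R_line) for the physically meaningful (smaller) root.

```python
import math

RHO_CU = 1.68e-8  # resistivity of copper, ohm*m (assumed typical value)

def line_resistance(length_m, area_m2, rho=RHO_CU):
    # R = rho * L / A; the return conductor doubles the conductor length
    return rho * (2 * length_m) / area_m2

def current_constant_resistance(v, r_load, r_line):
    # dumb load: series circuit, current falls as line resistance grows
    return v / (r_load + r_line)

def current_constant_power(v, p, r_line):
    # smart load: solve P = I*(V - I*R_line), taking the smaller root
    disc = v * v - 4 * r_line * p
    if disc < 0:
        raise ValueError("too much line drop: load cannot draw its power")
    return (v - math.sqrt(disc)) / (2 * r_line)

# Example: 50 m of 1.5 mm^2 copper feeding a 12 V, 30 W load.
# The ideal (zero-length wire) current would be 30 W / 12 V = 2.5 A.
r = line_resistance(50, 1.5e-6)                          # ~1.12 ohm
i_dumb = current_constant_resistance(12, 12**2 / 30, r)  # falls below 2.5 A
i_smart = current_constant_power(12, 30, r)              # rises above 2.5 A
print(r, i_dumb, i_smart)
```

The same numbers confirm both answers above: the dumb (fixed-resistance) load draws less current through the long wire, while the constant-power load draws more.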
Adding extra cable will cause a voltage drop at the load so the current drawn will increase.
USB female type A that "floats" on a PCB I'm looking for a USB connector that "floats" in line with the PCB. See my very crude drawing to see what I mean: I know these connectors exist for USB micro (like this one) but I am unable to find one for type A female. Do they exist, and what search terms should I use to find this type of connector? <Q> What you are looking for is called a "mid mount" connector. <S> They exist in the type A receptacle but will most likely be the USB 3.0 type instead of the 4-pin USB 2.0 type that you are probably thinking about. <S> Any mid mount connector that you choose to use should have metal shell fingers that sit on pads on the surface of the circuit board that get soldered to secure the connector. <S> Something just mounted by four leads as you made in your picture will not stand up to use of the large Type A plug that would mate with it. <A> Molex has two mid-mount connectors: 482580001 and 482580002. <S> You can find them on some online suppliers by searching for those product codes, or filtering/searching for "mid-mount" or "board cutout" type connectors. <S> Universal Serial Bus (USB) Shielded I/O Receptacle, Type A, Right-Angle, Reverse Type, Mid Mount with Side Flange and Beveled Metal Pins, Gold (Au) Plated, Lead-Free. <S> Image from the Digi-Key listing for the 482580001 connector, description from the Molex site. <A> Here you are: link to the USB connector you need :) <S> How I found it: On the same site on which you found the micro-USB (www.tme.eu), I went to the USB connector catalog (link to the USB connectors). <S> As "type of connector", I selected "USB-A". <S> As "Mechanical Mounting", I selected "on PCBs". <S> As "Electrical mounting", I selected "SMD". <S> As "Connector", I selected "Socket" (for female). <S> The one I found was the 6th result.
I used a search phrase of "mid mount USB Type A connector" and found several suitable styles.
Why do three series connected 1.2 V NiMH batteries read 4.16 V when charged? I have a 7.2 V Dremel. The old battery pack was dead, so I had to replace the six rechargeable batteries inside. The originals were 1.2 V 700 mAh AA; I could only find 1.2 V 1100 mAh NiMH AA batteries. The pack design has three of the batteries grouped together, with nickel strips (shown in blue) connecting them in series: There are two groups of these three-packs inside, but the two battery groups are not connected to each other. So the pack exposes four terminals, two plus and two minus. The Dremel dedicated charger is marked as outputting DC 9 V at 0.20 A and indicates a 3 hr charge time (but does not shut off by itself). I figured that since the original batteries were 700 mAh and I replaced them with ones rated 1100 mAh, I would need to charge them about 1.57 times longer. So, I charged them for 4 hours and 45 minutes. (At the end of this period, both the charger and the pack got really warm.) When I checked the battery pack connections with a multi-meter, it read 4.16 V and 4.18 V, for each of the two groups. My questions: 1) Shouldn’t each battery group read 3.6 V (i.e. 1.2 V per battery x 3 batteries in series)? Why am I seeing 4.16 V and 4.18 V instead? 2) Why is the charger output marked as 9 V? Wouldn’t this voltage damage the batteries? Or is the 9 V divided into two and each group gets 4.5 V? Even so, 4.5 V is still greater than the series-sum of 3.6 V. 3) Is my duration calculation correct? If not, and since the charger is rated at 0.20 A, how long should I be charging the battery pack? EDIT PER winny's SUGGESTION: <Q> No, that’s perfectly normal. <S> See the charge curve for NiMH below. <S> It’s about 1.45 V / cell fully charged. <S> NiMH are tolerant to overvoltage as long as the current is kept low enough. <S> A medium solution is a higher charge current with a thermal cutout. <S> Any fast charger would need to sense zero delta V or negative delta V for charge termination.
<S> EDIT: It’s actually such a low “overvoltage” (9 − 6 × 1.45 = 0.3 V) that I’m suspecting the charger is delta-V sensing, i.e. a “smart charger”. <S> At least 1.1 Ah / 0.2 A = 5.5 hours, since they are charged in series, to reach 80 % SOC. <S> Probably +50 % more for 100 % SOC. <A> Complements other answers: 7.2 V / 6 cells = 1.2 V/cell, which is the NOMINAL loaded voltage of NiMH. <S> So they are operated in series - unless they are lying. <S> 9 V / 6 = 1.5 V/cell, which is slightly on the high side for fully charged, <S> and they MAY keep on charging indefinitely, which will risk destroying them. <S> 'Once upon a time' NiMH would tolerate C/10 trickle charge - which here = 1100 mAh / 10 = 110 mA or less. <S> Cells over about 1800 mAh removed the recombining chemicals and would accept NO trickle charge. <S> Whether low mAh cells still retain the O2+H2 recombining material is TBD. <S> NiMH charged at <= C/10 will automatically stop accepting current if V/cell is <= about 1.45 V, say 1.4 V to be safe. <S> 1.4 × 6 = 8.4 V; 1.45 × 6 = 8.7 V. <S> So if their claimed 9 V was a little low it could do quite a good job, BUT the cells getting hot is a sure sign of end of charge. <S> IF they stay hot then the battery pack is being roasted. <S> Reducing the charge voltage with, say, a series diode MAY be enough to make a useful difference. <S> Or ALWAYS monitor the temperature of the cells and stop when they get hot. <S> NiMH charge termination methods: negative voltage inflection - see winny's graph; absolute temperature; increased rate of temperature rise; threshold voltage (varies with charge current); timed (must start empty). <S> I long ago built solar portable lights using NiMH cells. <S> In an environment where charge current is variable and intermittent and the cells are solar heated, the ONLY method that works is one based on the cell's terminal voltage.
<A> 4.18 volts for three cells is about 1.4 volts per cell, which is reasonable for a fully-charged NiMH cell straight off the charger. <S> Since the battery pack has separate connections for each group of three cells, the charger can connect the two groups in series for charging (and the tool may connect the groups in series or parallel for use). <A> 1) No, because a battery is only nominally 1.2V per cell. <S> They are empty at about 0.9V and the charging voltage can be as high as 1.4V to 1.6V. <S> 2) 9V could be the no-load float output, or maybe it charges the two stacks in series? <S> 3) Approximately yes. <S> One sign of batteries being full is when they start rapidly heating up. <S> But it is impossible to say if the charger is simply resistor limited, or if it is a constant current charger or something more clever.
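The charge-time arithmetic in winny's answer can be written out explicitly. This is a rough sketch only: the 1.5× factor for going from ~80 % to ~100 % state of charge is the estimate quoted above, not a precise chemistry model.

```python
def charge_time_hours(capacity_ah, charge_current_a, extra_factor=1.0):
    """Rough NiMH charge-time estimate: t = C / I, scaled by a factor
    for the reduced charge acceptance near full charge.

    Cells in series all see the same current, so pack capacity equals
    cell capacity - which is why the 1100 mAh figure is used directly.
    """
    return capacity_ah / charge_current_a * extra_factor

t_80 = charge_time_hours(1.1, 0.2)        # 5.5 h for roughly 80% SOC
t_full = charge_time_hours(1.1, 0.2, 1.5) # ~8.25 h for ~100% SOC
print(t_80, t_full)
```

Compared with the asker's 4 h 45 min, this suggests the pack was nowhere near overcharged on time alone; the warmth at the end is the more telling end-of-charge signal.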
Since you did measure that the charger bridged the two 3.6 V packs in series, forming a 6S1P configuration during charging, the 9 V and 0.2 A makes perfect sense. This is commonly referred to as slow charging where the cells stay hot when fully charged until you remove them.
Why am I getting an error when setting the TRIS registers with PIC? I am new to PIC programming, and I am trying to blink an LED using the PIC10F206. It has 4 I/O pins. I understand I must declare them as inputs or outputs, but the IDE I am using (MPLAB) keeps giving me an error when I declare the TRIS register. Attached is a picture. The datasheet lists the TRIS GPIO register as having the name "TRISGPIO" but the compiler throws this error: "Unable to resolve identifier TRISGPIO". Am I just getting the name of the TRIS register wrong? Attached is a picture of the TRIS GPIO register from the datasheet. [EDIT] Looks like it is a header file problem. When I highlight "xc.h", right click, and go to navigate -> go to definition, the xc.h file is pulled up but the code does not recognize my pic.h file, which is where the header file for my chip is. See image for the xc.h screenshot. I tried pointing to the directory in the compiler options but no luck. Any thoughts? Below is a picture of my compiler options <Q> You are using a baseline PIC, so you can refer to the XC8 compiler User's Guide section 5.3.10: <S> 5.3.10 Baseline PIC MCU Special Instructions <S> The Baseline devices have some registers which are not in the normal SFR space and cannot be accessed using an ordinary file instruction. <S> These are the OPTION and TRIS registers. <S> Both registers are write-only and cannot be used in expressions that read their value. <S> They can only be accessed using special instructions which the compiler will use automatically. <S> The definition of the variables that map to these registers makes use of the __control qualifier. <S> This qualifier informs the compiler that the registers are outside of the normal address space and that a different access method is required. <S> You should not use this qualifier for any other registers. <S> When you write to either of these SFR variables, the compiler will use the appropriate instruction to load the value.
<S> So, for example, to load the TRIS register, use the following code: TRIS = 0xFF; <S> You will find the following definition in the relevant .h file for this chip: // Register: TRISGPIO / #define TRISGPIO TRISGPIO / extern volatile __control unsigned char TRISGPIO __at(0x006); <S> Which indeed contains the aforementioned __control qualifier. <S> However it also defines TRISGPIO! <S> And I tried a simple program and it does work if xc.h (which loads PIC10F206.h) is included, for either TRIS or TRISGPIO (MPLAB-X, XC8 V2.10). <S> Edit: As @brhans mentions, make sure you've properly configured MPLAB-X for that exact chip. <S> Your tree should look something like this: If it's not properly configured it may be including the wrong file or nothing; I can't say, as I've run into that issue myself. <A> Try putting a space between the "TRIS" and the "GPIO" when declaring it. <S> TRIS is an instruction that expects a register name (address) as its argument. <S> GPIO is a valid register name for small (8-pin) PICs. <S> [Edit] I've just re-read the question. <S> I'm not competent in C, but I can tell you what the instruction is supposed to look like in assembler: movlw 0x00 / TRIS GPIO <S> This may help you get where you need to be. <A> You can create a set_tris() in ASM just after the last #include of your source file, calling it later from your code in main(): int set_tris(void) { #asm / movlw 0x00 ; value to load / tris 0x06 ; address of the GPIO TRIS register / #endasm / return 0; } <A> So it turns out it was a problem with #include. <S> For some reason MPLAB couldn't figure out the following code in the xc.h file. <S> I navigated to the specific header file containing my PIC's declarations, and included it manually into the main source file. <S> Works now.
You also have error markers beside your GPIO names, which implies your .h file is not being properly included or is missing something.
Total current equation in P-N junction So okay, I get why there is a negative sign in the equation for diffusion current of holes, since the gradient is actually negative if we put our coordinate axis in the bottom left corner, for example in the 2nd picture where x points to the right. But that then makes Jp overall positive; there is no negative term in the equation. How is that possible if you actually know that in a no-bias situation the total current is equal to zero? The same applies to Jn. The only possible way I could understand this is by substituting E with a negative sign since it points to the left in the 2nd picture. Any clarifications? <Q> I think it's all in terms of conventional current flow, even when it's electrons. <S> That's what my book seems to say. <S> Modern Semiconductor Devices for Integrated Circuits - Chenming C. Hu <A> In the no-bias condition, considering the figure that you posted, holes will diffuse towards the right side since their concentration is lower in the depleted region. <S> At the same time, holes in the depleted region will move to the left side following the field. <S> This means that we have fluxes in different directions. <S> The same thing applies for the electrons. <S> It's important to understand that the field is only in the depletion region and diffusion happens at the boundary between the depleted region and the n/p doped region <S> (the depleted region is doped too, of course). <A> To make sure we are on the same page, all of these various \$J\$ terms are conventional current density. <S> An electron moving right constitutes a \$J_n\$ to the left. <S> Now, let's get the signs of all these current components correct. <S> My coordinate system is positive to the right. <S> \$J_{p(Drift)}\$ is a negative quantity. <S> Holes drift left. <S> \$J_{p(Diff)}\$ is a positive quantity. <S> Holes diffuse right. <S> \$J_{n(Drift)}\$ is a negative quantity. <S> Electrons drift right. <S> \$J_{n(Diff)}\$ is a positive quantity. <S> Electrons diffuse left.
<S> Yes, the electric field is negative in the depletion region. <S> This is how I arrived at my drift currents above. <S> "I get why there is a negative sign in the equation for diffusion current of holes since the gradient is actually negative" - that is not why the negative sign is there. <S> The negative sign is there because it is part of Fick's law of diffusion: \$J=-D\nabla\psi\$, where \$D\$ is diffusivity and \$\psi\$ is concentration. <S> That is, things will diffuse in the opposite direction of the concentration gradient, moving from high concentration to low concentration. <S> In the equation for \$J_{n(Diff)}\$ the sign is reversed. <S> This is to account for the negative charge of electrons.
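The sign bookkeeping in the answer above can be summarized with the standard one-dimensional drift-diffusion expressions (a sketch in common textbook notation, with x positive to the right, matching the answer's coordinate choice):

```latex
% Drift-diffusion currents for holes and electrons:
J_p = q\,\mu_p\,p\,E \;-\; q\,D_p\,\frac{dp}{dx}, \qquad
J_n = q\,\mu_n\,n\,E \;+\; q\,D_n\,\frac{dn}{dx}

% At zero bias each carrier's current vanishes separately:
J_{p(\mathrm{Drift})} + J_{p(\mathrm{Diff})} = 0, \qquad
J_{n(\mathrm{Drift})} + J_{n(\mathrm{Diff})} = 0
```

With holes concentrated on the left, \$dp/dx < 0\$ makes the hole diffusion term positive, while the built-in field \$E < 0\$ makes the hole drift term equally negative - which is exactly the cancellation the question is asking about.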
The only possible way I could understand this is by substituting E with a negative sign since it points to the left in the 2nd picture.
Hole in PCB due to corrosive reaction? I have an Xbox One controller where the X button did not work anymore. I disassembled it today to find the reason for this. It turns out there is a hole in the motherboard near the X button: There was this sticker right above the hole (it seems that "XJY" is an abbreviation of "Shenzhen Xinjiaye Electronics Technology Co., Ltd.", which is a Chinese company that builds such motherboards): Now to my question: is it possible that the adhesive of the sticker in combination with the flowing electricity on the motherboard has caused a corrosive reaction, resulting in the hole? I think there is no mechanical cause, since the wires in the hole are still largely intact and the "missing" part of the motherboard was not inside the controller. <Q> It's unlikely. <S> The hole is very nearly circular, and has smooth edges, indicating the hole was deliberately drilled. <S> Somebody has added thin wires to try to repair the damage, indicating the defect was found and "repaired" before the product left the factory. <S> Most likely the hole is meant for mounting the board into the housing. <S> The damage to the surface layers is a fabrication defect that they tried to fix rather than scrapping the board. <A> Those wires you see aren't the bonded copper traces, which you think are left behind from the corrosion. <S> They are wires added after the fact to correct the broken board. <S> Considering that holes are normally drilled after most of the board is done in a typical PCB fabrication, and a one-off problem would have been pitched, this is likely one of MANY boards that look exactly like this. <S> The cost benefit of ditching these and doing a second run probably exceeded the cost of having these reworked manually. <S> Material is more expensive than labor over there. <S> The source issue was likely fixed before the next batch, but retooling a manufacturing process is also expensive, so who knows.
<S> As to how that happened - speculation, but: the drill was too big for the material, OR the wrong bit was mounted, OR the speed/torque was set wrong so it chewed up the PCB material, OR the depth was off so the collet holding the drill bit crushed the board, OR questionable material. <S> Material is cheap, labor is cheaper, but that leads to quality issues. <A> Corrosion does not make perfectly round holes in PCBs... <S> The edges of the hole kinda look like a crater; soldermask and tracks have been ripped off like someone hacked at it with an X-Acto knife. <S> This looks like a very "ghetto" botched repair job. <S> Now, what does the virgin PCB look like? <S> A bit of internet search and... <S> There doesn't seem to be anything in the area of the hole that would blow up and then require someone to fix the damage, like a tantalum capacitor. <S> Also, an important detail is that there is no hole on the original board. <S> So, where does the hole come from? <S> Now, the edges of the hole are really destroyed. <S> This wasn't done by drilling. <S> It looks like it has exploded. <S> I'm going to go with the hypothesis that this controller was shot with a .22LR bullet from the back of the board (referring to your photo), then refurbished. <S> Please check if the diameter of the hole is the same as a .22 bullet, I'd really like to know! <A> It is a reject board that had a hole punched in it before scrapping to prevent reuse. <S> However some company (probably in the electronics recycling industry) figured that since the boards were basically free, the cost of the bare minimum repairs would turn a reasonable profit in the off-brand controller market. <S> This is why some manufacturers have gone the extra step of punching/drilling holes or making a quick saw cut into the reject parts, to hopefully prevent another company from offering a comparable product at a lower price.
<S> They will already have lost the sale by the time your X button stops working and you realize you saved a few bucks to literally buy remanufactured garbage. <A> As others have mentioned - this is not corrosion damage, and those wires were put on after the fact to attempt to hide the mistake. <S> As to how the hole actually got there... <S> You can see that the board was assembled before the hole was created. <S> If it was broken during manufacturing of the board, the solder wouldn't be nice and perfect like that. <S> Adding those solder points is usually one of the last steps in manufacturing. <S> In fact, it should have been rejected in QA and never made it to solder flow. <S> My guess is someone messed up and broke the board during final assembly, and this was their attempt to correct the issue by hand. <S> Others have pointed out that this hole isn't on a "typical" Xbox One controller board, but you didn't provide information as to the brand to properly verify how it should look. <S> Did you buy this secondhand? <S> Though generally if someone does that it's just kind of left there, since no one will see it.
Edit: As pointed out in comments, another possibility is that the hole was drilled deliberately to remove some incorrectly designed wires, and then the added wires you see were added to replace the connections broken by the drill. Check the edges of the hole - the solder points are all intact along with some of the underlying board. You can tell by the blobs of solder that were manually added, like the blob above the carbon trace that is not perfect (unlike what you see from wave soldering), or the solder added to the trace above that (northwest of your red X), which you would never see on an etched board. If so, it is also possible that someone took it apart to look at it, messed it up and then tried to cover it up. There are several copper wires soldered across the hole... This happens more often than most people realize.
Very high precision zero crossing detection I have a setup that is equivalent to a guitar string with pickup, with signals of similar frequency range. I have a simple zero crossing detector with a comparator and then just count the interval between low-to-high transitions using a microcontroller. I got to wondering how accurately I could measure the frequency. If I wanted to measure it to several decimal places with certainty, what changes should I make? Is there a good resource on high precision zero crossing? The output voltage from the coil is low (~3mV pp), so should I choose a comparator with lower input voltage offset? Or would it make more sense to add a preamp stage? How about propagation delay - if a comparator has delay of x for one transition, will it be x for every transition? <Q> You'll likely want to measure the total of many zero crossings (and divide by 'many' before taking the reciprocal).. or average many measurements.. <S> otherwise noise in the signal will unduly affect your measurement, and you don't need the answer that quickly. <S> You can measure easily to a couple hundred nanoseconds with a typical 8-bit micro (so 200ns in a 5kHz note would give you 3 digits) but that doesn't translate necessarily into results anywhere near that stable or accurate. <A> It would make sense to have an amplification (op-amp) and then a comparator with a positive and negative supply. <S> Concerning the digital part, it has a lot to do with the speed/type of your MCU and your code. <S> There are many ways to improve accuracy at this level but it depends a lot on the MCU. <S> Now if you want to go hardcore, you can also sample the amplified output of the coil with an A/D and perform a Fourier Transform to get the main frequency and also all the sub-frequencies. <A> if a comparator has delay of x for one transition, will it be x for every transition? <S> Delay may vary depending on the amount of 'overdrive' <S> (voltage beyond the comparison threshold). 
<S> Here's a graph showing the response of the LM397 to various overdrives: <S> In this case the difference between 5mV and 50mV is 300ns, which corresponds to 0.03% of a 1kHz cycle. <S> If the signal amplitude is constant then the delay doesn't matter, because it will be consistent from one cycle to the next. <S> Delay is generally smaller with greater overdrive, so the effect of amplitude variation on delay can be minimized by making the signal much larger than necessary for detection. <S> Your 3mV signal is too low for reliable detection by most comparators, so you should amplify it to at least 100mV. <S> If the amplitude varies greatly over time (e.g. after plucking a guitar string) then you might have to use an AGC (Automatic Gain Control) circuit to maintain relatively constant amplitude at the comparator, though if the signal is clean and pure just applying more amplification may be enough. <S> To prevent noise and harmonics from causing 'false' zero crossings you should apply some hysteresis to the comparator, which makes it ignore small variations in the signal. <S> Bandpass filtering the signal before detection will also help to remove noise. <S> After detection you have to decide how to get the 'several decimal places' of precision that you want. <S> One option is to count cycles over a fixed 'gate' time; however at low audio frequencies the required 'gate' time is very long. <S> Measuring the time between cycles is much faster, but makes the result sensitive to variation between cycles. <S> The simplest solution is to measure the time of several cycles and then divide by the number of cycles, which averages them. <S> If your lowest frequency is e.g. 50Hz then you can measure and average the periods of at least 50 cycles in 1 second. <S> At higher frequencies the resolution of the timer may become significant. <S> If so then just switch to pulse counting mode. <S> At 1kHz you have 3 digits past the decimal point with a 1 second 'gate' time.
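The period-averaging approach recommended above can be sketched in a few lines. This is illustrative host-side code, not firmware; on a microcontroller the timestamps would come from timer-capture interrupts on each low-to-high comparator transition.

```python
def frequency_from_crossings(timestamps_s):
    """Estimate frequency from successive low-to-high zero-crossing
    timestamps (in seconds) by averaging over all measured periods.

    Averaging N periods reduces the effect of per-cycle timing jitter
    compared with timing any single period.
    """
    if len(timestamps_s) < 2:
        raise ValueError("need at least two crossings")
    n_periods = len(timestamps_s) - 1
    return n_periods / (timestamps_s[-1] - timestamps_s[0])

# 51 crossings of an ideal 1 kHz signal, one every 1 ms:
f = frequency_from_crossings([i * 0.001 for i in range(51)])
print(f)
```

Note the division uses only the first and last timestamps: summing individual periods would add nothing, since the intermediate terms cancel, and the end-to-end span already averages all 50 periods.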
The simplest way is to just count pulses over a fixed time period long enough to get the desired precision.
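The period-averaging approach described above can be sketched in Python. This is a hypothetical offline version that assumes the signal has already been sampled; the function and variable names are illustrative, and linear interpolation between samples stands in for the comparator/timer hardware:

```python
import math

def measure_frequency(samples, sample_rate):
    """Estimate frequency by timing many rising zero crossings and
    dividing the total elapsed time by the number of whole cycles."""
    crossings = []
    for i in range(1, len(samples)):
        a, b = samples[i - 1], samples[i]
        if a < 0.0 <= b:  # rising zero crossing between samples i-1 and i
            # Linear interpolation gives sub-sample timing resolution.
            t = (i - 1 + (-a) / (b - a)) / sample_rate
            crossings.append(t)
    if len(crossings) < 2:
        raise ValueError("need at least two rising crossings")
    # Time from first to last crossing spans (N - 1) whole periods.
    period = (crossings[-1] - crossings[0]) / (len(crossings) - 1)
    return 1.0 / period

# Synthetic 440 Hz tone sampled at 48 kHz for one second.
fs = 48_000
wave = [math.sin(2 * math.pi * 440 * n / fs) for n in range(fs)]
print(measure_frequency(wave, fs))
```

Averaging over ~440 cycles here is what buys the extra decimal places: the per-crossing timing jitter is divided by the number of cycles measured, exactly as the answer describes.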
Electrical connection during car jump start I have seen the procedure to jump start a car when the battery is dead. What I don't understand is why one end of the cable is connected to the metal chassis of the car which has the dead battery. Why don't we connect the positive and negative terminals of the working battery to the positive and negative terminals of the dead battery and start the car? Can someone explain why we make this connection and what's wrong with joining the two batteries terminal-to-terminal? How does the battery get charged during the correct procedure? <Q> The negative terminal of a car battery is connected to the car's chassis and engine block, so connecting the negative jumper cable to the chassis does (eventually) make an electrical connection to the negative terminal of the battery. <S> The lead-acid batteries used in cars can generate hydrogen and oxygen gas while being charged. <S> If you make a spark in the presence of these gases they may explode. <A> Normally, the last of the 4 clamps is connected away from the battery. <S> That is because there could be a spark, and one or both batteries do emit flammable hydrogen gas. <S> This way, that spark is away from the battery. <S> I have seen a battery explode; messy and dangerous. <A> In addition to what others have mentioned, there's also less resistance if you connect it to the frame. <S> Everything is negative-ground, including the starter. <S> This means that during a jump start the current flows from the positive terminal, through the starter into the car frame, and then back to the negative terminal. <S> The extra resistance of going via the dead battery's negative cable might be small, but it is there nonetheless. <S> This leads to higher voltage drops and more heating.
It is recommended that you make the last connection of jumper cables some distance from the battery to reduce the danger of explosion as making that last connection will very often make a spark. If you put the negative jumper onto the 'dead negative terminal' you add some resistance to the path.
How to calculate the size of ground plane we need to use in a PCB (printed circuit board) design I searched about ground planes and found out that a ground plane is a good solution to reduce noise and get a seamless ground. How do I calculate the size of the ground plane for a PCB? When and where don't we need a ground plane (i.e., where could a ground plane actually be harmful)? <Q> There are two aspects of a ground plane: its performance, and its appearance. <S> The first is important. <S> The second is not. <S> Unfortunately, many people starting out concentrate on the second, to the detriment of the first. <S> A ground plane should be as big as it needs to be. <S> That is, it should be present at or near* every connector, every IC, every supply decoupling capacitor, and every signal track. <S> A ground pour is not necessarily a ground plane. <S> It can have the appearance of a ground plane. <S> It's easy to do a pour as the last step of laying out a board, but it's difficult to check whether it does in fact connect all the points that should be connected. <S> There are two ways to make a good ground plane. <S> One is to lay out a ground track as you make the board, so that you can see that it follows all the signals and visits every IC directly, before you confuse yourself with the pour. <S> If you couldn't route the track, then the pour will not have made the necessary connection either! <S> 'Letting the pour take care of that connection' is asking for trouble. <S> The other is to dedicate a ground plane layer, and then don't cut it up. <S> Too often we via a track onto the ground plane layer 'just for a few mm' to ease tracking problems, and then another, and another. <S> Done once or twice, with a short track, it's OK. <S> Done excessively, it cuts the ground plane to shreds, and it can't do its job. <S> You can via across breaks on the other side, but it's hard work to make sure you've caught them all; best not to cut in the first place.
<S> There are a few specific places where you should not have a ground plane. <S> Near to a chip antenna, where the data sheet gives you a detailed footprint. <S> Under the inverting input of a fast high-impedance op-amp, where excess capacitance can hurt stability. <S> Near to pins carrying dangerous voltage; opto-coupler data sheets will often tell you what distances are required for what voltages. <S> *Near. <S> For low frequency boards, 'near' can be quite big without hurting performance. <S> For RF boards and logic boards with fast edges, 'near' usually means directly underneath. <A> Apart from specific cases (high voltage, where you need as big an insulating gap as possible between your signal and ground; controlled impedance, i.e. RF and high speed, where the plane might actually help you if done correctly; or very low noise analog circuitry), a complete and uniform ground plane below your signal layer should not do any harm. <S> There might be other scenarios where having power planes could harm your signals; please comment below, I would love to know! <S> Also, as a matter of good practice, one should try to remove as little copper as possible. <S> Thus on your top and bottom layers you will usually do ground pours (fill in the empty space with gnd) first, to help you route the gnd (on all your decoupling caps for instance) but also to prevent massive amounts of copper from being dissolved during the manufacturing process. <S> If your design can be routed on two layers, and you can live with the noise from less uniform gnd/power planes, then do so. <S> You should always challenge yourself to use as few layers as possible, just for the sake of being more cost-efficient with your designs. <A> {Summary: this answer illustrates the method of DESIGNING a ground plane to achieve 60dB isolation between regions.} <S> Ground planes are excellent for attracting electric field flux. <S> Instead of your signal trace having the burden of handling all the displacement current, the ground plane provides the return path for most of the displacement current.
<S> Thus high dV/dt situations, such as electric-train speed controllers, should use ground planes. <S> And in low noise situations, where Znode * C_couple_aggressor_victim * dV/dt is larger than your trash budget (you do have a trash budget, right?), a ground plane is the first step in improving signal-to-noise ratio (ENOB). <S> Audio people tell you to use star grounds, not ground planes. <S> Yet people at diyAudio.com use gnd planes in the "NJFET RIAA preamplifier" discussion thread, with the various components of a low-noise RIAA stage (Rnoise approximately 50 ohms, thus 0.8 nanovolts/rtHz) over a ground plane; with the RIAA corner at 50Hz constricting the bandwidth, the total integrated random noise is about 0.8nV/rtHz * sqrt(50 * PI/2) ≈ 0.8nV * 9 ≈ 7 nanovolts RMS noise. <S> And a ground plane (inside a steel or thick aluminum case; the power supply was in a separate case, 1 or 2 meters away) was used in achieving that 7 nanovolt RMS floor. <S> For an audio RIAA circuit. <S> Here is the floor plan of the 7 nanovolt RIAA low-noise audio PCB GND plane. <S> The key to audio performance is: minimize crosstalk between the Input (Left) and the 60dB-stronger Output (Right).
There are only a few instances where a ground plane should not be, and in almost all cases, it's where a component's data sheet tells you not to.
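The "trash budget" test and the integrated-noise estimate from the ground-plane answer above reduce to a few lines of arithmetic. In this sketch the node impedance, coupling capacitance and edge rate are illustrative assumptions (not from the original post); the 0.8nV/rtHz spot noise and 50Hz corner are the figures actually quoted:

```python
import math

def coupled_noise(z_node_ohms, c_couple_farads, dv_dt_volts_per_s):
    """Crosstalk injected into a victim node by capacitive coupling:
    V_noise ≈ Z_node * C_couple * dV/dt (the answer's trash-budget check)."""
    return z_node_ohms * c_couple_farads * dv_dt_volts_per_s

# Illustrative: a 10 kΩ victim node, 0.1 pF of stray coupling,
# and an aggressor slewing 5 V in 10 ns.
v_noise = coupled_noise(10e3, 0.1e-12, 5 / 10e-9)
print(v_noise)  # 0.5 V - far above any sane trash budget

# Integrated noise of the RIAA stage: 0.8 nV/√Hz spot noise with a
# first-order 50 Hz corner, giving a noise bandwidth of 50 * pi/2 Hz.
vn_total = 0.8e-9 * math.sqrt(50 * math.pi / 2)
print(vn_total)  # ≈ 7e-9 V RMS, matching the answer's 7 nV figure
```

The first number shows why a ground plane (which shrinks C_couple by intercepting the field lines) is the first lever to pull when the budget is blown.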
Derating ampacity of a wire if it is stranded? Do I need to derate a wire if it is stranded? Or will a #12 wire have the same ampacity whether or not it is stranded? Is this affected at all by AC versus DC? <Q> AC vs DC also has no impact, unless your "stranded" wire is actually litz wire. <S> As litz wire is very expensive and has to be specially ordered, it almost certainly is not - you would know if you had it. <A> The wire manufacturer's specifications will give you the current carrying capability, be it stranded wire or solid core. <S> You do not have to derate the values in that specification - the manufacturer is giving you finished values. <S> You just have to interpret them correctly. <S> Ratings are usually given for both the max. current and the max. voltage, for AC and DC. <S> The max. current may be specified at different temperature rises, as higher current will dissipate more power in the cable because of its resistance and heat the wire. <S> The max. voltage comes from the breakdown voltage of the wire insulation, if it is insulated. <A> Yes, the current capability of a wire will decrease as the number of strands increases. <S> This short article on AlphaWire's website gives a good explanation: http://www.alphawire.com/en/Company/Blog/2015/June/Helpful%20Tips%20for%20Cable%20Ratings <A> The stranded wire is simply more flexible and suited for uses where it will be moved around. <S> If the stranded cable were litz wire, it would have insulated strands woven in a special way, making it able to carry more high-frequency AC current by lowering the skin effect. <S> But for DC there will be no difference. <A> Wire gauge is defined by conductor diameter. <S> Ampacity is defined as the max. current of a wire at a specified temperature rating such as 60, 75 or 90°C, which depends on the electrical/thermal insulation of the wire. <S> The max.
<S> current rating, however, is usually quoted at 30°C ambient. <S> The thermal resistance rises sharply with the number of strands in the core, unlike the electrical resistance, so ampacity is maximum for a single core. <S> e.g. for AWG12 wire with 1kV PVC insulation (D = 2.1mm / 0.081in, area 3.3mm2, R = 5.2 ohm/km copper), the ampacity by number of stranded cores is: 1 core: 34A; 3 cores: 20A; 4-6 cores: 16A; 7-24 cores: 14A; 25-42 cores: 12A; >=43 cores: 10A. <S> Ambient temperature correction factors: 31-40°C: 0.82; 41-45°C: 0.71; 45-50°C: 0.58. <S> More graphs and details for AWG12: https://www.engineeringtoolbox.com/amps-wire-gauge-d_730.html
It will be very close to an identical current rating for a stranded and a solid core cable with the same wire gauge. Gauge is defined by cross-sectional area, not outside dimension, so the stranded #12 wire has the same per-length resistance as the solid #12 wire. Ampacity is more complicated (it depends on the type of insulation, what other wires are nearby, and other details), but whether it's solid or stranded again does not matter (or at least not significantly) for this. By the term 'derate', you must mean to reduce a manufacturer's specified ratings to compensate for some aspect of its application.
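As a sketch only, the strand-count and temperature figures quoted above for AWG12 with 1kV PVC insulation can be combined into a small derating helper; the function names and the range handling are mine, the numbers are the ones from the answer's table:

```python
def base_ampacity(strands):
    """Ampacity in amps for AWG12, 1 kV PVC insulation, 30 °C ambient,
    by strand count (figures quoted in the answer)."""
    for upper_bound, amps in [(1, 34), (3, 20), (6, 16), (24, 14), (42, 12)]:
        if strands <= upper_bound:
            return amps
    return 10  # >= 43 strands

def temp_correction(ambient_c):
    """Correction factor for ambient above the 30 °C rating point."""
    if ambient_c <= 30:
        return 1.0
    if ambient_c <= 40:
        return 0.82
    if ambient_c <= 45:
        return 0.71
    if ambient_c <= 50:
        return 0.58
    raise ValueError("ambient above the quoted table")

def derated_ampacity(strands, ambient_c):
    return base_ampacity(strands) * temp_correction(ambient_c)

# e.g. a 7-strand AWG12 conductor at 35 °C ambient: 14 A * 0.82
print(derated_ampacity(7, 35))
```

Note this models the thermal-resistance effect that this particular answer describes; the other answers in the thread argue the solid/stranded difference is negligible for ordinary building wire, so treat these figures as one manufacturer's table, not a universal rule.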
Why are boost/buck converters never fully integrated/all-in-one? Moving from small Arduino applications to higher-power applications for work, I've had to switch from using simple linear regulators to switching boost/buck converters any time a voltage different from what is available is required. This has been much more of a learning curve than I anticipated; things like switching frequency, inductance and capacitance values all need to be taken into account, and there are almost always several external components (for example, the inductor) required for the converter to function. My question is, why are these additional components almost always external? I'm not complaining, I've been learning a fair bit from the switch and it's not too bad a curve, I'm just curious as to why there seem to be no plug-and-play ICs (at least not that I've found for 24V, 3A output, correct me if I'm wrong) like there are for smaller linear regulators. My guess would be too much heat generated? <Q> It has to do with the limitations of silicon. <S> You can easily make lots of transistors in silicon, and connect them together. <S> Resistors and capacitors, particularly ones with any precision, are harder, and you can't really make them handle any appreciable energy. <S> It can be done, but they take up lots of space (I'm not sure how things stand now, but at one point the typical internal compensation cap for a unity-gain-stable op-amp took up most of the silicon). <S> Inductors are right out. <S> You need low-resistance windings, you need a magnetic core, you need size. <S> Which is why your switching supply has a chip that does everything that can reasonably be done in silicon, surrounded by (relatively) big capacitors and a (relatively) big coil. <A> There are also compact µModules available, for example from Analog Devices (formerly Linear Technology).
<S> These integrate the controller, inductor and capacitors inside one package. <A> A fully integrated DC-DC converter is often called a "DC-DC module" (examples). <S> DC-DC modules are quite common on sale. <S> If you look at the KC705 (Kintex-7 evaluation kit) you will see four fully integrated DC-DC modules on the right side. <S> Notably, many manufacturers of capacitors make their own DC-DC modules with components that are not sold separately.
The reasons for using such modules are: lack of resources for SMPS development; high integration for low-PCB-area applications; high resistance to mechanical shock.
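To illustrate the design work the question mentions (choosing switching frequency, inductance and capacitance), here is a minimal sketch of the standard continuous-conduction buck equations. The 48V input, 500kHz switching frequency and 30% ripple-current figures are assumptions for illustration; only the 24V / 3A output point comes from the question:

```python
def buck_inductor(v_in, v_out, f_sw, i_ripple):
    """Standard CCM buck inductor equation:
    L = V_out * (V_in - V_out) / (V_in * f_sw * ΔI_L)."""
    return v_out * (v_in - v_out) / (v_in * f_sw * i_ripple)

def buck_output_cap(i_ripple, f_sw, v_ripple):
    """Output capacitance for a target voltage ripple, ignoring ESR:
    C = ΔI_L / (8 * f_sw * ΔV)."""
    return i_ripple / (8 * f_sw * v_ripple)

# 48 V in, 24 V / 3 A out, 500 kHz, ripple current = 30% of load,
# 50 mV output ripple target.
di = 0.3 * 3.0
L_henries = buck_inductor(48, 24, 500e3, di)
C_farads = buck_output_cap(di, 500e3, 0.05)
print(L_henries, C_farads)  # ~27 µH and ~4.5 µF
```

The point of the sketch is the answer's point too: the ~27 µH inductor this arithmetic calls for simply cannot be fabricated on the die, which is why it is always an external (or co-packaged, in a µModule) component.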
What is the purpose of R1 in this circuit? By pressing the stop-button once, the LED turns off and stays off. But if you remove R1, the LED only stays off while you are pressing the stop-button. Why does R1 affect this? <Q> T1 and T2 are connected in a classic 'thyristor' bistable latch. <S> If T1 is on, it biases T2 on, which holds T1 on. <S> If T1 is off, it doesn't bias T2, which stays off, which doesn't bias T1, which stays off. <S> That's in theory, with perfect transistors. <S> In practice, all transistors have some degree of leakage current. <S> This is amplified by the other transistor, and an off circuit may turn on if there's sufficient gain. <S> Without R1, any current flowing through R3 is amplified by T2. <S> R1 sets a threshold current, below which T2 will stay off. <S> That's about 700mV/10k = 70uA, which is waaaay above any likely leakage through T1. <S> However, there will also be leakage through T2, which will be amplified by T1. <S> Assuming a reasonable hFE, say <= 300, the max permissible T2 leakage now becomes about 230nA, which is still easily met by good quality transistors at room temperature. <S> The LED does not prevent T2 turning on, as its threshold voltage for significant conduction will be above 700mV. <A> This is a bit of a guess, but I suspect it's due to leakage current through T1. <S> Suppose R1 is removed, and the circuit is in the off state. <S> If there is a leakage current into the collector of T1, that will be sourced mainly through the b-e junction of T2, since the LED has a higher Vf than the T2 b-e junction. <S> Even if this leakage is only a few nanoamps, it is amplified by T2, causing base current at T1, which increases the T1 collector current, and so on in a positive-feedback loop until the circuit returns to the on state. <S> Edit: On some further thought, the initial leakage current might also be through the T2 c-b junction, which has a specified 15 nA leakage with \$V_{CB}\$ of -30 V.
<S> In any case, the positive feedback mechanism that leads to the circuit turning back on is the same. <S> With R1 present, the collector current of T1 can be sourced through this resistor rather than through the T2 base, so the feedback loop is broken and the circuit remains off when you want it off. <A> The resulting voltage from I(leak)*R1 must be low, <150mV across Vbe, to do this. <S> Even if the transistors had almost no Early-effect leakage current Ice, say 0.1pA, you would still need R1, even if it were 100M. <S> All it takes is for the base voltage to rise to the level needed for the expected LED collector current, in this case ~670mV at 20mA. <S> This is not a precise derivation, but close enough to prove the concept; it assumes no leakage current, which would require a smaller resistor. <S> \$I_{ce} \cdot R_{ce}/h_{FE1} = 0.67V\$, e.g. 1e-13 * 1e15 / 100 = 1V (Rce = 100G, Ic = 1e-11 = 10pA). <S> The resistor R1 has the purpose of reducing Vbe, which controls Ice above the internal leakage effect. <S> You can look up the Early effect and see how it controls this leakage current. <S> Since the two complementary transistors form a positive feedback loop (similar to an SCR), unless you have an R1 across the Vbe of EITHER Q1 or Q2 (or both), the LED will always conduct current, according to hFE*Ib or limited by the collector resistor and remaining voltage drop. <S> Notice in the schematic that I moved R1 down intentionally to demonstrate that this works in either position. <S> Let me demonstrate this with a simulation using transistors that have no leakage current and a fixed current gain of 100. <S> Both switches are momentary, with RESET on the left and SET on the right. <S> Design criteria: choose R4 to limit the LED current; then R3/R4 < hFE to ensure Q2 saturates below 1V @ 20mA; then R2/R4 < hFE for the same reason (with R2 moved up for clarity); then choose R1 < Vbe/Ib so that R1 absorbs all the leakage current at low Vbe << 150mV.
R1 serves to stabilize this latch in the OFF state, when disabled by STOP.
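The threshold arithmetic from the accepted answer can be checked in a couple of lines; the 0.7V base-emitter threshold, 10k for R1, and hFE = 300 are the values that answer uses:

```python
V_BE = 0.7    # volts, silicon b-e turn-on threshold
R1 = 10e3     # ohms, value of R1 in the schematic
H_FE = 300    # assumed worst-case current gain from the answer

# Base-side current that T2 ignores while R1 holds Vbe below 0.7 V:
i_threshold = V_BE / R1
print(i_threshold)  # 70 µA

# Max T2 leakage that T1 can amplify without re-latching the circuit:
i_leak_max = i_threshold / H_FE
print(i_leak_max)  # ≈ 233 nA, the answer's "about 230nA"
```

Both margins are orders of magnitude above real transistor leakage at room temperature, which is exactly why the latch is stable with R1 fitted and metastable without it.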