# 30 years of DOOM: new code, new bugs

*Published on dev.to, 2023-12-11 · [canonical link](https://dev.to/anogneva/30-years-of-doom-new-code-new-bugs-14ho) · Tags: cpp, gamedev, doom, programming*
Today marks the 30th anniversary of the first game in the DOOM series! We couldn't miss the event. To honor the occasion, we decided to see what the code of this legendary game looks like after all these years.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jum7oww1yw3bck14pjb3.png)

## Introduction

DOOM will forever go down in history as one of the greatest classic games, one that had a huge impact on the gaming industry. The game was revolutionary for its time: it set new gameplay and technical standards for first-person shooters. Its fast-paced, tense gameplay, dark atmosphere, and impressive weapon arsenal have forever captured the hearts of gamers. Not to mention the amazing OST! As they say, "The heavy metal isn't playing because you're fighting demons, it's playing because the demons are fighting you!"

Obviously, it's impossible (and also boring) to read through all the thousands of lines of code in one article. So today we will literally become the Doomguy and follow a quote from the latest DOOM Eternal:

> Against all the evil that hell can conjure, all the wickedness that mankind can produce. We'll send unto them, only you. Rip and tear until it is done.

We'll rip and tear all the evil that hell can conjure. And what's the worst possible evil for a programmer? Bugs, of course! We will find them and ~~kill~~ fix them.

[GZDoom v4.11.3](https://github.com/ZDoom/gzdoom/tree/g4.11.3) serves as our landing area. GZDoom is one of the most popular ports of the original DOOM game. The [PVS-Studio](https://pvs-studio.com/en/) static analyzer is our assistant. Well, let's get started!

> A note from the author: section titles correspond to the chapter titles in the game. Why? You may think that each name corresponds to a type of bug in one way or another; for example, the "Knee-Deep in the Dead" section covers bugs related to dangling pointers or references, or dead code.
> And in the "The Shores of Hell" section... okay, stop. Actually, I just felt like it. Sounds cool, right?
>
> By the way, a few years ago we [checked](https://pvs-studio.com/en/blog/posts/cpp/0662/) the source code of a DOOM Engine port for Linux. We recommend that article to all fans of the series :)

## Knee-Deep in the Dead

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ddwa8q20n3t3mcnowmla.png)

So, we have successfully landed in the space complex. We immediately see the demons that have overrun the station, as well as the path deeper into the complex. Our objective is to clear the complex of all demons.

**Fragment N1**

The first enemy we encounter is a zombieman who, for some reason, is pointlessly banging his head against a wall. Take your time to figure out why; I've already aimed the crosshair at the target.

```cpp
void SWPalDrawers::DrawUnscaledFuzzColumn(const SpriteDrawerArgs& args)
{
  ....
  int fuzzstep = 1;
  int fuzz = _fuzzpos % FUZZTABLE;

#ifndef ORIGINAL_FUZZ
  while (count > 0)
  {
    int available = (FUZZTABLE - fuzz);
    int next_wrap = available / fuzzstep;
    if (available % fuzzstep != 0)     // <=
      next_wrap++;
    ....
    // fuzzstep doesn't change here. I swear by BFG.
  }
  ....
}
```

The analyzer warnings:

* [V1063](https://pvs-studio.com/en/docs/warnings/v1063/) The modulo by 1 operation is meaningless. The result will always be zero. [r_draw_rgba.cpp 328](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/rendering/swrenderer/drawers/r_draw_rgba.cpp#L328)
* [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'available % fuzzstep != 0' is always false. [r_draw_rgba.cpp 328](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/rendering/swrenderer/drawers/r_draw_rgba.cpp#L328)

As we can see, the *fuzzstep* variable is declared and immediately initialized with *1*. After that, its value never changes.
The *while* loop keeps checking the *if (available % fuzzstep != 0)* condition over and over, expecting changes... (damn, that's the wrong game). However, *fuzzstep* doesn't change anywhere: each time we take *available* modulo *1*, each time the result is *0*, and each time we compare it with *0* for inequality, and... Let's get this over with and move on.

**Fragment N2**

We see a trap in our path; someone has left it for us. Great work by the other marines: they found a possible path for the demons and planted a mine. But when the demons decide to come through here... nothing happens. They forgot to arm the mine...

```cpp
StringPool::Block *StringPool::AddBlock(size_t size)
{
  ....
  auto mem = (Block *)malloc(size);
  if (mem == nullptr)
  {
  }
  mem->Limit = (uint8_t *)mem + size;
  mem->Avail = &mem[1];
  mem->NextBlock = TopBlock;
  TopBlock = mem;
  return mem;
}
```

The analyzer warning: [V522](https://pvs-studio.com/en/docs/warnings/v522/) There might be dereferencing of a potential null pointer 'mem'. Check lines: 100, 95. [fs_stringpool.cpp 100](https://github.com/ZDoom/gzdoom/blob/6ce809efe2902e43ceaa7031b875225d3a0367de/src/common/filesystem/source/fs_stringpool.cpp#L100)

Let's take a closer look. The *mem* variable is declared and immediately initialized with the result of the *malloc* function. As we know, *malloc* can return *NULL*, and the developers knew this as well. They even wrote the necessary check, *if (mem == nullptr)*, but forgot to write what to do when the condition is true. By the way, if you don't check the result of the *malloc* function, this [article](https://pvs-studio.com/en/blog/posts/cpp/0938/) may be a good read for you.

What exactly they forgot to write remains on the developers' conscience. Perhaps there should be a call to *[std::exit](https://en.cppreference.com/w/cpp/utility/program/exit)* here, or an early return, or something else.
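For illustration, here is one way the forgotten branch could be completed, sketched on a simplified, hypothetical *Block* type rather than the real GZDoom one: bail out before the pointer is ever dereferenced and let the caller handle the failure.

```cpp
#include <cassert>  // only for the usage check below
#include <cstdint>
#include <cstdlib>

// Hypothetical simplified stand-in for StringPool::Block.
struct Block
{
    uint8_t *Limit;     // one past the end of the allocation
    void    *Avail;     // first free byte inside the block
    Block   *NextBlock; // intrusive list of blocks
};

// Sketch of AddBlock with the null check actually doing something:
// on allocation failure we return nullptr instead of dereferencing mem.
Block *AddBlock(Block *&topBlock, size_t size)
{
    auto mem = (Block *)malloc(size);
    if (mem == nullptr)
    {
        return nullptr; // or throw std::bad_alloc{}, or call std::exit
    }
    mem->Limit = (uint8_t *)mem + size;
    mem->Avail = &mem[1];
    mem->NextBlock = topBlock;
    topBlock = mem;
    return mem;
}
```

Whether returning *nullptr*, throwing, or aborting is right depends on the caller's contract; the point is that the empty *if* branch silently falls through to a guaranteed null dereference.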
Let's not risk triggering the mine and keep going.

**Fragment N3**

On our way, we meet an imp who doesn't even notice us or try to attack us. It does nothing at all.

```cpp
void PClassActor::InitializeDefaults()
{
  ....
  if (MetaSize > 0)
    memcpy(Meta, ParentClass->Meta, ParentClass->MetaSize);
  else
    memset(Meta, 0, MetaSize);
  ....
}
```

The analyzer warning: [V575](https://pvs-studio.com/en/docs/warnings/v575/) The 'memset' function processes '0' elements. Inspect the third argument. [info.cpp 518](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/gamedata/info.cpp#L518)

Using *[memset](https://en.cppreference.com/w/cpp/string/byte/memset)*, the developers wanted to overwrite the memory that *Meta* points to with zeros. The problem is that we reach the *else* branch only when *MetaSize* is 0. For *memset*, such a call means "fill a memory area of 0 bytes at this address with this value", i.e. "do nothing". Moving on.

**Fragment N4**

We meet another imp who, unlike the previous one, immediately attacks.

```cpp
void FDecalLib::ParseDecal (FScanner &sc)
{
  FDecalTemplate newdecal;
  ....
  memset ((void *)&newdecal, 0, sizeof(newdecal));
  ....
}
```

The analyzer warning: [V598](https://pvs-studio.com/en/docs/warnings/v598/) The 'memset' function is used to nullify the fields of 'FDecalTemplate' class. Virtual table pointer will be damaged by this. [decallib.cpp 367](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/gamedata/decallib.cpp#L367)

Here the *memset* function, already discussed above, is called on the *newdecal* object of the *FDecalTemplate* type. The intent is to zero out the object. The thing is, the *FDecalTemplate* type contains a pointer to a virtual table:

```cpp
class FDecalTemplate : public FDecalBase
{
  ....
};

class FDecalBase
{
  ....
  virtual const FDecalTemplate *GetDecal () const;
  virtual void ReplaceDecalRef (FDecalBase *from, FDecalBase *to) = 0;
  ....
};
```

The *sizeof* operator returns the size of the object including that pointer. By zeroing the data members this way, the pointer to the virtual function table is zeroed as well. A good way to ~~shoot yourself in the foot~~ get mauled by a demon.

**Fragment N5**

Walking a little further, we come across a marine sitting all alone:

```cpp
class PaletteContainer
{
public:
  PalEntry BaseColors[256]; // non-gamma corrected palette
  ....
};

static void DrawPaletteTester(int paletteno)
{
  ....
  for (int i = 0; i < 16; ++i)
  {
    for (int j = 0; j < 16; ++j)
    {
      PalEntry pe;
      if (t > 1)
      {
        auto palette = GPalette.GetTranslation(TRANSLATION_Standard,
                                               t - 2)->Palette;
        pe = palette[k];
      }
      else
        GPalette.BaseColors[k];    // <=
      ....
    }
    ....
  }
  ....
}
```

The analyzer warning: [V607](https://pvs-studio.com/en/docs/warnings/v607/) Ownerless expression 'GPalette.BaseColors[k]'. [d_main.cpp 762](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/d_main.cpp#L762)

The *else* branch was clearly meant to assign *GPalette.BaseColors[k]* to *pe*, but the expression just stands there and does nothing. Unfortunately, we can't find out exactly what this marine's mission was; he's been here alone for too long and has already forgotten it.

**Fragment N6**

After leaving the marine and turning the corner, we meet the first boss, the Baron of Hell. It can cause a lot of trouble:

```cpp
uint8_t work[8 +        // signature
             12+2*4+5 + // IHDR
             12+4 +     // gAMA
             12+256*3]; // PLTE
uint32_t *const sig = (uint32_t *)&work[0];
```

The analyzer warning: [V641](https://pvs-studio.com/en/docs/warnings/v641/) The size of the '&work[0]' buffer is not a multiple of the element size of the type 'uint32_t'. [m_png.cpp 143](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/m_png.cpp#L143)

The *work* array is declared as an 829-element array of the *uint8_t* type. The *sig* pointer is then initialized by casting the *work* array's address to the *uint32_t\** type. Such code may violate the strict aliasing rules. The *work* array is aligned on a 1-byte boundary, while the *sig* pointer requires alignment on a 4-byte boundary.
If the starting address of the array is not a multiple of 4, we can get unpredictable results; on some platforms (ARM, for example), the processor may refuse to work with unaligned data. We can resolve the issue with the *alignas* specifier:

```cpp
uint8_t alignas(uint32_t) work[....];
```

The array address is now aligned on a 4-byte boundary. The code still uses the memory rather loosely, though: 829 is not a great number to divide by 4, and there may be other issues as well.

We've completed the first chapter. Let's move on to the next one.

## The Shores of Hell

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/w7b7xqr2wu3cu5fq8eks.png)

We can encounter all kinds of demons in the code, even ones that seem weak at first but turn out to be much stronger. For example, here's one at line 1730 of the *vectors.h* file.

**Fragment N7**

```cpp
constexpr DAngle nullAngle = DAngle::fromDeg(0.);
```

The declaration seems harmless. What can go wrong? Nearby, however, we notice a wounded marine lying on the ground. What if I told you that every translation unit that includes this header file gets its own copy of this constant? Now imagine that each of these variables occupies 100 bytes, there are 100 of them, and they're included in 100 other files. Do you get the picture? And that's still not the worst of it. Things get much worse if the variable is of a custom type with a non-trivial constructor that performs heavyweight logic. In that case, we have to wait a little (I'm lying, we have to wait a lot) at application startup.

The analyzer warning: [V1043](https://pvs-studio.com/en/docs/warnings/v1043/) A global object variable 'nullAngle' is declared in the header. Multiple copies of it will be created in all translation units that include this header file. [vectors.h 1730](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/utility/vectors.h#L1730)

There are 11 more such declarations in this file. Let's deal with the demon and heal the wounded marine. With C++17 and later, we can just add the *inline* specifier to the declaration:

```cpp
inline constexpr DAngle nullAngle = DAngle::fromDeg(0.);
```

With an older standard, we need to separate the declaration from the definition:

```cpp
// vectors.h
extern const DAngle nullAngle;

// some.cpp
const DAngle nullAngle = DAngle::fromDeg(0.);
```

Okay, now the marine is back in action, and we can continue our adventure.

**Fragment N8**

Another enemy stands in our way.

```cpp
PalettedPixels FVoxelTexture::CreatePalettedPixels(int conversion, int frame)
{
  uint8_t *pp = SourceVox->Palette.Data();

  if (pp != nullptr)
  {
    ....
  }
  else
  {
    for (int i = 0; i < 256; i++, pp += 3)
    {
      bitmap[i] = (uint8_t)i;
      pe[i] = GPalette.BaseColors[i];
      pe[i].a = 255;
    }
  }
}
```

The analyzer warning: [V769](https://pvs-studio.com/en/docs/warnings/v769/) The 'pp' pointer in the 'pp += 3' expression equals nullptr. The resulting value is senseless and it should not be used. [models_voxel.cpp 145](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/models/models_voxel.cpp#L145)

We get into the *else* branch only when *pp == nullptr*. As a result, after the loop finishes, we have a pointer that isn't safe to use. The developers probably never use it afterwards; however, if there is a chance to shoot yourself in the foot, it's likely to happen sooner or later. Or, in our case, if a marine gives a demon a chance to bite, the demon is likely to take it.

**Fragment N9**

We enter a room and meet a demon who is behaving unusually. Let's take advantage of this and examine ~~its innards~~ it in more detail.

```cpp
void DLL InitLUTs()
{
  ....
  for (i=0; i<32; i++)
  for (j=0; j<64; j++)
  for (k=0; k<32; k++)
  {
    r = i << 3;
    g = j << 2;
    b = k << 3;
    Y = (r + g + b) >> 2;
    u = 128 + ((r - b) >> 2);
    v = 128 + ((-r + 2*g - b) >> 3);
  }
}
```

The analyzer warnings:

* [V610](https://pvs-studio.com/en/docs/warnings/v610/) Unspecified behavior. Check the shift operator '>>'. The left operand is negative ('(- r + 2 * g - b)' = [-496..504]). [hq4x_asm.cpp 5391](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/hires/hqnx_asm/hq4x_asm.cpp#L5391)
* [V610](https://pvs-studio.com/en/docs/warnings/v610/) Unspecified behavior. Check the shift operator '>>'. The left operand is negative ('(r - b)' = [-248..248]). [hq4x_asm.cpp 5390](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/hires/hqnx_asm/hq4x_asm.cpp#L5390)

Here is a little warm-up exercise for your brain. The code contains unspecified behavior if it's compiled under a standard older than C23 or C++20. Let's see where this unspecified behavior is hidden. Bitwise shift operators are used in the expressions assigned to the *u* and *v* variables, and the intermediate values being shifted can be negative. The comments to the right of the lines we're interested in give the ranges of the *r*, *g*, *b* variables and of the intermediate values:

```cpp
void DLL InitLUTs()
{
  ....
  for (i=0; i<32; i++)
  for (j=0; j<64; j++)
  for (k=0; k<32; k++)
  {
    r = i << 3;                      // [0 .. 248]
    g = j << 2;                      // [0 .. 252]
    b = k << 3;                      // [0 .. 248]
    Y = (r + g + b) >> 2;
    u = 128 + ((r - b) >> 2);        // ([0..248] - [0..248]) >> 2
    v = 128 + ((-r + 2*g - b) >> 3); // (-[0..248] + [0..504] - [0..248]) >> 3
  }
}
```

So, inside the loops, negative values are shifted to the right, which results in unspecified behavior. Most likely, everything will be fine on the main platforms that DOOM runs on.
Keep in mind, though, that people [run DOOM](https://www.ign.com/articles/weirdest-devices-that-run-doom-1993) on the weirdest of devices :)

**Fragment N10**

The next demon we meet is a strange one. Its strangeness lies in its appearance: as soon as we look at it, it changes the way it looks.

```cpp
int FPCXTexture::CopyPixels(FBitmap *bmp, int conversion, int frame)
{
  ....
  uint8_t c = lump.ReadUInt8();
  c = 0x0c;  // Apparently there's many non-compliant PCXs out there...
  if (c != 0x0c)
  {
    for (int i = 0; i < 256; i++)
      pe[i] = PalEntry(255, i, i, i); // default to a gray map
  }
  ....
}
```

The analyzer warning: [V519](https://pvs-studio.com/en/docs/warnings/v519/) The 'c' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 475, 476. [pcxtexture.cpp 476](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/formats/pcxtexture.cpp#L476)

The *c* variable is first initialized by reading a value from the buffer; then the *0x0c* value is immediately assigned to it. Such demons are common here:

* V519 The 'dg.mIndexIndex' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 820, 829. v_2ddrawer.cpp 829
* V519 The 'dg.mTexture' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 885, 887. v_2ddrawer.cpp 887
* V519 The 'LastChar' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 226, 228. singlelumpfont.cpp 228
* V519 The 'flavour.fogEquationRadial' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 164, 167. gles_renderstate.cpp 167
* V519 The 'flavour.twoDFog' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 162, 169. gles_renderstate.cpp 169
* V519 The 'flavour.fogEnabled' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 163, 170. gles_renderstate.cpp 170
* V519 The 'flavour.colouredFog' variable is assigned values twice successively. Perhaps this is a mistake. Check lines: 165, 171. gles_renderstate.cpp 171
* ...etc.

The weirdness of this fragment doesn't end there: the assignment is immediately followed by the check *if (c != 0x0c)*. Obviously, the *then* branch is unreachable. The analyzer also issues the following warning:

* [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'c != 0x0c' is always false. [pcxtexture.cpp 477](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/formats/pcxtexture.cpp#L477)

Perhaps the value really was read from the buffer and checked at some point, and the developers later decided that this branch should become unreachable. Or maybe not; who knows what these demons are up to :)

**Fragment N11**

There are two twin cacodemons in this code fragment.

```cpp
int32_t ANIM_LoadAnim(anim_t *anim, const uint8_t *buffer, size_t length)
{
  ....
  length -= sizeof(lpfileheader) + 128 + 768;
  if (length < 0)
    return -1;
  ....
  length -= lpheader.nLps * sizeof(lp_descriptor);
  if (length < 0)
    return -2;
  ....
}
```

The analyzer warnings:

* [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'length < 0' is always false. Unsigned type value is never < 0. [animlib.cpp 225](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/animlib.cpp#L225)
* [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'length < 0' is always false. Unsigned type value is never < 0. [animlib.cpp 247](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/textures/animlib.cpp#L247)

Or is it the same cacodemon appearing in two places at once? The *length* variable is of the *size_t* type, which is unsigned, so the *length < 0* checks are completely meaningless: when the subtraction underflows, *length* simply wraps around to a huge positive value.

P.S. By the way, such errors can cause vulnerabilities. For a game, it's not such a big deal.
However, this is a serious [potential vulnerability](https://pvs-studio.com/en/blog/terms/6441/) in applications where information security is crucial: since a size isn't calculated correctly, an attacker can exploit it and try to overflow a buffer.

**Fragment N12**

This brings us to the boss of this chapter, the Cyberdemon. I'm sure not many readers have come across one. While viewing the analyzer report, I accidentally opened the [hudmessages.cpp](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/g_statusbar/hudmessages.cpp#L858) file. And I want to show you a function call with no fewer than 25 ARGUMENTS!

```cpp
void DHUDMessageTypeOnFadeOut::DoDraw(int linenum, int x, int y,
                                      bool clean, int hudheight)
{
  DrawText(twod, Font, TextColor, x, y, Lines[linenum].Text,
           DTA_VirtualWidth, HUDWidth,
           DTA_VirtualHeight, hudheight,
           DTA_ClipLeft, ClipLeft,
           DTA_ClipRight, ClipRight,
           DTA_ClipTop, ClipTop,
           DTA_ClipBottom, ClipBot,
           DTA_Alpha, Alpha,
           DTA_TextLen, LineVisible,
           DTA_RenderStyle, Style,
           TAG_DONE);
}
```

I wonder what kind of marine would write that. My surprise faded somewhat, though, because the *DrawText* [declarations](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/2d/v_draw.h#L279) are not so demonic after all:

```cpp
void DrawText(F2DDrawer* drawer, FFont* font, int normalcolor,
              double x, double y, const char* string, int tag_first, ...);

void DrawText(F2DDrawer* drawer, FFont* font, int normalcolor,
              double x, double y, const char32_t* string, int tag_first, ...);
```

Only 7 arguments are required here :) However, the call to this [variadic function](https://pvs-studio.com/en/blog/terms/0069/) has grown to 25 arguments... Maybe this is a demonic pattern, but there are a number of reasons why we shouldn't do this.

1. Code readability gets worse: it's much harder to understand what such a function does. In this example, the name at least tells us that the function renders text.
However, it takes a lot of time to figure out what each passed argument does. If the function had a different, less comprehensible name, all that would be left to do is cry.
2. The probability of an error increases: we are much more likely to pass arguments in the wrong order or with the wrong value. And if we need to change the function's signature, we have to rewrite every fragment where it's called, which also increases the chance of an error.
3. The code is harder to maintain: imagine we need to change the function. It already takes a bunch of parameters, so it will take a while to change its body. And it doesn't end there: we need to find all the places where the function is called and update the code there, too.
4. The single responsibility principle is violated: such a function is likely to do several things at once, both drawing and juggling... In practice, the excessive number of arguments is exactly what hints at this, and it leads to code complexity and redundancy.

Here are the solutions I see: combine some of the arguments into a structure and pass that, or break the function into smaller functions that take only the arguments they need. Let's move on to the next chapter.

## Inferno

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/spwrkkken7cxgar2cjos.png)

**Fragment N13**

Just like in the first chapter, we see a very strange demon right away:

```cpp
ExpEmit FxVMFunctionCall::Emit(VMFunctionBuilder *build)
{
  int count = 0;

  if (count == 1)
  {
    ExpEmit reg;
    if (CheckEmitCast(build, false, reg))
    {
      ArgList.DeleteAndClear();
      ArgList.ShrinkToFit();
      return reg;
    }
  }
  ....
}
```

The analyzer warning: [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'count == 1' is always false. [codegen.cpp 9405](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/scripting/backend/codegen.cpp#L9405)

The code is probably the result of refactoring, but it still looks scary when a part of the program logic is simply ignored. This demon is hardly a boss, more like an imp. The next one, though...

**Fragment N14**

...turned out to be a very sneaky spectre. Try to find it in the code snippet below:

```cpp
static void CreateIndexedFlatVertices(FFlatVertexBuffer* fvb,
                                      TArray<sector_t>& sectors)
{
  ....
  for (auto& sec : sectors)
  {
    for (auto ff : sec.e->XFloor.ffloors)
    {
      if (ff->top.model == &sec)
      {
        ff->top.vindex = sec.iboindex[ff->top.isceiling];
      }
      if (ff->bottom.model == &sec)
      {
        ff->bottom.vindex = sec.iboindex[ff->top.isceiling];
      }
    }
  }
}
```

It's hard, isn't it? With our auto-aiming weapon, though, we can easily spot the spectre. The analyzer warning: [V778](https://pvs-studio.com/en/docs/warnings/v778/) Two similar code fragments were found. Perhaps, this is a typo and 'bottom' variable should be used instead of 'top'. [hw_vertexbuilder.cpp 407](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/rendering/hwrenderer/hw_vertexbuilder.cpp#L407)

Look at the lines where *vindex* is set for the *ff->top* and *ff->bottom* data members. They are very similar to each other; even more similar than they have to be, I would say. They are most likely the result of copy-paste: in the line with *ff->bottom.vindex*, where *ff->bottom.isceiling* should serve as the index into *sec.iboindex*, the developers simply forgot to change *top* to *bottom*.

**Fragment N15**

Now it's time to meet the three Barons of Hell. Let's deal with the first one:

```cpp
void TParseContextBase::rValueErrorCheck(const TSourceLoc& loc,
                                         const char* op,
                                         TIntermTyped* node)
{
  TIntermBinary* binaryNode = node->getAsBinaryNode();
  const TIntermSymbol* symNode = node->getAsSymbolNode();

  if (!node)
    return;
  ....
}
```

The analyzer warning: [V595](https://pvs-studio.com/en/docs/warnings/v595/) The 'node' pointer was utilized before it was verified against nullptr. Check lines: 231, 234. [ParseContextBase.cpp 231](https://github.com/ZDoom/gzdoom/blob/g4.11.3/libraries/ZVulkan/src/glslang/glslang/MachineIndependent/ParseContextBase.cpp#L231)

The developers provided for the possibility of a null pointer here (and even remembered to handle it!) but dereferenced it before the check. Bam! And there we have undefined behavior. Here are the remaining two Barons of Hell:

* V595 The 'linker' pointer was utilized before it was verified against nullptr. Check lines: 1550, 1552. ShaderLang.cpp 1550
* V595 The 'mo' pointer was utilized before it was verified against nullptr. Check lines: 6358, 6359. p_mobj.cpp 6358

**Fragment N16**

The boss of this chapter is the Spider Mastermind. Let's take a look at it:

```cpp
PClassPointer::PClassPointer(PClass *restrict)
  : PPointer(restrict->VMType), ClassRestriction(restrict)
{
  if (restrict)
    mDescriptiveName.Format("ClassPointer<%s>",
                            restrict->TypeName.GetChars());
  else
    mDescriptiveName = "ClassPointer";
  loadOp = OP_LP;
  storeOp = OP_SP;
  Flags |= TYPE_ClassPointer;
  mVersion = restrict->VMType->mVersion;
}
```

The analyzer warning: [V664](https://pvs-studio.com/en/docs/warnings/v664/) The 'restrict' pointer is being dereferenced on the initialization list before it is verified against null inside the body of the constructor function. Check lines: 1605, 1607. [types.cpp 1605](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/scripting/core/types.cpp#L1605)

This is a nice example where the constructor seems to contain a null pointer check, but the dereference itself happens earlier, in the member initializer list.

## Thy Flesh Consumed

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/az6f8860ltx40freh6uw.png)

This is the final chapter.
Doomguy is exhausted and bloodied, but victory is just around the corner.

**Fragment N17**

```cpp
bool FScanner::GetFloat (bool evaluate)
{
  ....
  if (sym && sym->tokenType == TK_IntConst
          && sym->tokenType != TK_FloatConst)
  {
    BigNumber = sym->Number;
    Number = (int)sym->Number;
    Float = sym->Float;
    // String will retain the actual symbol name.
    return true;
  }
  ....
}
```

The devs have done everything right here: they've made sure the pointer isn't null, and they've checked that the token type is *TK_IntConst*. And then... they check again that the token type is not *TK_FloatConst*. Caution is good, but everything should be done in moderation; here the code just becomes bloated and less readable. The analyzer issued two warnings:

* [V590](https://pvs-studio.com/en/docs/warnings/v590/) Consider inspecting this expression. The expression is excessive or contains a misprint. [sc_man.cpp 829](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/engine/sc_man.cpp#L829)
* [V560](https://pvs-studio.com/en/docs/warnings/v560/) A part of conditional expression is always true: sym->tokenType != TK_FloatConst. [sc_man.cpp 829](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/engine/sc_man.cpp#L829)

I found another similar fragment in the report:

* V590 Consider inspecting this expression. The expression is excessive or contains a misprint. [sc_man.cpp 787](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/engine/sc_man.cpp#L787)

After the previous opponents, this one seems insignificant. There's more to come, though.

**Fragment N18**

Meet one of the strongest demons in the complex.

```cpp
static ASMJIT_INLINE bool X86RAPass_mustConvertSArg(
  X86RAPass* self, uint32_t dstTypeId, uint32_t srcTypeId) noexcept
{
  bool dstFloatSize = dstTypeId == TypeId::kF32   ? 4 :
                      dstTypeId == TypeId::kF64   ? 8 : 0;

  bool srcFloatSize = srcTypeId == TypeId::kF32   ? 4 :
                      srcTypeId == TypeId::kF32x1 ? 4 :
                      srcTypeId == TypeId::kF64   ? 8 :
                      srcTypeId == TypeId::kF64x1 ? 8 : 0;

  if (dstFloatSize && srcFloatSize)
    return dstFloatSize != srcFloatSize;
  else
    return false;
}
```

The analyzer warning: [V547](https://pvs-studio.com/en/docs/warnings/v547/) Expression 'dstFloatSize != srcFloatSize' is always false. [x86regalloc.cpp 1115](https://github.com/ZDoom/gzdoom/blob/g4.11.3/libraries/asmjit/asmjit/x86/x86regalloc.cpp#L1115)

Let's get to the bottom of what's going on here. The function checks the sizes of the *dstTypeId* and *srcTypeId* types; a size can be 0, 4, or 8 bytes. If it's 4 or 8 bytes, the corresponding *bool* variable is set to *true*; if it's 0 bytes, to *false*. Next, if neither type is 0 bytes, we want to know whether their sizes differ. However, instead of comparing the actual sizes for inequality, the code compares the *bool* flags themselves after making sure they are both *true*. As a result, the function always reports that no conversion is needed, and compilers [optimize the code](https://godbolt.org/z/Eoeje14TE) so that only the second *return* remains.

<spoiler title="Fun fact">

The attentive reader may notice that this code comes from a third-party library, so at first we didn't want to include it in the article. However, we found something interesting. In 2017, somebody opened an [issue](https://github.com/asmjit/asmjit/issues/178) in the asmjit project: the GCC 7.2 compiler issued a warning for the code above. The project authors [fixed it](https://github.com/asmjit/asmjit/commit/771d66b301e60ebc3ffa69b11765622c547df6ab):

```cpp
static ASMJIT_INLINE bool X86RAPass_mustConvertSArg(
  X86RAPass* self, uint32_t dstTypeId, uint32_t srcTypeId) noexcept
{
  uint32_t dstFloatSize = dstTypeId == TypeId::kF32   ? 4 :  // <=
                          dstTypeId == TypeId::kF64   ? 8 : 0;

  uint32_t srcFloatSize = srcTypeId == TypeId::kF32   ? 4 :  // <=
                          srcTypeId == TypeId::kF32x1 ? 4 :
                          srcTypeId == TypeId::kF64   ? 8 :
                          srcTypeId == TypeId::kF64x1 ? 8 : 0;

  if (dstFloatSize && srcFloatSize)
    return dstFloatSize != srcFloatSize;
  else
    return false;
}
```

As the brave marines may have noticed, the GZDoom developers [tried](https://github.com/ZDoom/gzdoom/commits/g4.11.3/libraries/asmjit) to update the library before but had to revert the changes.

</spoiler>

**Fragment N19**

We encounter another enemy in front of the room where the final boss is waiting:

```cpp
FString SuggestNewName(const ReverbContainer *env)
{
  char text[32];
  size_t len;

  strncpy(text, env->Name, 31);
  text[31] = 0;
  len = strlen(text);
  ....
  if (text[len - 1] != ' ' && len < 31)  // <=
  {
    text[len++] = ' ';
  }
}
```

The analyzer warning: [V781](https://pvs-studio.com/en/docs/warnings/v781/) The value of the 'len' index is checked after it was used. Perhaps there is a mistake in program logic. [s_reverbedit.cpp 193](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/audio/sound/s_reverbedit.cpp#L193)

This fragment is very similar to the earlier examples of dereferencing a pointer before checking it; I'd call it an improved version, because it's even sneakier. An array element is accessed to check that it isn't a space character, and only then is the index itself validated. It would be more logical to check the index first and only then access the element: thanks to [short-circuit evaluation](https://en.wikipedia.org/wiki/Short-circuit_evaluation) of the *&&* operator, an out-of-range access would then never happen. Since *len* is at most 31 here, the access stays within bounds, but I'd say it's some demonic pattern again xD.

**Fragment N20**

Behold, everybody! The ultimate final boss of all final bosses. Multithreading fans, please gather 'round.
Multithreading nonfans, you too\. ```cpp void OpenALSoundRenderer::BackgroundProc() { std::unique_lock<std::mutex> lock(StreamLock); while (!QuitThread.load()) { if (Streams.Size() == 0) { // If there's nothing to play, wait indefinitely. StreamWake.wait(lock); } else { // Else, process all active streams and sleep for 100ms for (size_t i = 0; i < Streams.Size(); i++) Streams[i]->Process(); StreamWake.wait_for(lock, std::chrono::milliseconds(100)); } } } ``` The analyzer warning: [V1089](https://pvs-studio.com/en/docs/warnings/v1089/) Waiting on condition variable without predicate\. A thread can wait indefinitely or experience a spurious wakeup\. Consider passing a predicate as the second argument\. [oalsound\.cpp 927](https://github.com/ZDoom/gzdoom/blob/g4.11.3/src/common/audio/sound/oalsound.cpp#L927) In the code fragment, one of the execution threads processes the *Streams* \(consumer\) container\. If another thread hasn't sent data \(producer\), the consumer is told to wait until the producer wakes it up via a condition variable\. The thread is put to sleep using overloads of the *[std::condition\_variable::wait](https://en.cppreference.com/w/cpp/thread/condition_variable/wait)* and *[std::condition\_variable::wait\_for](https://en.cppreference.com/w/cpp/thread/condition_variable/wait_for)* functions which don't accept the predicate as the second/third argument\. However, condition variables have a thing called [spurious wakeup](https://en.wikipedia.org/wiki/Spurious_wakeup)\. It means that the producer hasn't yet told the consumers to wake up, but the consumers have awakened\. However, if we look at the code, awakening should occur in the following situations: * The *Streams* container is not empty\. The thread is notified using the *StreamWake* condition variable\. * The execution of the *BackgroundProc* function must be stopped\. This is notified via the *QuitThread* atomic variable\.
Due to spurious wakeup, the thread may come out of sleep when *Streams* is empty\. Then another read of the atomic variable occurs, this time under the full memory barrier \(the *[std::atomic<T\>::load](https://en.cppreference.com/w/cpp/atomic/atomic/load)* overload does this with the default argument\)\. The brave marines made no mistakes here\. However, we can enhance the code: ```cpp void OpenALSoundRenderer::BackgroundProc() { std::unique_lock<std::mutex> lock { StreamLock }; bool repeat = !QuitThread.load(std::memory_order_relaxed); const auto pred = [this, &repeat] { repeat = !QuitThread.load(std::memory_order_relaxed); return !repeat || Streams.Size() != 0; }; while (repeat) { // If there's nothing to play, wait indefinitely. auto cond_met = StreamWake.wait_for(lock, 100ms, pred); if (!cond_met || !repeat) { continue; } // Else, process all active streams for (size_t i = 0; i < Streams.Size(); i++) { Streams[i]->Process(); } } } ``` <spoiler title="If you're wondering what's going on here"> 1. The loop is now executed relative to the *repeat* local variable\. It reflects the state of the *QuitThread* atomic variable \(whether the consumer should be stopped\)\. 1. We have weakened the memory barrier under which the *QuitThread* atomic variable is read\. It's always modified and read under the *StreamLock* mutex, according to the code base\. The mutex itself is a full memory barrier \(*std::memory\_order\_seq\_cst*\), so reading and writing can be done in the *std::memory\_order\_relaxed* mode\. If you don't want the compiler/processor to reorder instructions, you can read in the *std::memory\_order\_acquire* mode and write in the *std::memory\_order\_release* mode\. 1. We added a predicate that checks if the shared data is ready and excludes spurious wakeups\. The predicate inside re\-reads the value of the *QuitThread* atomic variable\.
According to the standard, the predicate is executed under locking, so *QuitThread* can be read in the *std::memory\_order\_relaxed* mode\. 1. We left a call to *std::condition\_variable::wait\_for* that puts the consumer on hold until all the data is ready\. To avoid a possible eternal hang, we wake the consumer up every 100 milliseconds\. For example, if someone forgets to call *std::condition\_variable::notify\_\** when setting *QuitThread* to *true*\. 1. If *wait\_for* returns *false*, then no data has been received for the specified time\. Otherwise, double\-check that you have set *QuitThread* to *true* and stop the loop execution\. </spoiler> ## Conclusion Phew\.\.\. What a quest we have survived\. We've met all kinds of demons along the way\. All readers get \+experience for each of them\. I'd like to end our adventure with a quote from the Necropolis Codex: > Now you can return to work advocate, for now you know why we do this\. Now you can return to work defending your code from bugs, advocate, for now you know that even the code of legendary games that everyone has played can contain various errors\.
anogneva
1,693,994
We value your interest in Write for us or as a guest contributor on this website
We value your interest in Write for us or as a guest contributor on this website. Thank you for...
0
2023-12-11T07:17:00
https://dev.to/blogest123/we-value-your-interest-in-write-for-us-or-as-a-guest-contributor-on-this-website-38e3
We value your interest in Write for us or as a guest contributor on this website. Thank you for visiting Blogest. Let me start by noting that readers find Blogest to be very intriguing since we value quality above quantity. You've come to the right place if you want to establish links to other blogs and contacts with other bloggers. You may increase your real audience, backlinks, Google authority, SERP ranking (SEO), and many other metrics with Blogest's help. Read More - [https://blogest.org/write-for-us/](https://blogest.org/write-for-us/)
blogest123
1,694,051
Essential guide to WebSocket authentication
Authenticating WebSocket connections from the browser is a lot trickier than it should be. Cookie...
0
2023-12-13T14:15:46
https://ably.com/blog/websocket-authentication
webdev, learning, security
Authenticating [WebSocket](https://hubs.la/Q02cBJTh0) connections from the browser is a lot trickier than it should be. Cookie authentication isn’t suitable for every app, and the [WebSocket browser API](https://developer.mozilla.org/en-US/docs/Web/API/WebSocket) makes it impossible to set an `Authorization` header with a token. It’s actually all a bit of a mess! That is the last thing you want to hear when it comes to security, so I’ve done the research to present this tidy list of methods to send credentials from the browser. ## The challenge with WebSocket authentication Even though [WebSocket and HTTP](https://ably.com/topic/websockets-vs-http) are completely separate protocols, every WebSocket connection begins with an HTTP handshake. ![HTTP handshake](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zuznwnsh50cobv2g77bu.png) The [WebSocket specification](https://datatracker.ietf.org/doc/html/rfc6455) does not prescribe any particular way to authenticate a WebSocket connection once it’s established but suggests you authenticate the HTTP handshake before establishing the connection: > This protocol doesn’t prescribe any particular way that servers can authenticate clients during the WebSocket handshake. The WebSocket server can use any client authentication mechanism available to a generic HTTP server, such as cookies or HTTP authentication. HTTP has dedicated fields for authentication (like the `Authorization` header), and you likely have a standard way to authenticate HTTP requests already, so this makes a lot of sense on the surface! Surprisingly, though, the WebSocket browser API doesn’t allow you to set arbitrary headers with the HTTP handshake like `Authorization`. 😱 HTTP cookie authentication is an option, but, as you will see in this post, it’s not always suitable, and potentially even vulnerable to [CSRF](https://owasp.org/www-community/attacks/csrf). 
In this post, I’ll outline your options to work around this remarkable limitation of modern browsers to securely and reliably send credentials to the server. ## Authentication methods for securing WebSocket connections ### Send access token in the query parameter One of the simplest methods to pass credentials from the client to a WebSocket server is to pass the access token via the URL like this: > wss://website.com?token=your_token_here Then, on the server, you can authenticate the request. Here’s an example with Node: ```javascript import { createServer } from 'http' import { WebSocketServer } from 'ws' import { parse } from 'url' const PORT = 8000 const server = createServer() // noServer: Tells WebSocketServer not to create an HTTP server // but to instead handle upgrade requests from the existing // server (above). const wsServer = new WebSocketServer({ noServer: true }) const authenticate = request => { const { token } = parse(request.url, true).query // TODO: Actually authenticate token if (token === "abc") { return true } } server.on('upgrade', (request, socket, head) => { const authed = authenticate(request) if (!authed) { // \r\n\r\n: These are control characters used in HTTP to // denote the end of the HTTP headers section. socket.write('HTTP/1.1 401 Unauthorized\r\n\r\n') socket.destroy() return } wsServer.handleUpgrade(request, socket, head, connection => { // Manually emit the 'connection' event on a WebSocket // server (we subscribe to this event below). wsServer.emit('connection', connection, request) }) }) wsServer.on('connection', connection => { console.log('connection') connection.on('message', bytes => { // %s: Convert the bytes (buffer) into a string using // utf-8 encoding. console.log('received %s', bytes) }) }) server.listen(PORT, () => console.log(`Server started on port ${PORT}`)) ``` If the token is valid, you move ahead with the upgrade. Otherwise, send a standard 401 “Unauthorized” response code and close the underlying socket. 
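On the client side, the connection code is just as small. Here is a minimal sketch — the `wss://website.com` endpoint and the token value are placeholders, and the `URL` API is used so the token gets percent-encoded for free:

```javascript
// Build the connection URL with the access token as a query parameter.
// Using the URL API ensures the token is properly percent-encoded.
const buildSocketUrl = (baseUrl, token) => {
  const url = new URL(baseUrl)
  url.searchParams.set('token', token)
  return url.toString()
}

const socketUrl = buildSocketUrl('wss://website.com', 'your_token_here')

// In the browser, you would then connect with:
// const ws = new WebSocket(socketUrl)
// Note that a 401 rejection of the handshake surfaces as 'error'
// and 'close' events on the client, not as an HTTP response.
```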
This approach is easy to reason about and it only takes a few lines of code to implement on the client. The downside we need to explore is the security implication of encoding the token in the query string in this way. Some developers on public forums will argue this isn’t so bad: - When you use TLS (WSS), query strings are encrypted in transit. - Compared to HTTP, WebSocket URLs aren’t really exposed to the user. Users can't bookmark or copy-and-paste them. This minimizes the risk of accidental sharing. However, it is important to acknowledge query parameters will still show up in plaintext on the server where they will likely get logged. Even if your code doesn’t, the framework or cloud host [likely will](https://owasp.org/www-community/vulnerabilities/Information_exposure_through_query_strings_in_url). This is precarious because logs can leak error messages, for example ([accidental information disclosure](https://owasp.org/www-project-mobile-top-10/2014-risks/m4-unintended-data-leakage)). Should a malicious actor attain access to the logs, they would have access to all the data and functionality that the user behind the WebSocket connection has. In the next section, let’s explore an evolution of this method that’s more secure, albeit more work to implement. ### Send an ephemeral access token in the query parameter As we covered in the section above, sending your main access token in the query parameter is not sufficiently secure because it might be logged in plaintext on the server. To dramatically reduce the risk, we could use the main access token to request an ephemeral single-use token from an authentication service then send that short-lived token in the query parameter. This way, by the time the token is logged on the server, it will likely be useless since it will either have already been used or expired. 
The basic flow can be illustrated like this: ![basic flow illustrated](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nqe7srkdgegsf822l9kj.png) While inherently more secure than sending the main access token, you now need to implement a custom and stateful authentication service specifically for WebSockets, which is a pretty significant downside considering we first started exploring sending the token in the query parameter because of its convenience! ### Send access token over WebSocket Another option to authenticate a WebSocket connection is to send credentials in the first message post-connection. The server must then validate the token before allowing the client to do anything else. Here’s a contrived server implementation I wrote with Node to illustrate the model: ```javascript import { WebSocketServer } from 'ws' import { createServer } from 'http' import { randomUUID } from 'crypto' const server = createServer() const wsServer = new WebSocketServer({ server }) const PORT = 8000 const connections = {} const authenticate = token => { // TODO: Actually authenticate token if (token === "abc") { return true } } const handleMessage = (bytes, uuid) => { const message = JSON.parse(bytes.toString()) const connection = connections[uuid] if (message.type === "authenticate") { connection.authenticated = authenticate(message.token) return } if (connection.authenticated) { // Process the message } else { connection.terminate() } } const handleClose = uuid => delete connections[uuid] wsServer.on('connection', (connection, request) => { const uuid = randomUUID() connections[uuid] = connection connection.on('message', message => handleMessage(message, uuid)) connection.on('close', () => handleClose(uuid)) }); server.listen(PORT, () => console.log(`Server started on port ${PORT}`)) ``` If the token is invalid, the server terminates the connection. Otherwise, it tracks the connection as “authenticated” and processes subsequent messages. 
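On the client side, the counterpart is to send the credentials as the very first message once the connection opens. A minimal sketch, assuming the `{ type: "authenticate", token }` message shape from the contrived server above (the URL and token are placeholders):

```javascript
// Serialize the authentication message. The { type, token } shape
// mirrors what the server example above expects.
const buildAuthMessage = token =>
  JSON.stringify({ type: 'authenticate', token })

// In the browser:
// const ws = new WebSocket('wss://example.com')
// ws.addEventListener('open', () => {
//   // Authenticate before sending anything else; the server
//   // terminates the connection if the token is invalid.
//   ws.send(buildAuthMessage('your_token_here'))
// })
```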
When implemented correctly, this method is completely secure; however, it involves defining your own custom authentication protocol securely and correctly. Sending credentials over WebSockets in this way has some other downsides: - **Implementing a custom stateful protocol will increase complexity.** The example above looks simple but in reality you now need to manage session lifetimes, handle synchronization issues when you [scale](https://hubs.la/Q02cBKFf0), and deal with potential inconsistencies. Should something go wrong, your users might not be able to login or, worse yet, you might introduce a security vulnerability. - **You become vulnerable to DOS attacks.** With this method, anyone can open a WebSocket connection. An attacker might open a bunch of WebSocket connections and refuse to authenticate, tying up server resources like memory indefinitely, potentially overloading your server until it becomes sluggish. To counteract this, you’ll need to implement rigorous timeouts, further contributing to the complexity compared to HTTP-based authentication methods. ### Send credentials in an HTTP cookie The WebSocket handshake is done with a standard HTTP request and response cycle, which supports cookies, allowing you to authenticate the request. Authentication using cookies has been widely adopted since the early days of the internet. It's a reliable method that offers security you can trust. However, there are some limitations you should be aware of: - **Not suitable if your WebSocket server is hosted on a different domain.** If your WebSocket server is on a different domain than your web app, the browser will not send the authentication cookies to the WebSocket server, which makes the authentication fail. - **Vulnerable to CSRF.** The browser does not enforce a Same-Origin Policy for the WebSocket handshake like it would for an ordinary HTTP request. 
A malicious website badwebsite.com could open a connection to yourwebsite.com and the browser will happily send along the authentication cookie, creating an opportunity for badwebsite.com to send and receive messages on the user’s behalf, unbeknown to them. To circumvent this, it’s pivotal that the server checks the `Origin` header of the request before allowing it. Alternatively, you may choose to implement a CSRF token. ### Send credentials with the Sec-WebSocket-Protocol header While the WebSocket browser API doesn’t let you set arbitrary headers like `Authorization`, it does allow you to set a value for the `Sec-WebSocket-Protocol` header, creating an opportunity to smuggle the token in the request header! In vanilla JavaScript, the client WebSocket code might look like this: ```javascript const ws = new WebSocket( "wss://example.com/path", ["Authorization", "your_token_here"] ) ``` And with a library like [React useWebSocket](https://github.com/robtaussig/react-use-websocket), something like this: ```javascript const { sendMessage, lastMessage } = useWebSocket("wss://example.com/path", { protocols: ["Authorization", "your_token_here"] }) ``` The `Sec-WebSocket-Protocol` header is designed to negotiate a subprotocol, not carry authentication information, but some developers including me and [those behind Kubernetes](https://github.com/kubernetes/kubernetes/commit/714f97d7baf4975ad3aa47735a868a81a984d1f0) are asking “why not?” You might be wondering what the downside of this neat workaround is. Every option in this list so far has a downside, and setting `Sec-WebSocket-Protocol` is no exception: - The token might get logged in plaintext on the server. Because `Sec-WebSocket-Protocol` is not designed to carry authentication tokens, they may end up in log files unintentionally as part of standard logging of WebSocket protocol negotiation, thus causing potential security risks. - You might experience unexpected behavior. 
It’s also important to acknowledge that such use of `Sec-WebSocket-Protocol` isn't standardized, meaning libraries, tooling, and middleware might not handle this kind of logic gracefully. For simple apps, this is unlikely to cause a problem, however, in a sufficiently complex system with multiple components, this could cause unexpected behavior including security issues. ### Send credentials with basic access authentication Some posts out there suggest an outdated trick whereby you encode the username and password in the WebSocket URL: ```javascript const ws = new WebSocket("wss://username:password@example.com") ``` Under the hood, the browser will pull these out to add a basic access authentication header. > Authorization: Basic dXNlcm5hbWU6cGFzc3dvcmQ== Apart from the fact that basic authentication is limited to a username and password (and you probably want to send a token), this method has always suffered from inconsistent browser support and is now [totally deprecated](https://developer.mozilla.org/en-US/docs/Web/HTTP/Authentication#access_using_credentials_in_the_url). Don’t use it. I’m only including a brief note about it here for completeness. ### Forget about WebSocket authentication (mostly) with Ably So far we’ve weighed the benefits and limitations of each approach. However, you could just use a library that handles it all for you under the hood. With [Ably](https://hubs.la/Q02cBLtl0), authentication is a solved problem. Ably is a realtime infrastructure API that makes it trivial to add realtime features to your apps compared to if you used WebSockets directly. ![Ably realtime infrastructure API](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6f7mjdcgd6msyq51dw4.png) Apart from being easy to get started with, Ably handles authentication for you in a secure way, allowing you to focus on the features that really matter to your users. 
Instead of deliberating over the best authentication mechanism and shouldering all that responsibility even if you’re not a security expert, Ably provides you with “white spots” to fill in with code that connects to your database to authenticate the user. The token management part, including refresh tokens and permissions, is all handled for you. Learn more about how Ably can help you build impressive realtime features at scale [here](https://hubs.la/Q02cBLtl0) or [create a free account](https://hubs.la/Q02cBLHm0) and play around for yourself. ## Conclusion In this post, we explored several methods to send credentials from the web browser to a WebSocket server. It would have been nice if I could recommend one go-to method but, as will now be evident to you, each method has advantages and disadvantages that must be considered relative to your project. Here’s a quick summary for future reference: - **Query parameter:** Sending the credentials in a query parameter is dead easy, but it introduces a risk of the credentials being logged in plaintext on the server, and that could be dangerous! A homemade authentication service that issues ephemeral tokens for use in the query parameter would improve the security greatly, however, for many, that is a bridge too far. - **WebSocket connection:** Sending the credentials over the WebSocket connection is worth considering, however, you usually end up implementing your own authentication protocol that is finicky to maintain, potentially vulnerable to DOS attacks, and doesn’t play well with anything else. - **Cookies:** Cookie authentication is attractive due to its reliability and ease of implementation. However, it may not be compatible with your system design. - **Sec-WebSocket-Protocol:** Smuggling the token in the `Sec-WebSocket-Protocol` header is a stroke of genius, and, if it’s good enough for Kubernetes, it might be good enough for you! At the same time, misusing the header in this way might lead to unexpected behavior. 
User [BatteryAcid](https://stackoverflow.com/questions/4361173/http-headers-in-websockets-client-api#comment73692893_35890141) on StackOverflow summed it up pretty well when they wrote - _"I implemented this and it works - just feels weird. thanks"_ 😂 Of course, if this all sounds like a headache, you might consider [Ably](https://ably.com). Apart from solving the authentication problem, Ably provides additional features you’d need to implement on top of WebSockets like [Presence](https://hubs.la/Q02cBLSw0) and [message queues](https://hubs.la/Q02cBM6L0), and provides production guarantees that would be time-consuming or costly to achieve on your own, such as a 99.999% uptime guarantee, exactly-once delivery, and guaranteed message ordering.
bookercodes
1,694,173
How to Resolve QuickBooks Error 12007?
QuickBooks Desktop has brought an unprecedented boom to the accounting industry by emphasizing growth...
0
2023-12-11T10:44:51
https://dev.to/axpertaccounting/how-to-resolve-quickbooks-error-12007-1oo
quickbooquickbookserror12007, axpertaccounting
QuickBooks Desktop has brought an unprecedented boom to the accounting industry by emphasizing growth and efficiency across any business. However, what you should not ignore are the countless errors that sometimes prevent you from working efficiently. The special feature of these QuickBooks errors is that each error can be fixed by implementing several troubleshooting methods. One of the errors discussed in this article is [QuickBooks Error 12007](https://www.axpertaccounting.com/quickbooks-error-12007/). This error often occurs while downloading payroll or updating QuickBooks Desktop software. This error can also occur if the software is unable to connect to the internet. Additionally, this error can also be caused by issues with certain browsers, antivirus, or firewalls. This error may seem daunting and out of your control, but our team of experts has identified the various causes of this particular error and the various troubleshooting strategies you can implement to easily resolve it. We've created a comprehensive guide to help you understand them. If you are curious about QuickBooks Payroll Error 12007 and would like to learn more, stay tuned to this section to learn more about this technical issue and its troubleshooting options. If you don't want to spend time fixing QuickBooks error 12007 manually, we are here to help. Don't hesitate to contact our hotline and speak with our US-based accounting experts. Please contact us at 1-888-351-0999. We are committed to providing the best service tailored to your needs. **Signs and Symptoms of QuickBooks Error 12007** Some common signs that users can attribute to such an error include: • A user is unable to update QuickBooks and instead receives an error message related to error 12007. • The system crashes frequently, especially when updating QuickBooks. • Unexpected system delays occur when running QuickBooks. 
**What causes QuickBooks error code 12007?** Possible causes for this error are: • Internet security or firewall settings are blocking QuickBooks. • This error can also occur when Internet Explorer is not the default browser. • Third-party programs manipulate QuickBooks features. • A prior QuickBooks update was incomplete. • Connection settings are incorrect. • SSL settings are incorrect. **Solutions to Resolve QuickBooks Error Code 12007** Solution 1: Add QuickBooks desktop as an exception in the Firewall Solution 2: Clear SSL state Solution 3: Make Internet Explorer the default browser Solution 4: Open Windows in Safe Mode Solution 5: Reset the Update settings Solution 6: Fix Advanced Connection Settings Solution 7: Reset Internet Settings Read also:- [Fix QuickBooks Payroll Update Error 15243](https://www.axpertaccounting.com/quickbooks-error-15243/) **Conclusion:** QuickBooks error code 12007 can be easily fixed by implementing the above solutions. If you still can't resolve the error successfully, don't worry, our team of experts is here to help. Please feel free to contact our technical support hotline. To contact the QuickBooks technical support team, please call 1-888-351-0999. Our US-based accounting and bookkeeping experts will find the most appropriate and practical solution for you. We are available 24/7 to help you with QB-related questions and technical issues.
axpertaccounting
1,694,288
The History of State Management at CodeSandbox
At CodeSandbox, we run your code in our cloud infrastructure, configure the environment for you and...
0
2023-12-11T13:35:41
https://codesandbox.io/blog/the-history-of-state-management-at-codesandbox
react, redux, learning, showdev
**At CodeSandbox, we run your code in our cloud infrastructure, configure the environment for you and keep your code always ready, behind a shareable URL. Give it a try with [this Next.js example](/p/sandbox/next-js-fxis37?file=/pages/index.tsx) or [import your GitHub repo](/dashboard?import_repo=true)!** --- ## CodeSandbox, the application CodeSandbox provides a [cloud development environment](https://codesandbox.io/cloud-development-environments) with a powerful microVM infrastructure, supported by several services and an API. This enables developers all around the world to collaborate and build products together. At the very front, we have the web application, where it all comes together and ignites the CodeSandbox experience. CodeSandbox is not your typical web application. There is surprisingly little traditional data fetching—we only do server-side rendering for SEO purposes and there is only a single page, the editor. This reduces a lot of complexity in developing a web application, but looking at the editor you would quickly label it as a complex piece of software. In reality, this complexity comes from the amount of state and management of that state to create the experience of CodeSandbox. We are about to embark on our fifth iteration of state management. Since CodeSandbox's birth 7 years ago, the web ecosystem has had big and small influences on it. In parallel with building our experiences, we also continuously discuss and reflect on these ecosystem influences and evaluate how they can benefit us. The ultimate goal for us is to use tools that allow us to continue adding new experiences with as little friction as possible. So as we start this fifth iteration of state management, we have a golden opportunity to reflect on our previous iterations. ## It all started with… At the inception of the CodeSandbox application, [Redux](https://redux.js.org/) was the big hype. 
Redux enabled an important capability in terms of state management: it exposes state to any component in the component tree without the performance issues of using a React context. It does this by allowing you to select what state should cause reconciliation of the consuming component. In other words, Redux gave CodeSandbox a global state store where the current user, the current sandbox, the live session state, the layout state, etc. could live and be accessed by any component. With a high degree of state used across components in deeply nested component trees, Redux solved the most critical aspect of managing the state of CodeSandbox. That said, as CodeSandbox grew, it had two problems: it was hard to understand how the application works, and performance was subpar. As an example, when you loaded a sandbox the application would need to run an asynchronous flow that included _21 state updates_ and _9 effects_. With Redux this flow was expressed across 10 different action, reducer, and component files. So as a developer, asking yourself the question “What happens when we load a sandbox?”, the answer was incredibly difficult to infer. The performance issues we faced with Redux are really the same performance issues you have with React in general. Even though Redux allowed us to narrow down what state components require, it is impossible for a human being to infer how the scope of the state will affect the component reconciliation performance as a whole. ## Getting insight This led us to our second iteration: [Cerebral JS](https://cerebraljs.com/). Cerebral would solve both of these problems. By using a sequence API for our complex asynchronous flows of state updates and effects, which also includes a visual development tool for those sequences, we had more insight and understanding of how the application works. Even though new developers would need to learn an API, that learning curve was much smaller. 
Also, it ensured everyone had the exact same mental model and a good understanding of what happens “when a Sandbox loads”. <figure> <img src="https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3ihiu7950jd19pmu8inu.png" alt="The Cerebral JS devtools give deep insight into the state management of the application" style="width:100%"> <figcaption align = "center">The Cerebral JS devtools give deep insight into the state management of the application.</figcaption> </figure> The performance gains came from us combining Cerebral with [Mobx](https://mobx.js.org/README.html). Now our components would automatically observe any state they accessed as opposed to us manually trying to optimize state access with selectors. In the following presentation you can learn more about the journey of Cerebral JS and how my own journey crossed paths with CodeSandbox. {% embed https://www.youtube.com/watch?v=uni-dG6-Rq8 %} <figure> <figcaption align = "center">A presentation at React Finland about Cerebral JS and using it for CodeSandbox.</figcaption> </figure> ## I hate TypeScript, I love TypeScript As time passed, [TypeScript](https://www.typescriptlang.org/) came on the scene. Its promise of painless refactors and reduced risk of regressions was something we desperately needed. As CodeSandbox grew we were reluctant to change existing code. We were always on high alert after deploying changes and “We did not test it well enough!” became a point of cultural friction for us. Cerebral JS, with its exotic declarative sequence API, would never work with TypeScript. That fact, combined with the introduction of [Proxies](https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Proxy) to the language, gave the perfect excuse to throw another state management library into the ecosystem. [Overmind](https://overmindjs.org/) was born and it was built from the ground up to be a spiritual successor to Cerebral. 
It took the same conceptual approach, but with an API that was type friendly and still with just as much insight using its development tools. {% embed https://www.youtube.com/watch?v=EzJT0ICufas %} <figure> <figcaption align = "center">A presentation at a meetup in Oslo about how to think about application development separate from user interfaces.</figcaption> </figure> At this point, it would seem that all our challenges were solved. We had great insight into our application, we did not worry about performance, adding new features was straightforward, and with TypeScript we gained more confidence to refactor and deploy our code. But one day we wanted to make TypeScript even stricter, so we turned on [strictNullChecks](https://www.typescriptlang.org/tsconfig#strictNullChecks). ## Being yelled at by TypeScript Strict null checking is a feature of TypeScript that gives an error if you try to access a value that might be `null` or `undefined`. Without this feature, the following does not give an error: ```tsx type State = { // The project state might not be initialized project?: { title: string } } const state: State = {} state.project.title // This does not give any error ``` As we turned on the feature, TypeScript got very upset with us, to say the least. We had so many errors that we decided not to do it. But it was not the number of errors that demotivated us, it was the realization of an underlying message from TypeScript: “You developers have a lot more context about the code than I do”. This sounds a bit abstract and cryptic, but it is an important point. When you work with global state stores you are in a global context that is initialized before the rest of the application. That means any state that is lazily initialized through data fetching or other mechanisms has to be defined as “possibly there”. 
This makes sense from the perspective of the global state store, but from the perspective of the consumers of that global state store, there is no guarantee that the state you access has actually been initialized. In practice, that means whenever you consume lazily-initialized state from a global state store you have to verify it:

```tsx
type State = {
  project?: {
    title: string
  }
}

const state: State = {}

if (state.project) {
  state.project.title
}
```

When you are writing code in a context where you know that the `project` must have been initialized, for example in a nested component or an action, you have to write additional code to explain to TypeScript that this is indeed a valid context to access the `project`. And in a complex application like CodeSandbox, this happens _everywhere_. You want TypeScript to have a deep understanding of your codebase and help you, but in this situation, you have more understanding of the codebase and need to help TypeScript. Something is not right in the world.

## Putting context into state

Around this time, CodeSandbox had become a company and we were planning, unknowingly at the time, our move into the [Cloud Development Environment](csb.new) space. In the web ecosystem, state machines were the big thing and we also experimented with this:

{% embed https://www.youtube.com/watch?v=ul_3ABrpj64 %}

Using a state machine would help us make our code more aware of the context it runs in, and also be more explicit about what contexts it *can* run in. This experimentation held a lot of promise, at least in theory, and we decided to build our new editor using React primitives with some patterns and minor abstractions to embrace this concept.

Even though this fourth state management iteration did improve the challenge we just discussed, it fell short on some of the already solved challenges of our previous iterations. Specifically, we had challenges with performance again, as we were relying on React contexts to share state.
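Coming back to the strict null checks problem for a moment: the “additional code to explain to TypeScript” typically takes the shape of a type guard or an assertion function. Here is a minimal sketch; the `assertProject` helper is hypothetical, not something from our codebase:

```typescript
type Project = { title: string }

type State = {
  project?: Project
}

// Hypothetical assertion helper: it narrows `state.project` from
// `Project | undefined` to `Project`, and fails loudly if the
// assumption "a project is loaded in this context" turns out wrong
function assertProject(state: State): asserts state is State & { project: Project } {
  if (!state.project) {
    throw new Error("Expected project to be initialized in this context")
  }
}

const state: State = { project: { title: "My Sandbox" } }

assertProject(state)
// After the assertion TypeScript allows direct access, no `if` needed
console.log(state.project.title)
```

The helper keeps the verification in one place, but you still have to remember to call it in every context where the state is assumed to be initialized, which is exactly the friction described above.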
Also, we were used to imperative actions, which were now replaced with reducers and state transition effects. This split our logic into different files and created indirection. In practice, it became very difficult to reason about “What happens when this action is dispatched?”. Just the fact that you could not `CMD + click` an action dispatch in the code and go to its implementation, reading line by line what it does, became a big point of friction for us.

Exploring React contexts for state management was still an important experience. Instead of thinking in state trees, we were now thinking in and practicing hooks composition for our state management. And with the advent of suspense, error boundaries and new data fetching patterns in React, it all culminated in our fifth iteration.

## Putting state into contexts

With our fifth approach, we embrace the fact that React has contexts to share state management across components, allowing us to initialize state closer to where it is used and take advantage of React data fetching patterns.

The only real problem with contexts is their performance. Even with projects like [Forget](https://dev.to/usulpro/how-react-forget-will-make-react-usememo-and-usecallback-hooks-absolutely-redundant-4l68), contexts will remain a bottleneck given that any state change within a context will cause reconciliation in all consumers of that context.

Without going into all the bells and whistles of this iteration, you can imagine a hook that does some state management:

```jsx
export const useSomeStateManagement = () => {
  const [count, setCount] = useState(0)

  useEffect(() => console.log("Increased count to", count), [count])

  return {
    count,
    increaseCount() {
      setCount(count + 1)
    }
  }
}
```

If you wanted that state management to be shared by multiple components, you would expose the hook in a context.
But for that to run optimally you would have to:

```jsx
export const useSomeStateManagement = () => {
  const [count, setCount] = useState(0)

  useEffect(() => console.log("Increased count to", count), [count])

  const increaseCount = useCallback(() => {
    setCount(current => current + 1)
  }, [])

  return useMemo(() => ({ count, increaseCount }), [count, increaseCount])
}
```

And even then any consumer of this context would reconcile whenever any state within the context changes, regardless of what state it actually accesses from the context.

With [Impact](https://github.com/christianalfoni/impact) we create a reactive context instead, using reactive primitives:

```jsx
export const useSomeStateManagement = context(() => {
  const count = signal(0)

  effect(() => console.log("Increased count to", count.value))

  return {
    get count() {
      return count.value
    },
    increaseCount() {
      count.value++
    }
  }
})
```

It does not matter how much state you put into this context: any consuming component reconciles based on what _signals_ it accesses, regardless of context. You also avoid memoization, dependency arrays and the reconciliation loop altogether. But most importantly, you can use React data fetching patterns and mount these contexts with an initialized state. An example from our exploration is how we mount a `BranchContext` within a `SessionContext`:

```tsx
// We pass props coming from routing to mount our BranchContext
export function BranchContext({ owner, repo, branch, workspaceId }) {
  // This component lives within our SessionContext, where we can
  // fetch new branches and initialize Pitcher (our VM process)
  const { branches, pitcher } = useSessionContext();

  // As we need the branch data first, we use suspense and the new "use" hook
  // to ensure that it is available before the component continues rendering.
  // A parent Suspense boundary takes care of the loading UI
  const branchData = use(branches.fetch({ owner, repo, branch, workspaceId }));

  // Since Pitcher has progress events while resolving its promise, we consume
  // the observable promise, which is just a promise in a signal, to evaluate
  // its status
  const { clientPromise, progress } = pitcher.initialize(branchData.id);

  if (clientPromise.status === "rejected") {
    throw clientPromise.reason;
  }

  if (clientPromise.status === "pending") {
    return (
      <h4>
        {progress.message} ({progress.progress})
      </h4>
    );
  }

  // We mount the nested context for the branch, giving it the already fetched
  // data and connected Pitcher instance. Any consuming component/context will
  // safely be able to access the initialized data and Pitcher API without any
  // additional type checks
  return (
    <useBranchContext.Provider
      branchData={branchData}
      pitcher={clientPromise.value}
    >
      <Branch />
    </useBranchContext.Provider>
  );
}
```

Signals and observability are not new, but they are having a bit of a renaissance these days. For example, [Solid JS](https://www.solidjs.com/), which is a great contribution to the ecosystem, binds its signals to the elements created, being “surgical” about its updates. With Impact and React, the observability is tied to the component. The drawback is that updates are not as “surgically” bound to the actual element reading a signal, but you keep your control flow in the language. What that means is that there are no special components for lists, switch and if statements. There is no risk of typing challenges or of language features like destructuring creating unexpected behavior. You can keep your mental model of “It’s just JavaScript”, which I personally favor. And as components only reconcile based on the signals they access, it is a huge performance boost regardless.
We have been running an [Engineer @ work](https://www.youtube.com/@CodeSandbox/streams) stream where Danilo, Alex and I have been exploring these concepts and building prototypes to see how state management through reactive contexts would work for us. The Impact project is also available on [GitHub](https://github.com/christianalfoni/impact) if you want to follow its progress. There are many kinds of applications you can build and they all have a mix of common and unique challenges. This article and the Impact project are about a very specific challenge for the complexity of state management required in the CodeSandbox editor. We are not aiming for a silver bullet, but something that works for our use case. Maybe you have similar challenges and if not, I hope this article leaves you with some thoughts for reflection. Thanks for reading!
christianalfoni
1,695,577
Embrace Opportunities: Say Yes to Yourself! 🌟🙌
A post by Arowolo Wahab Abiodun
0
2023-12-12T11:11:15
https://dev.to/abbeycity500/embrace-opportunities-say-yes-to-yourself-1m52
selfdevelopment
abbeycity500
1,696,555
Choosing the Best THC-Free CBD Resins: Benefits and Recommendations
Cannabidiol (CBD) is increasingly recognized for its potential health benefits, such as...
0
2023-12-13T09:00:25
https://dev.to/originecbd/choisir-le-meilleur-resines-cbd-sans-thc-bienfaits-et-recommandations-23mn
Cannabidiol (CBD) is increasingly recognized for its potential health benefits, such as stress management, pain relief, and reduced anxiety. For those who want to enjoy the benefits of CBD without the psychoactive effects of tetrahydrocannabinol (THC), choosing the [best THC-free CBD resin](https://originecbd.fr/84-resines-cbd-sans-thc) is essential. Here is a complete guide to help you do so.

**Benefits of THC-Free CBD:**

CBD is one of the many active compounds found in cannabis, but unlike THC, it has no psychotropic effects. Opting for a [THC-free CBD resin](https://originecbd.fr/84-resines-cbd-sans-thc) lets you enjoy the medicinal benefits of CBD without fearing the euphoric effects associated with THC. Potential benefits include reduced stress, relief from chronic pain, and support for mental health.

**Tips for Choosing the Best THC-Free CBD Resin:**

[Origin of the CBD](https://originecbd.fr/): Favor CBD products derived from organically grown hemp. A trustworthy source reduces the risk of harmful contaminants.

Full spectrum or isolate: Choose between full-spectrum CBD, which contains other cannabinoids, terpenes, and flavonoids, and CBD isolate, which is 100% pure. The decision depends on your personal preferences and needs.

Extraction method: Check how the CBD is extracted. Clean methods, such as CO2 extraction, preserve the purity of the CBD without leaving undesirable residues.

Third-party testing: Make sure the product has been tested by independent third-party laboratories. These analyses guarantee the product's compliance, notably the absence of THC, and provide detailed information on its chemical composition.
**Recommended THC-Free CBD Resins:**

CBDPure: Renowned for its high-quality products, CBDPure offers THC-free CBD resins sourced from organic crops.

Elixinol: Elixinol offers a varied range of [quality CBD resins](https://originecbd.fr/13-resines-cbd), all third-party tested to ensure their purity.

Endoca: A sustainability-focused company, Endoca offers organic products, including THC-free CBD resins, with lab results available online.

In conclusion, choosing the best THC-free CBD resin depends on your individual needs and the quality of the product. Before buying, take the time to research reputable brands thoroughly to ensure a positive and beneficial experience with CBD.
originecbd
1,699,590
How To Delete Old Image While Updating The Post
The Laravel project can go and run well without touching the older files. For this, you will use the...
0
2023-12-16T06:06:18
https://dev.to/webfuelcode/how-to-delete-old-image-while-updating-the-post-4cn4
laravel, tutorial, php
A Laravel project can run for a long time without anyone touching the older uploaded files. A simple update function just replaces a post's fields with the new values entered by the user. The problem appears as you grow and end up with thousands of images and files that are no longer in use. The site will take time to load, and by now we all know how much fast-loading pages matter for the user experience.

## Check if there is a file

If the post already has a file, we will remove it and replace it with the new one. If no new file is uploaded, we simply keep the old one.

```
if ($request->hasFile('image')) {
    $oldfile = public_path('images/post_img/') . $post->image;
    $filename = 'image' . '_' . time() . '.' . $request->file('image')->getClientOriginalExtension();

    if (File::exists($oldfile)) {
        File::delete($oldfile);
    }
}
```

## Full update function

Complete the update function by validating the title, description, category id, and image field.

```
use Illuminate\Support\Facades\File;

public function update(Request $request, Post $post)
{
    $validatedData = $this->validate($request, [
        'title' => 'required',
        'category_id' => 'required',
        'description' => 'required',
        'image' => 'sometimes|mimes:jpeg,jpg,gif,png'
    ]);

    if ($request->hasFile('image')) {
        $oldfile = public_path('images/post_img/') . $post->image;
        $filename = 'image' . '_' . time() . '.' . $request->file('image')->getClientOriginalExtension();

        if (File::exists($oldfile)) {
            File::delete($oldfile);
        }

        $request->file('image')->storeAs('post_img', $filename, 'public');
        $validatedData['image'] = $filename;
    }

    $post->update($validatedData);

    return redirect()->back()->withMessage('Your updated post is ready now!');
}
```

[Post: How To Delete Old File While Uploading New](https://codeweb.wall-spot.com/how-to-delete-old-file-while-uploading-new/)
webfuelcode
1,700,292
11 AI Libraries To Make You A Coding Wizard In 2024
Hey there, coding wizard in the making! Want to add some AI magic to your projects? There are some...
0
2023-12-17T12:19:34
https://learnn.cc/blogs/11-ai-libraries-to-make-you-a-coding-wizard-in-2024
ai, webdev, python, library
Hey there, coding wizard in the making! Want to add some AI magic to your projects? There are some amazing AI libraries out there that can make you look like a machine learning master. 🧙‍♂️ In this post, we'll introduce you to 11 of the best AI libraries that every developer should know about. Whether you want to build a chatbot, detect images, translate text, or something else altogether, these libraries have you covered. 🪄 With just a few lines of code, you'll be conjuring up AI models and apps like a pro.

For example, say you want to build a cat photo classifier. Just grab a dataset of cat photos, use a library like TensorFlow or PyTorch, and voila! You'll have a working model in no time. Or if natural language processing is more your thing, check out NLTK or SpaCy to build your own chatbot. There's a lot to cover, so let's dive in.

## 🪄 Abracadabra! An Intro to AI Libraries for Wannabe Wizards 🧙‍♂️

Want to wield the power of AI? These libraries are your magic wand! PyTorch and TensorFlow are powerful frameworks used by pros, but also ideal for newbies. Keras and Scikit-learn have simple APIs to get you casting spells in no time. Once you've mastered the basics, Flask and Django will let you deploy your own AI apps and share your sorcery with Muggles.

For computer vision, OpenCV has a spell for everything from face detection to optical flow. Natural language processing? Check out [NLTK](https://www.nltk.org/), SpaCy and [Gensim](https://pypi.org/project/gensim/). They'll have you charming sentences and understanding languages in the blink of an eye.

With the help of these libraries, you'll be well on your way to becoming an AI wizard! The magic is in your hands - now go out there and create something magical!

## 1.
📚 Tensorflow - The OG Spellbook for Deep Learning Magic

![](https://res.cloudinary.com/rahulism/image/upload/v1702809721/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_2_lenfkz.png)

Want to wield the power of AI? You'll need to master Tensorflow, Google's open-source library for machine learning. This tool lets you build neural networks with just a few spells, I mean, lines of code. 🧙‍♀️

[Tensorflow](https://www.tensorflow.org/) handles all the heavy lifting so you can focus on crafting your model. Define layers, activation functions, loss metrics - and Tensorflow builds the computational graph under the hood. Then simply call .fit() and your neural net will start learning!

With a flick of your wand (ok, running the model), Tensorflow uses backpropagation to tweak parameters and make predictions. Whether you want to detect objects in images, translate between languages, or anything else, Tensorflow has a spell for that. Boasting a huge community and constant updates from Google, Tensorflow is the gold standard for deep learning. Start practicing Tensorflow today!

## 2. 🤖 privateGPT - Securely interact privately with PDFs

![](https://res.cloudinary.com/rahulism/image/upload/v1702809827/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_3_mgtt1g.png)

Ever wanted to have a private conversation with your PDFs? 🤫 [privateGPT](https://github.com/imartinez/privateGPT) lets you do just that! This nifty AI library lets you securely interact with your PDF docs offline. 🧑‍💻

Say you have a pile of work docs: privateGPT enables you to ask questions about them and get responses without sharing anything externally. 🤐 Your data stays private and under your control. You can delete anything at any time.

With privateGPT, you're the wizard in control of your PDFs. 🧙‍♂️ Summon knowledge from them by asking questions.
Get insights into your docs like never before! Want to try your magic? privateGPT is open source and free to use. Install it locally and you'll be casting spells on your PDFs in no time! Your documents will come alive before your eyes. 👀 Who knew you could have a private conversation with PDFs? Now you do, thanks to the privateGPT library - your key to unlocking knowledge in your docs! 🔑 ## 3. 🧠 PyTorch - Flexible Sorcery for Neural Network Conjuring ![](https://res.cloudinary.com/rahulism/image/upload/v1702809864/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_4_fnxsjs.png) PyTorch is a popular open-source machine learning library based on Torch, used for applications such as computer vision and natural language processing. It’s known for being more flexible and easier to debug than some other frameworks. You can easily build neural networks with a few lines of code, and it has a huge community of developers creating tutorials and sharing projects. [PyTorch](https://pytorch.org/) uses dynamic computation graphs, so you can modify your networks on the fly. This is super useful when you’re experimenting and debugging your models. You can add or remove layers, change hyperparameters, switch between CPUs and GPUs, etc. PyTorch is also great for research since it allows you to test new ideas very quickly. Some of the biggest tech companies using PyTorch include Facebook, Twitter, NVIDIA, FastAI, and Allen Institute for AI. --- ![Free Developer Resources](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phzpv4npxf4nhpqroxto.png) Hey Developer, I have compiled a list of 1200+ [Free Developer Resources](learnn.cc). Thanks for stopping by. --- ## 4.📕LangChain ![](https://res.cloudinary.com/rahulism/image/upload/v1702809945/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_5_tfzazd.png) LangChain is like a magic wand 🪄 for AI devs. 
It lets you build apps using huge language models with barely any code! You can make chatbots, question answering systems, summarizers, and more. [LangChain](https://www.langchain.com/) is free for anyone to use and modify. Devs of any level can get started quickly by choosing from pretrained models like GPT-2, BERT, and XLNet. Just pick a model, add your data, and poof! You'll have an AI-powered app in no time. For example, say you want to make a chatbot. You'd select a model like GPT-2, feed it a dataset of sample conversations, and LangChain will do the rest. Your chatbot will be able to understand questions and respond naturally without any complex coding! Or if you want to build a news summarizer, just pick a summarization model like BART and feed it a bunch of articles. Like magic, it'll generate concise yet coherent summaries on demand. The possibilities are endless with LangChain. It's the perfect tool for devs who want to do amazing things with AI and NLP without becoming full-fledged wizards. ## 5. 🗣️ NLTK - Teaching Machines to Speak Like Us Humans ![](https://res.cloudinary.com/rahulism/image/upload/v1702809947/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_6_srij8j.png) The [NLTK](https://www.nltk.org/) library is your BFF if you want to build chatbots or anything involving natural language processing. 🤖 It has a bunch of datasets, tools, and algorithms to teach your program to understand and generate human language. With NLTK, you can: - Tokenize sentences into words - Remove stopwords like "the", "a", "is" - Stem words to their root (e.g. "fishing" -> "fish") - Tag parts of speech (noun, verb, adjective) - Build statistical language models - Analyze sentiment and subjectivity of text NLTK is super easy to use and has great documentation to get started. 🧙‍♂️ You'll be chatting with your AI in no time! 
Some examples to get you going:

```python
import nltk
from nltk.stem import PorterStemmer

# The tokenizer needs the "punkt" data the first time around
nltk.download('punkt')

stemmer = PorterStemmer()

print(stemmer.stem('fishing'))
# Output: 'fish'

print(nltk.word_tokenize("Hello, how are you!"))
# Output: ['Hello', ',', 'how', 'are', 'you', '!']
```

With NLTK in your toolbelt, you'll be well on your way to building magical AI systems that can speak like humans.

## 6. 🎨 SwirlSearch

![](https://res.cloudinary.com/rahulism/image/upload/v1702810047/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_7_siwitk.png)

[SwirlSearch](https://github.com/swirlai/swirl-search) is an AI-powered search engine that makes researching a breeze. This open-source tool searches multiple sources at once, so you'll get results from websites, images, videos, and more - all in one place!

Swirl uses natural language processing to understand what you're really looking for. So when you search for something broad like "how to code," Swirl will suggest more specific queries to narrow down results, like "how to code in Python" or "JavaScript tutorial for beginners."

Once you start searching, Swirl’s AI will analyze the content and generate insights to highlight key points. This makes it easy to skim articles and find the most useful information quickly. You'll be coding like a wizard in no time! 🧙‍♂️

SwirlSearch is designed for enterprise use, but anyone can tap into its AI power. The open-source platform has SDKs for Python, Node.js, and Java so you can build Swirl into your own applications. Or just use the Swirl API to add smart search to your website or app.

With SwirlSearch, searching the web has never been so simple or magical. 🪄 This AI-fueled tool will change the way you discover and understand information online. Your research skills will be swish and flick level in no time!

## 7.
🤖 Pezzo ![](https://res.cloudinary.com/rahulism/image/upload/v1702810048/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_8_ygzfbs.png) Have an AI model you've been dying to deploy? Pezzo makes it a breeze. 🧑‍💻This open-source library lets you manage and run your AI apps with just a couple lines of code. Pezzo streamlines the entire AI dev process, from training your model to tracking how it performs in the real world. You'll get detailed insights into costs, accuracy, and more with their easy-to-use monitoring tools. Say you built an AI assistant named Claude(🗿). 🤖 With Pezzo, you could deploy Claude, check how users respond to him, and make any needed tweaks to improve his skills, all from one place. [Pezzo](https://pezzo.ai/) simplifies AI in a big way. ## 8. 🧑‍✈️Copilotkit.ai ![](https://res.cloudinary.com/rahulism/image/upload/v1702810135/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_9_zugwqk.png) [Copilotkit.ai](https://copilotkit.ai/) is one handy library for building conversational AI into your apps.🤖 It lets you easily add chatbots and smart textareas to your product. 💬 Copilotkit serves as a bridge between your AI assistant (the copilot) and your app. It allows useful info to be shared between them, so your copilot has the context it needs. Your users will love how smart your app has become! For example, say you're building a travel app. You can use Copilotkit to add a chatbot that suggests destinations based on the user's interests. Or build a smart textarea that offers flight recommendations as the user types. The best part? Copilotkit is open source, so you can get started for free. ## 9. 
📈 Scikit-Learn - Predicting the Future With ML Divination ![](https://res.cloudinary.com/rahulism/image/upload/v1702810138/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_10_ftrqyn.png) [Scikit-learn](https://scikit-learn.org/stable/) is like a magic 🔮 crystal ball for developers. This popular ML library lets you see into the future by building predictive models! Want to know if a customer will churn or how many widgets you'll sell next quarter? Scikit-learn has you covered. It's a one-stop shop for classification, regression, clustering, dimensionality reduction, and more. You'll feel like a wizard conjuring up random forests, SVMs, and neural nets with just a few lines of Python code. No arcane incantations required - just clearly documented functions and a simple, consistent API. Whether you're a ML newbie or expert, scikit-learn makes it easy to tap into the power of predictive analytics and gain data-driven insights. Give it a whirl and you'll be divining insights in no time! ## 10.🏆 Weaviate ![](https://res.cloudinary.com/rahulism/image/upload/v1702810212/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_11_bjgrp1.png) [Weaviate](https://weaviate.io/) is an open-source vector database that lets you store data objects and vector embeddings so you can search based on similarity. This means you can explore unstructured data and find related info fast. Weaviate supports searching both vectors and the original data, so you get the best of both worlds! You can filter structured data while also getting recommendations based on vectors. This allows for some really powerful AI applications. For example, say you have a dataset with movies, actors, directors, genres, etc. You could find similar movies based on a director’s style or an actor’s filmography. You could also find actors and directors related to a particular genre. 
The possibilities are endless! Weaviate is a great choice if you want to build a lightning-fast search engine, recommendation system, or any other AI-powered app. ## 11. 🔍 Tavily - GPT Researcher ![](https://res.cloudinary.com/rahulism/image/upload/v1702810212/Learnn%20Assets%20--%20Blog%20Images/11%20AI%20Libraries%20to%20Make%20You%20a%20Coding%20Wizard/frame_safari_dark_11_bjgrp1.png) [Tavily's GPT Researcher](https://github.com/assafelovic/gpt-researcher) is your personal AI research assistant. This little guy will search the entire internet for you and find the most useful information on any topic. He's trained on huge datasets so he knows exactly what info you need. Just give GPT Researcher a search query and he'll get to work. In seconds, he'll provide summaries, key facts, examples, statistics, quotes, and more. He can even generate full research reports, presentations, and essays on the fly. 🪄 This AI wizard does in minutes what would take a human hours. For example, if you ask GPT Researcher about the latest trends in AI, he may discover that "reinforcement learning" and "deep neural networks" are hot topics. He can quickly explain what they are, how they work, and give code samples to get you started. Researching with GPT Researcher is fast, easy and fun. He brings the magic of AI to your fingertips so you can stop searching and start building! This little library is a must-have for any developer. ## Conclusion So there you have it, 11 AI libraries that will transform you into an AI coding wizard in no time. 🧙‍♂️ With tools like TensorFlow, PyTorch, and Keras at your fingertips, you'll be building neural networks and training models before you know it. And libraries like SciKit-Learn and Pandas make machine learning accessible even if you're just getting started with data science. What are you waiting for? Pick a library, find some tutorials, and start building something cool! You could create an image classifier, a recommendation engine, a chatbot. 
---------- Thanks for reading. Let's stay connected! You can also follow me on [Twitter](http://twitter.com/rahul_wip), [Instagram](http://instagram.com/rahul_hsl), and [LinkedIn](https://www.linkedin.com/in/rahulbiz/). If you enjoyed this post, share it with your network and leave your thoughts below, and follow for more cool content. Also, you can always [Buy Me A Coffee](http://buymeacoffee.com/rahuldotbiz).
rahxuls
1,703,308
AI in 2024: Art Thrives, Open-Source Battles GPT
If only there were a crystal ball with a chatbot inside. *ChatGPT: Tell us what will happen next in...
0
2023-12-20T06:16:17
https://dev.to/mindsdb/ai-in-2024-art-thrives-open-source-battles-gpt-2nl6
gpt, sql, bot, ai
If only there were a crystal ball with a chatbot inside.

**ChatGPT: Tell us what will happen next in AI.**

Will we all be texting telepathically? Popping popcorn and watching AI-generated movies? (A Marvel movie director says that’ll happen in 2025). The next best thing to a crystal ball: Polling AI founders, industry experts, and analysts about what they think is in store for 2024.

TL;DR:

**For humanity:** AI will unleash tremendous creativity but can’t create masterpieces on its own. Early adopters will manage emails with AI; employers will use it to give customer service a boost.

**For AI practitioners**: Open-source AI models and tools will gain ground against GPT. Many AI infrastructure startups, tools, and solutions will emerge to support AI deployment. Companies will integrate AI more strategically with products & services.

Here’s what else our experts see around the corner.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j5yhs1iuuzec79kzub0g.jpg)

Navin Chaddha, Managing Partner, Mayfield

Mayfield is a global venture capital firm with $3 billion under management.

1. Will AI replace knowledge workers in 2024? Nope. “The future lies in the synergy between humans and AI, enhancing human capabilities through advanced, conversational interactions and cognitive assistance,” Navin Chaddha says. “This collaboration will lead to a concept we refer to as ‘Human Squared,’ where AI acts not just as a tool, but as a teammate, multiplying our own abilities.”

2. Generative AI’s infrastructure will take a big leap. “We anticipate significant advancements in startups focusing on gen-AI infrastructure layers, akin to the evolution seen in web, mobile, and cloud technologies,” he says. Mayfield calls this infrastructure the ‘cognitive plumbing of GenAI,’ and it will consist of four crucial layers:

→ models/middleware/tools
→ data infrastructure/operations
→ infrastructure software/XaaS
→ semiconductors and systems

3.
AI will become integral in applications. This phenomenon mirrors the necessity of having a website or mobile app today, Chaddha says. The shift is not just about cost-cutting, but about enabling new capabilities previously beyond human reach. “At the base, we'll see growth in semiconductors and systems, with companies innovating beyond current technologies like Nvidia's GPU and AI processors. The next layer involves infrastructure enhancements, tackling challenges in AI security, networking, and storage. Above this, the data infrastructure and operations layer is critical for running AI effectively. Lastly, the top layer, comprising models, middleware, and developer tools, will see innovation, enabling easier integration and utilization of AI in various services.” 4. A new crop of founders will tackle complex problems and embark on the long journey of company building. “These entrepreneurs, dedicated to addressing significant challenges, will find a receptive audience among investors eager to support them from inception to iconic,” says Chaddha. “A focus will be on founders with a deep commitment to their vision and values, coupled with the emotional intelligence to lead and inspire their teams.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5kogjq2943zcmint715u.jpg) Eric Buatois, General Partner, Benhamou Global Ventures BGV is a Palo Alto-based early-stage venture capital firm focused on global, human-centric tech. 5. Creatives will get a big boost from AI. Especially those making video games, music, even opera, Eric Buatois says. “The creative people are going to have a blast.” But can AI generate a full-length movie? Not quite. Same for painted masterpieces. “Art is about emotion. I’m not so sure that AI can create an emotion like that yet.” 6. LLMs will get smaller and more specialized. “You can’t boil the universe,” Buatois says. 
“In many cases what’s needed is something that’s more optimized for one field, like an LLM for biochemistry, materials, biology etc.” 7. LLMs will move from non-essential to mission-critical. Companies will go far beyond running entertaining experiments in 2024, Buatois says. “As a result, we’ll expect LLMs to be faster and more reliable. You’ll see big improvements in reliability, response time, and cost.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xipg8eht93i263mn40nw.jpg) Didier Lopes, Founder and CEO, OpenBB OpenBB is an open-source fintech company that integrates dozens of data vendors into a single fully customizable platform. 8. AI movies: Not quite yet. “AI won’t be creating whole hour-and-a-half-long movies in 2024, but it will be creating plenty of backgrounds and scenes. Plus also a lot of short 5-minute films,” says Didier Lopes. 9. AI for email? Yes, for early adopters. “Early adopters will have an AI-powered system for managing all their emails,” he says. That includes organizing, sorting, reading, and responding to them. (Although Lopes won't be an early adopter here. “I like to have control and know about each context I have with any individual since that can build a stronger relationship over time.”) 10. AI for coding: Absolutely. “AI for coding is not just about tests anymore,” Lopes says. “Look at Cursor. I use it by default now and get the first iteration of code 60-70% done through it, and then through prompt engineering I can squeeze a few other 10-20%. Then I adapt the last 10-20% to my particular use case. 
This should be the norm, and people who don't adapt are actually losing out on productivity.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jbjy433pbm9pb7kppmbt.jpg) Richard Socher, CEO, you.com, & Founder, AIX Ventures Socher is former Chief Scientist and EVP at Salesforce, where he led teams working on fundamental research, applied research, product incubation, search, and other areas. 11. Open-source AI models will catch up to GPT-4. “Companies will eventually use LLM operating systems that help you get these (open-source) models production-ready.” 12. AI-generated videos will get longer. “They’ll eventually really solve the temporal consistency of multiple characters,” Socher said in a tweet. 13. LLMs will graduate to becoming agents. “These agents will have more powerful tools, search, APIs, coding abilities, clicking on the web, etc. In particular, AI assistants for search are helpful enough and will replace Google for many young people, students and knowledge workers.” 14. AI music generation? Yes. “The first vibe album may get released where an artist gives a general sequence of vibes. The exact lyrics will be personalized. Before that, we'll have more artists use and collaborate with AI to create new songs.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tkegxux8fz4xrucrx073.jpg) Brendan Burke, Senior Emerging Technology Analyst, PitchBook Burke contributes to the firm’s emerging technology research, covering AI, information security and IoT. 15. Open-source AI agent projects = big businesses. “We are just starting to learn how to use function calls to LLMs to complete complex tasks,” Brendan Burke says. “Experimentation with agents is producing some of the fastest growing open-source projects of all time. These experiments will evolve into reliable applications next year.” 16. Domain-specific models will take a leap ahead. 
“Right now, we are talking about AGI efforts and performance on academic tasks. By next year, individual professions will be comparing new models in their domains from specialized training labs.” 17. AI directing money flow? Yes. “AI will absolutely change capital allocation decisions. Banks and investors are at the leading edge of generative AI experimentation and have the ability to dramatically disrupt their investment processes to evaluate more companies with richer context. This may be a way for financial services firms to stand out in performance and innovation.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jg3tzer53vm70t34bjy7.jpg) Adam Carrigan, Co-Founder & COO, MindsDB MindsDB is an end-to-end AI platform for developers. Carrigan is a former management consultant at Deloitte, a University of Cambridge grad, and a YCombinator and UC Berkeley SkyDeck alum. 18. LLMs will have a big impact on customer service. “There’s a ton of repetitive work in this area, from help desks to frequently asked questions about products," Adam Carrigan says. "LLMs can really improve the experience here, and there will be quicker and better outcomes for everybody – including end users and companies.” 19. Open-source LLMs will shine. “Yes, there’s been a lot of excitement around OpenAI’s models. But in 2024, open-source models will come to the forefront. They’re cheaper, they give companies more control, and using them means you’re no longer beholden to a large organization like OpenAI.” 20. GPT wrappers are goners. “A whole crop of startups were built around improving on OpenAI. But if they don’t have a true moat, their businesses stand to get gobbled up by OpenAI itself.” 21. AI music finally gets good? Maybe. “There have been some hits this year (“Heart on My Sleeve” by an AI mashup of Drake and The Weeknd) and clear misses (think Anna Indiana). 
But I’m hopeful that 2024 is the year that we can tune into AI-generated music and actually enjoy it.” 22. AI hardware: Not yet. “The Humane AI pin launch was interesting, but 2024 is too soon for AI devices. Early adopters may try out the pin or other gadgets, but this won’t take off yet in a mainstream way.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lqn21mdrflq70j87wnh1.jpg) Zoya Khan, CEO and Co-Founder, AfterWork Khan is a serial entrepreneur, angel investor, and former venture capital analyst. 23. Human interaction will be fully back, as more people automate repeatable tasks like scheduling calls or appointments. “People want to get back to interacting with each other and don’t want to be bogged down by all the logistical challenges of planning,” Zoya Khan says. “So I believe AI will be used in creative ways to allow people to connect without the logistical headaches.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/sk8gbuy69lf7dnuhmtbd.jpg) Divyansh Garg, AI founder and researcher 24. Skynet family reunion? Garg predicts: “By early 2025, the number of AI agents active at a moment will exceed the population of humans on Earth.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9wqh0frshks766ubrgk1.jpg) Lior Sinclair, AI/ML Engineer and Founder of AlphaSignal AlphaSignal is one of the best-read technical newsletters for engineers and researchers. 25. An open-source model will beat GPT-4 in 2024. 26. LLMs will become smaller. (Much smaller.) Phi2 is a good example. 27. LLMs will understand math and physics. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cxi290u4ffz6dkkyq17n.jpg) Lindsey Gamble, Associate Director of Influencer Innovation, Mavrck Gamble was named a Top Creator Economy & Influencer Marketing Expert by Business Insider. 28. AI will be the key way that creators scale. 
“With AI, creators will leverage tools to transform their content into different formats across various social media platforms, such as turning long-form videos on YouTube into multiple short-form videos for YouTube Shorts, Instagram Reels, and TikTok, among others,” Lindsey Gamble says. 29. AI dubbing tools will take off. “AI dubbing tools that let creators put out their versions of their videos in different languages will have the most impact by allowing them to tap into audiences they previously couldn’t reach due to the language barrier. This is something that only the biggest creators had the resources to do previously, but with YouTube and other standalone companies building out tools like this, it will create more of an equal playing field for creators to build and grow their audiences and monetize them while reaching fans across the globe.” 30. Get ready for a wave of digital doppelgangers. “More creators will create AI versions of themselves, ranging from chatbots to avatars, built on their existing content,” says Gamble. “Chatbots will be the main entry point for creators, especially B2B creators.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/0ttqvtawhxz9smzjarc9.jpg) Tim Ruscica, Developer, Educator, and Influencer Ruscica's @TechWithTim YouTube channel has more than 1.39 million subscribers. 31. AI won’t replace programmers. “Yes, AI is advancing very quickly, and surely it will affect the productivity and demand for conventional developers. However, I don't see it replacing them,” Tim Ruscica says. Companies still need developers, they need people that have the knowledge to understand, put in context and implement solutions provided by AI. AI is already changing how developers work but those developers are the ones that can use it most effectively and gain the most benefit from it.” 32. AI-assisted writing will go mainstream. 
“People have relied on auto-complete and spell check for a very long time already, and now AI is great at augmenting writing, especially with extensions and plugins for things like email. I think it will be more common for people to simply rely on it for almost all of their writing.” I can see a world where professionals merely list a few points in notes and have AI fill in the rest. (Leading many of us to wonder whether we are reading or responding to AI-generated content.) I think this will vastly reduce the literacy barrier for many people, especially in nations where English is not their first language.” 33. We’ll develop a new appreciation for art. “At the same time, I do believe that with more AI-generated content around, we will slowly learn to appreciate other forms of art, music and creativity produced by humans. Many people will resist reliance on technology for creative pursuits and strive more and more to do things ‘naturally’.” ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f389sjljslit0nnrlk45.jpg) Kirk Borne, Founder, Data Leadership Group Borne is a data scientist, influencer, speaker, and consultant. 34. 2024 should be about the “why” of AI. “As incredible as large language models (LLMs) have been in 2023, the focus on ‘what’ these are will diminish as the focus increases on the more strategic business questions: ‘why’ and ‘where’ should we be using these AI developments within our organization, in a manner consistent with our business goals, mission, culture, and go-to-market strategies." 35. AI will be more strategically integrated. “I believe we will see more AI enablement within existing business tools and processes, with less concern about figuring out how to do tactical stand-alone deployments of the latest, greatest hyped-up AI tools. 
I also believe we will see greater appreciation of data strategy (data quality, metadata labeling, data integration, data workflow orchestration, data-enablement of AI deployments) within the realm of AI implementations, since AI not only consumes data, but AI devours data. Consequently, good data in, good AI results out — versus the undesirable alternative.” 36. Personal LLMs are what’s next. “We first saw LLMs tackle worldwide content on the Internet. We next saw LLMs focus on on-prem private enterprise knowledge bases and data sources. I now want to see my own personal, privacy-protecting, smart LLM deployed on my own data sources (email, personal computer, My Documents, search histories, Internet favorites, workflows, data handling tasks, etc.)”
mindsdbteam
1,707,832
Python: the making of Secret Santa
Let's practice with a basic Secret Santa generator in Python. What is Secret...
8,625
2023-12-25T12:45:50
https://dev.to/spo0q/python-the-making-of-secret-santa-5h2i
python, programming, beginners
Let's practice with a basic Secret Santa generator in Python.

## What is Secret Santa?

Secret Santa is a very popular Christmas tradition. The idea is to exchange very cheap but funny gifts with your colleagues or your friends. There are three mandatory rules:

- each member is assigned to another member **randomly**
- each member will buy a gift and receive one
- a member cannot select their own name

## Avoid common misconceptions

You don't need an even number of participants to organize a Secret Santa. You might get confused, as the whole operation consists of making pairs, but the size of _the stack_ does not matter here :)

## Map the problem

Let's say there are seven members for the Secret Santa event: A, B, C, D, E, F, and G. The trivial example would be:

```
A->B->C->D->E->F->G->A
```

Looks easy! We could simply assign pairs following the order of the list. However, anyone could easily guess all pairs, ruining the whole experience. Let's shuffle!

## Basic Python program

**Warning**: we must implement the mandatory rules. It can be done in four steps:

1. take the names as inputs
2. store the names in a list
3. shuffle the list
4. assign unique pairs

We will process a `.csv` file that contains two different lists of members:

```
John,Sarah,Paul,Jenny,Oliver,Maria,Elliot
Joyce,Helena,Ella,Robert,Marla,Eva,Kate
```

Python will abstract the complexity.
The built-in `enumerate()` gives us each member's index, which we use to pick the next member in the shuffled list:

```Python
#!/usr/bin/env python3
import sys
import random
import csv

"""Generate secret santa pairs from a list of members"""


def make_secret_santa_pairs(members):
    total = len(members)
    random.shuffle(members)
    pairs = [
        (member, members[(j + 1) % total])
        for j, member in enumerate(members)
    ]
    print("[+]", pairs)


try:
    with open("members.csv", mode="r") as csv_file:
        csv_items = csv.reader(csv_file)
        for item in csv_items:
            if len(item) > 0:
                make_secret_santa_pairs(item)
except Exception as e:
    print("[-] Error while setting members with the .csv file:", e)
    sys.exit(1)
```

_N.B.: I use a .csv here because I'm lazy, but feel free to replace it with a prompt or something more interactive_

## What to do after

* send invitations to each member by email
* provide user accounts
* add wish lists
* add an interface to enter names instead of using .csv files

And many more... In this post, we barely scratched the surface.

## Wrap this up

Such loops and permutations are not uncommon in programming, so it can be nice to practice with a concrete case.
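Because the shuffled list forms a single cycle, the mandatory rules hold automatically whenever there are at least two members: nobody draws themselves, and everyone gives and receives exactly one gift. Here is a quick property check of that idea (a standalone sketch with made-up names, not part of the original script):

```python
import random


def make_pairs(members):
    """Shuffle, then pair each member with the next one, wrapping around."""
    members = list(members)
    random.shuffle(members)
    total = len(members)
    return [(m, members[(j + 1) % total]) for j, m in enumerate(members)]


members = ["John", "Sarah", "Paul", "Jenny", "Oliver", "Maria", "Elliot"]
for _ in range(1000):
    pairs = make_pairs(members)
    # Rule: a member cannot select their own name.
    assert all(giver != receiver for giver, receiver in pairs)
    # Rule: each member buys exactly one gift and receives exactly one.
    assert sorted(g for g, _ in pairs) == sorted(members)
    assert sorted(r for _, r in pairs) == sorted(members)
print("all good")
```

With two or more distinct names, the wrap-around indexing can never pair a name with itself, which is why the check passes on every shuffle.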
spo0q
1,722,795
Build a 3D Earth Globe Model in Three.js (PS: It's easier than you think) 🌏😱
Introduction &amp; Demo Hop on a fascinating tutorial as I guide you through the...
0
2024-01-10T05:04:48
https://dev.to/arjuncodess/build-a-3d-earth-globe-model-in-threejs-ps-its-easier-than-you-think-2pod
webdev, javascript, beginners, tutorial
## Introduction & Demo

Hop on a fascinating tutorial as I guide you through the surprisingly simple process of building a stunning 3D Earth Globe Model using Three.js. To begin with, I will explain what WebGL & Three.js are, and then we will proceed to the build.

[This](https://earth-globe.vercel.app/) is what you will build. Find the source code [here](https://github.com/ArjunCodess/earth-globe-threejs).

Let’s get started!

## What is WebGL?

WebGL is a JavaScript graphics API that renders high-quality interactive 3D and 2D graphics. It can be used directly in HTML `<canvas>` elements.

## What is Three.js?

Three.js is a JavaScript library that is used for creating and displaying 3D graphics in a "compatible" web browser. It uses WebGL, which is a low-level graphics API.

## File Structure

```txt
---- textures
|---- earthCloud.png
|---- earthbump.jpeg
|---- earthclouds_8k.jpeg
|---- earthmap.jpeg
|---- earthmap_clouds.jpeg
|---- earthmap_night.jpeg
|---- galaxy.png
---- index.html
---- style.css
---- three.js
---- main.js
```

Download the /texture files through [this](https://github.com/ArjunCodess/earth-globe-threejs/tree/main/texture) link.

## Code

### /index.html

This HTML code sets up the environment for our Three.js Earth Globe Model. It defines the document structure and links external resources such as style sheets and JavaScript files. The canvas element with the id 'globe' is where our 3D model will be drawn. The order of script inclusion is important, making sure that ‘three.js’ is loaded before our main logic in 'main.js'.
```HTML
<!DOCTYPE html>
<html lang="en">
<head>
    <meta charset="UTF-8">
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <title>Earth Globe Model With Three.js</title>
    <link rel="stylesheet" href="style.css">
</head>
<body>
    <canvas id="globe"></canvas>
    <script src="three.js"></script>
    <script src="main.js"></script>
</body>
</html>
```

### /style.css

Styling for the element with id 'globe', which is our canvas: fixed position, aligned to the top-left corner of the screen.

```css
#globe {
    position: fixed;
    top: 0;
    left: 0;
}
```

### /three.js

Just head over to [build/three.js](https://threejs.org/build/three.js) and copy/paste the code in this file. Nothing else to do here.

### /main.js

The main coding starts here. Let's go!

#### Step 1

Create a main function.

```js
function main() {}
```

#### Step 2

We now establish the core components: scene, renderer, and camera. First, we create a new `THREE.Scene()` to serve as the container for all our 3D elements. Next, the `THREE.WebGLRenderer()` is configured to render our scene onto our HTML canvas, which we defined earlier with the ID "globe". The renderer's size is then set to match the window dimensions using `renderer.setSize(window.innerWidth, window.innerHeight)`. Lastly, a `THREE.PerspectiveCamera()` is initiated. This camera is pivotal for defining our view of the scene. The parameters include a 45-degree vertical field of view, the aspect ratio based on the window dimensions, and the near and far clipping planes (0.1 and 1000, respectively). This determines what is visible within the camera's view. Set the camera's position along the z-axis to 1.7. The scene is now set.
```js
const scene = new THREE.Scene();

const renderer = new THREE.WebGLRenderer({
    canvas: document.querySelector('#globe')
});
renderer.setSize(window.innerWidth, window.innerHeight);

const camera = new THREE.PerspectiveCamera(45, window.innerWidth / window.innerHeight, 0.1, 1000);
camera.position.z = 1.7;
```

#### Step 3

Let's get into the Earth itself. We start by defining the Earth's geometry using `THREE.SphereGeometry()`, setting the radius to 0.5 and specifying the number of horizontal and vertical segments (32, 32 for a better appearance). Next, initialise the Earth's material with `THREE.MeshPhongMaterial()`. This material includes textures. The map property loads the Earth's surface texture, while bumpMap integrates a bump map for added depth. Adjust the bumpScale parameter to control the intensity of the bumps. Finally, we combine the geometry and material to create a mesh, which forms our 3D Earth model. This mesh is added to the previously made scene.

```js
const earthGeometry = new THREE.SphereGeometry(0.5, 32, 32);

const earthMaterial = new THREE.MeshPhongMaterial({
    map: new THREE.TextureLoader().load('texture/earthmap.jpeg'),
    bumpMap: new THREE.TextureLoader().load('texture/earthbump.jpeg'),
    bumpScale: 1,
});

const earthMesh = new THREE.Mesh(earthGeometry, earthMaterial);
scene.add(earthMesh);
```

#### Step 4

Now, let’s light up our globe by adding some lighting effects. We create a variable pointLight with `new THREE.PointLight()`. This light source emits white light (0xffffff) with an intensity of 5 and a range of 4 units. Adjust these parameters based on your desired lighting effects. The light’s position is set using `pointLight.position.set(1, 0.3, 1)`, specifying the x, y, and z coordinates. This determines where the light originates. Finally, the pointLight is added to the scene.
```js
const pointLight = new THREE.PointLight(0xffffff, 5, 4);
pointLight.position.set(1, 0.3, 1);
scene.add(pointLight);
```

#### Step 5

Let’s make our globe more realistic by adding clouds. We start by defining the geometry of the cloud using `THREE.SphereGeometry()` with a slightly larger radius (0.52) than that of the Earth itself to enhance the visual effect of the cloud layer. The number of horizontal and vertical segments remains at 32 for smoothness. The cloud material is created with `new THREE.MeshPhongMaterial()`. The map property loads the cloud texture, and we set `transparent: true` to ensure that clouds do not obstruct the view of Earth below. Now, we combine the geometry and material to form a cloud mesh. This mesh is then added to the scene, which gives our Earth a realistic view.

```js
const cloudGeometry = new THREE.SphereGeometry(0.52, 32, 32);

const cloudMaterial = new THREE.MeshPhongMaterial({
    map: new THREE.TextureLoader().load('texture/earthCloud.png'),
    transparent: true
});

const cloudMesh = new THREE.Mesh(cloudGeometry, cloudMaterial);
scene.add(cloudMesh);
```

#### Step 6

To create a celestial atmosphere, we will introduce a starry background texture. We define the star geometry using `THREE.SphereGeometry()` and give it a large radius (5) to cover the entire scene. The increased number of horizontal and vertical segments (64, 64) ensures that the starry sky is well detailed. The star material is created with `THREE.MeshBasicMaterial()`. The map property loads a texture representing a galaxy. The `THREE.BackSide` setting ensures that the material is applied to the inner side of the sphere, creating an environment that surrounds our scene. In the end, we combine the geometry and material to form the star mesh. This mesh is then added to the scene.
```js
const starGeometry = new THREE.SphereGeometry(5, 64, 64);

const starMaterial = new THREE.MeshBasicMaterial({
    map: new THREE.TextureLoader().load('texture/galaxy.png'),
    side: THREE.BackSide
});

const starMesh = new THREE.Mesh(starGeometry, starMaterial);
scene.add(starMesh);
```

#### Step 7

We now declare and initialize variables outside of the main function that will help us handle interaction with the user and control the rotation of the globe.

- `targetRotationX` and `targetRotationY`: store the rotation values around the X and Y axes (they control the rotation of the Earth Globe Model in the render function).
- `mouseX` and `mouseY`: store the current mouse coordinates on the screen.
- `mouseXOnMouseDown` and `mouseYOnMouseDown`: store the mouse coordinates at the moment the user starts dragging.
- `windowHalfX` and `windowHalfY`: represent half of the window's width and height.
- `dragFactor`: determines how sensitive the rotation is to mouse dragging. A larger value makes the rotation respond more strongly to each pixel of mouse movement, while a smaller value dampens it.

```js
let targetRotationX = 0.005;
let targetRotationY = 0.002;
let mouseX = 0, mouseXOnMouseDown = 0, mouseY = 0, mouseYOnMouseDown = 0;
const windowHalfX = window.innerWidth / 2;
const windowHalfY = window.innerHeight / 2;
const dragFactor = 0.0002;
```

#### Step 8

Define functions to render and animate our globe. The render function updates the rotation of both the Earth mesh and the cloud mesh based on the targetRotationX and targetRotationY values. The animate function uses `requestAnimationFrame()` to create a continuous animation loop. It calls the render function within each frame, updating the display and creating a smooth animation effect. Finally, we call the animate function to start the animation loop.
```js
const render = () => {
    earthMesh.rotateOnWorldAxis(new THREE.Vector3(0, 1, 0), targetRotationX);
    earthMesh.rotateOnWorldAxis(new THREE.Vector3(1, 0, 0), targetRotationY);
    cloudMesh.rotateOnWorldAxis(new THREE.Vector3(0, 1, 0), targetRotationX);
    cloudMesh.rotateOnWorldAxis(new THREE.Vector3(1, 0, 0), targetRotationY);
    renderer.render(scene, camera);
}

const animate = () => {
    requestAnimationFrame(animate);
    render();
}

animate();
```

#### Step 9

Let's now define a set of functions that handle mouse events for interaction with the globe (basically rotation).

- `onDocumentMouseDown`: called when the mouse button is pressed. It prevents the default behavior of the event, adds event listeners for mouse move and mouse up, and records the initial mouse position.
- `onDocumentMouseMove`: called when the mouse is moved. It updates the current mouse position and calculates the rotation based on (currentPosition - initialPosition).
- `onDocumentMouseUp`: called when the mouse button is released. It removes the event listeners for mouse move and mouse up, which indicates that the interaction has ended.
```js
function onDocumentMouseDown(event) {
    event.preventDefault();

    document.addEventListener('mousemove', onDocumentMouseMove, false);
    document.addEventListener('mouseup', onDocumentMouseUp, false);

    mouseXOnMouseDown = event.clientX - windowHalfX;
    mouseYOnMouseDown = event.clientY - windowHalfY;
}

function onDocumentMouseMove(event) {
    mouseX = event.clientX - windowHalfX;
    targetRotationX = (mouseX - mouseXOnMouseDown) * dragFactor;

    mouseY = event.clientY - windowHalfY;
    targetRotationY = (mouseY - mouseYOnMouseDown) * dragFactor;
}

function onDocumentMouseUp(event) {
    document.removeEventListener('mousemove', onDocumentMouseMove, false);
    document.removeEventListener('mouseup', onDocumentMouseUp, false);
}
```

#### Step 10

Add an event listener to the document for the mousedown event, which calls the onDocumentMouseDown function when the user presses the mouse button and starts the globe rotation. When the window finishes loading, execute the main function.

```js
// last line of the main function
document.addEventListener('mousedown', onDocumentMouseDown, false);

// outside of the main function
window.onload = main;
```

## Conclusion

It is a thrilling experience to make an Earth Globe Model with Three.js. Be creative and keep experimenting with new things! 🌐✨

Happy Coding! 🚀

Thanks for 11175!
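One detail the tutorial leaves out is window resizing: the renderer and camera are sized once in Step 2, so the globe will stretch if the browser window changes shape. Below is a minimal sketch of a resize handler (the `onWindowResize` helper and its name are my own addition, not from the original code; it assumes the `renderer` and `camera` objects created in Step 2):

```javascript
// Recompute the renderer size and the camera aspect ratio.
// Works with the THREE.WebGLRenderer and THREE.PerspectiveCamera from Step 2.
function onWindowResize(renderer, camera, width, height) {
    renderer.setSize(width, height);
    camera.aspect = width / height;
    // Three.js requires this call after changing camera parameters,
    // otherwise the new aspect ratio is ignored.
    camera.updateProjectionMatrix();
}

// In the browser, wire it to the resize event (e.g. inside main):
// window.addEventListener('resize', () =>
//     onWindowResize(renderer, camera, window.innerWidth, window.innerHeight));
```

Keeping the handler as a small pure-ish function makes it easy to test with stub objects, and the browser wiring stays a one-liner.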
arjuncodess
1,723,060
SC900 Dumps
Practice Time Management: SC900 Dumps to practice time management during the exam. Simulate the exam...
0
2024-01-10T11:03:01
https://dev.to/sumblefew1966/sc900-dumps-3n4o
Practice Time Management: Use <a href="https://shorturl.at/begx5">SC900 Dumps</a> to practice time management during the exam. Simulate the exam conditions as closely as possible, adhering to the time constraints for each section. This helps in refining your strategy for the actual exam day.

Verify Answers and Seek Clarifications: In case of uncertainty or discrepancies in dump answers, take the time to verify the information. Utilize online forums or consult with peers and mentors to seek clarifications. It's essential to have accurate information for solid exam preparation.

Conclusion: Mastering the SC900 exam is a significant achievement for anyone aspiring to establish a career in cybersecurity. <a href="https://shorturl.at/begx5">SC900 Dumps</a>, when used strategically, can enhance your preparation and increase the likelihood of success. However, it's crucial to approach their use with a balanced perspective, combining them with official resources and hands-on experience. As you embark on your SC900 certification journey, remember that true mastery comes from a comprehensive understanding of the concepts and a commitment to continuous learning in the dynamic field of cybersecurity.

Click Here More Info: https://examtopicsfree.com/microsoft-exam-dumps/sc900-dumps/
sumblefew1966
1,723,065
Mobile App Vs Web App: Which Is Best in 2024?
If you want to create an app and you're trying to choose between building a web app or a mobile app,...
0
2024-01-10T11:06:36
https://dev.to/sparkouttech/mobile-app-vs-web-app-which-is-best-in-2024-3ab8
webdev, javascript, beginners, programming
If you want to create an app and you're trying to choose between building a web app or a mobile app, you're in the right place. This article offers a clear definition of each of them, along with the main differences, pros, and cons of web apps and mobile apps. It will also give you a definitive answer on what type of app is best for your project.

**What is a web app?**

A web application is a software application that is accessed through a web browser. Web applications are built using web technologies, such as HTML, CSS, JavaScript, React, Python, etc. They can be accessed from any device with an internet connection and do not require installation.

A web application's code is stored on a remote server, accessed by a web browser (e.g., Google Chrome, Firefox, or Safari) and delivered to the user when the user enters the application's URL. The terms web application and website are sometimes used interchangeably, but typically a web application refers to a website with a high level of interactivity, as opposed to a simple static content site. Google Docs and Canva are two examples of web applications: interactive browser-based sites. This also includes Progressive Web Apps, which offer greater functionality and a mobile-like experience, while still running in the browser. Medium is an example of a progressive web app. To learn more about Progressive Web Apps, check out our ultimate guide.

**What is a mobile app?**

A mobile app is a software application that runs on the operating system of a mobile device, such as Android OS or iOS. The code for mobile apps is downloaded directly to the user's device, rather than being hosted remotely and accessed through a browser. This allows mobile apps to work without an internet connection (although some require connectivity for certain functions).

John Varvatos' Shopping App: An Example of a Mobile App

Mobile apps can take different forms: native, hybrid, and cross-platform.
Native apps are coded using programming languages native to specific operating systems, such as Swift or Objective-C for iOS, and Java or Kotlin for Android. Hybrid and cross-platform applications use a combination of different frameworks, often including some web technologies such as HTML and JavaScript.

**Pros and Cons of Mobile Apps vs. Web Apps**

Here's a brief breakdown of the pros and cons of building mobile apps and web apps, from a business standpoint.

**Benefits of Web Applications**

- Easy to develop and implement.
- Work on any platform with an internet browser and an active internet connection (desktop, laptop, mobile devices).
- Easier and cheaper to upgrade and maintain.
- Developers with [web application development services](https://www.sparkouttech.com/web-application-development/) experience are easier to find than mobile developers.

**Cons of Web Apps**

- They don't offer an optimal user experience for mobile users. Web apps tend to run slow on mobile devices.
- They may not be as secure as mobile apps.
- Engagement and retention are lower than with mobile apps.

**Advantages of Mobile Apps**

- Deliver an easy-to-use, engaging, and immersive experience on mobile.
- Can provide offline functionality.
- Can take advantage of mobile device features such as GPS, camera, etc.
- Allow businesses to send push notifications to app users across devices.
- Achieve higher engagement and retention.
- Can be published and promoted on the Apple App Store and Google Play Store.

**Cons of Mobile Apps**

- Can be difficult to develop.
- Mobile app development is often expensive and time-consuming.
- More difficult and expensive to maintain.
- Native mobile apps require separate builds to serve different platforms/operating systems.

**Key Points of the Difference Between a Web App and a Mobile App**

Let's dig a little deeper into the pros and cons mentioned above, and how web apps and mobile apps compare. The main differences lie in implementation, platform compatibility, and the investment required to build and maintain.
**Deployment**

Web apps are deployed through the browser, while a mobile app has its code downloaded locally to the user's device. This makes it easier for new users to access and use a web application. They can follow a link to the app or find it on Google and start using it right away. With a mobile app, users have to take action and download the app to their device before they can open and use it.

Although this acts as a sticking point, it also makes mobile apps more "sticky", as they remain on the user's device until they are uninstalled. The mobile app icon remains on the user's home screen, who can return to it with a single tap. A web app disappears from the device when the browser tab is closed, and depends on the user consciously re-entering the URL.

**Platform Compatibility**

Web apps can work on any device with a browser and internet connection, unlike mobile apps, which can only work on the platform they've been coded for. This can be an advantage or a disadvantage. On the one hand, it's an advantage for web applications, as a single codebase can serve a larger number of users across a wider range of platforms. On the other hand, mobile apps are capable of delivering a deeper, more immersive, and more satisfying experience on mobile devices, as they have been created specifically for the platform on which they run. Although web applications are more accessible on different platforms, their user experience suffers when trying to cater to multiple types of users.

**Investment (time, money, effort)**

Web apps are faster, easier, and cheaper to create than mobile apps, in almost all cases. The technology behind web applications is less complicated, and there is a greater abundance of developers and development tools available to create web applications. In comparison, native mobile app development is difficult.
It takes a long time to program mobile apps, developers are harder to find, and fees are higher. Creating a native mobile app typically costs between 5 and 6 figures, and requires two distinct development teams to launch on the two most popular mobile operating systems (iOS and Android). However, cross-platform and hybrid applications reduce this investment to a greater or lesser extent, sometimes saving up to 80% or more of the cost of developing native applications.

**How to Choose the Best Type of Application for Your Project**

There isn't necessarily a "best" type of app between mobile and web apps. The best type of app depends on what you want to achieve, your target audience, your budget, and the time you have to develop it. In the next section, we'll explain how to choose the type of app that's right for you.

**Think About Your Target Audience**

Think about who you're building the app for. If your target audience primarily uses mobile devices, you should create a mobile app. If you're not sure, the data shows that a large portion of your target audience is likely to be mobile users: today, more people around the world connect to the internet from mobile devices than from computers, and this number continues to rise. If you think your audience uses a variety of different platforms, you may want to build a web app first, to cater to a wider range of users. Hybrid apps can also be a good option to serve more users on more platforms.

**Required Functionality of Your Application**

What features should the app have? If your app needs access to the device's hardware or sensors, then a mobile app is a must. The same is true if you're creating an app where users take/upload photos or videos, such as an Instagram/TikTok/Snapchat-type app.

Do you need or want your app to be accessible offline? If so, you'll need a mobile app. There are also location features, push notifications, tap and swipe functionality, and other features that aren't necessarily unique to mobile apps, but that are much easier to build and work much better in mobile apps than in mobile web apps. On the other hand, do you need your app to work on both desktop and mobile? If so, you'll need to create a web app, or at least a hybrid app that can work on more than just mobile devices.

**Budget**

How much money do you have to spend on your project? If you're on a tight budget, you might want to create a web app instead of a mobile app. Web apps are much cheaper, especially when compared to native mobile apps. They are also much cheaper to maintain: when you build a mobile app, you have to factor in 15-20% of the initial development cost for maintenance and updates each year. Due to the lower investment, many mobile or web app development companies choose to build their app as a web app first, to use it as an MVP or "proof of concept" to generate the backing or investment they need to build a mobile app. Keep in mind, however, that there are some ways to create mobile apps that reduce the cost significantly.

**Timeframe**

Finally, consider the timeframe you want. It takes a lot of time to build native mobile apps (often more than 6 months of full-time development). A **[web application development company](https://www.sparkouttech.com/web-application-development/)** can create and launch a web app much faster. Hybrid mobile apps, again, offer an interesting compromise between the two. Some hybrid app builders allow you to get fully functional mobile apps up and running in as little as two weeks.
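The 15-20% yearly maintenance rule of thumb mentioned earlier is easy to make concrete. A quick, purely illustrative calculation (the $150,000 initial build cost is a made-up figure, not from the article):

```python
# Illustrative only: yearly maintenance estimate from the 15-20% rule of
# thumb, assuming a hypothetical $150,000 initial development cost.
initial_cost = 150_000
low_estimate = 0.15 * initial_cost
high_estimate = 0.20 * initial_cost
print(f"Expected yearly maintenance: ${low_estimate:,.0f}-${high_estimate:,.0f}")
# → Expected yearly maintenance: $22,500-$30,000
```

Over a five-year app lifetime, maintenance alone can therefore approach the original build cost, which is part of why web apps are often chosen as a first step.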
sparkouttech
1,723,190
Oracle Performance Tuning: Tips, Tricks and Hidden Secrets
Ever wondered why Oracle Performance Tuning is so thrilling for budding Oracle developers and...
0
2024-01-10T12:35:54
https://dev.to/dbajamey/oracle-performance-tuning-tips-tricks-and-hidden-secrets-561c
oracle
Ever wondered why Oracle Performance Tuning is so thrilling for budding Oracle developers and DBAs? The answer lies in the art of transforming a sluggish database into a finely-tuned powerhouse. At first, it might seem as complex as decoding a mad scientist’s experiment. But fear not! This article is your guide, turning the daunting into doable with practical examples. Soon, tuning Oracle performance will become second nature to you. And the end result? Turning your Oracle into a speed powerhouse will make your users and your boss happy. You’ll also level up your game to become an Oracle expert. So, let’s dive into this challenging but fun journey - https://codingsight.com/oracle-performance-tuning-tips-and-tricks/
dbajamey
1,723,204
pydroid
firstly, a big hi. i'm new here. does anyone use (or have used) pydroid3? after the last android...
0
2024-01-10T12:52:53
https://dev.to/anorieni/pydroid-3cl7
firstly, a big hi. i'm new here. does anyone use (or have used) pydroid3? after the last android update, it started acting funny: it can't find, nor install, the requests package. anyone know what this is about? thanks
anorieni
1,723,229
Webinar: Shift-Left: Accelerating Quality Assurance in Agile Environments [Experience (XP) Series]
Agile methodologies guide teams to deliver high-quality products efficiently and quickly in the...
0
2024-01-10T13:18:45
https://dev.to/yashbansal651/webinar-shift-left-accelerating-quality-assurance-in-agile-environments-experience-xp-series-369
Agile methodologies guide teams to deliver high-quality products efficiently and quickly in the ever-changing software industry. As the demand for rapid innovation increases, QA in the Agile process becomes pivotal. Therefore, by implementing the shift-left concept, quality assurance tasks are moved earlier in the development lifecycle, challenging traditional testing timelines. Quality-Driven Development (QDD) is a guiding beacon for improving testing practices in the shift-left revolution. It is a transformative concept, changing how teams or organizations view the connection between quality and development. It’s a mindset that highlights collaboration, proactive testing and shared responsibility for software quality within the development team. {% youtube 2v98Q0SUNXk %} ***This*** [***Random ISBN***](https://www.lambdatest.com/free-online-tools/random-isbn-generator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools) ***Generator is a free easy-to-use tool to generate random and unique ISBN numbers. Generate the most accurate and verified ISBNs instantly.*** Tune in to our XP webinar featuring Steve, who unravels the intricacies of shift-left and QDD, where he shares practical strategies and real-world experiences, offering a deep dive into how methodologies accelerate [quality assurance](https://www.lambdatest.com/learning-hub/quality-assurance?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=learning_hub) in Agile environments. In this blog, we’ll embark on a journey through the dynamic realm of shift-left, exploring how it accelerates the QA process and fosters a culture of continuous improvement in Agile environments. So, without any further ado, let’s dive deep in. 
### About LambdaTest XP Series Webinar & Speaker LambdaTest Experience (XP) Series includes recorded webinars/podcasts and fireside chats featuring renowned industry experts and business leaders in the testing & QA ecosystem. In this XP Series webinar, we had a distinguished speaker, [Steve Caprara](https://www.linkedin.com/in/scaprara/), Director of QA at [Plexus Worldwide](https://www.linkedin.com/company/plexus-worldwide-inc-/). With nearly two decades of experience in the Information Technology domain, Steve has worn multiple hats, from hands-on testing to shaping automation frameworks. His trailblazing work includes the concept of quality-driven development (QDD), where he has seamlessly woven quality into the fabric of software development, influencing how organizations approach building top-notch software. ![image](https://cdn-images-1.medium.com/max/720/0*-WQTd1T_oliZIDqR.png) ***Need a barcode for your products? Create high-quality custom barcodes using our online*** [***barcode generator***](https://www.lambdatest.com/free-online-tools/barcode-generator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools)***. Try our barcode generator tool and get one in seconds.*** ### Decoding Shift Left: QA’s Early Advantage In the context of QA, Steve presents the idea of “[shift-left](https://www.lambdatest.com/learning-hub/shift-left-testing)” and discusses how it affects organizations at various phases. Steve stressed the difficulties seasoned businesses and startups encounter while implementing this strategy. Steve uses an illustration of the complex decision-making required in software development to reflect on how organizations often concentrate on the final product without exploring the nuances of its development. 
Steve agrees with the widespread belief that “if it ain’t broke, don’t fix it.” Still, he also emphasizes the need for a shift-left mindset, especially in organizations where quality assurance is frequently an afterthought. While acknowledging QA’s importance in identifying issues before they affect customers, he also highlights the possible block it may cause in the development cycle. ***Want to create stunning bitmap images? Try our free online*** [***Random Bitmap***](https://www.lambdatest.com/free-online-tools/random-bitmap-generator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools) ***Generator tool to generate bitmap images for your website or project. Give it a try today.*** ### Implementing Shift Left: Pro Tips for Agile Efficiency Steve delves into the shift-left approach’s implementation, providing insights into the challenges and changes required in traditional organizational structures. He describes the commonly used waterfall approach and the practice of throwing development work over the wall to QA, referring to it as a data-driven development cycle. ![image](https://cdn-images-1.medium.com/max/720/0*GWA8GnINd3_gICMS.png) Steve emphasizes the situation in which QA is tasked with testing code and frequently has to return it with errors, bugs, and defects. He recognizes the need to change this mindset and discusses the difficulties of moving from siloed structures to cross-functional teams. He emphasizes the fear of change and the importance of leaders explaining the “why” behind the shift and promoting skill development in multiple areas. ***Need to test your application’s handling of Unicode characters? Generate random UTF8-encoded text with our free online*** [***Random UTF8***](https://www.lambdatest.com/free-online-tools/random-utf8-generator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools) ***Generator Tool! 
Try it Now Today.*** ### Cross-Functional Team Challenges: A Deep Dive Steve discusses the difficulties in creating cross-functional teams, talking about anxiety, resistance, and challenges that come with dismantling silos and switching from solo efforts to cooperative team efforts. Steve emphasizes that the team must take on challenges instead of depending on the person. ![image](https://cdn-images-1.medium.com/max/720/0*kH_WQmFDL_0I8jdd.png) Steve promotes a paradigm that fosters a mindset of collective achievement by showing how team success leads to opportunities for individual growth. He also highlights a famous book, “[The Phoenix Project](https://www.audible.in/pd/The-Phoenix-Project-Audiobook/B079TMP9K8),” as an essential source demonstrating how cross-functional teams affect DevOps procedures and organizational success. ### Quality Driven Development (QDD): Redefining Excellence Quality-Driven Development (QDD) is a paradigm-shifting approach that sets itself apart from well-established methodologies such as [Behavior-Driven Development (BDD)](https://www.lambdatest.com/blog/behaviour-driven-development-by-selenium-testing-with-gherkin/?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=blog) and [Test-Driven Development (TDD](https://www.lambdatest.com/learning-hub/test-driven-development?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=learning_hub)) as it covers the organization’s entire testing approach, in contrast to these practices. Steve highlighted that when it comes to shift-left, there are many benefits to starting the development cycle with quality, which minimizes hot fixing in production, speeds up releases, lowers code churn, and eases release issues. 
![image](https://cdn-images-1.medium.com/max/720/0*o110HlSpwUQuGD8t.png) Although the initial pace may be slower, the upfront investment in quality pays off in the long run regarding speed, leverage, agility, and release quality fidelity. This is a great place to apply Dr. Stephen R. Covey’s “Beginning with the end in mind” principle, which emphasizes the importance of producing high-quality deliverables from the very beginning of the project. To make QDD effective, businesses must communicate with upper management regularly. Since the success of QDD depends on breaking down silos and considering QA as an essential component of the development team rather than as a stand-alone organization, therefore, QDD strategy encourages teams to work together, be self-assured, and have a shared sense of ownership. ***Generate a secure Hashed Message Authentication Code (HMAC) using our online*** [***HMAC generator***](https://www.lambdatest.com/free-online-tools/hash-mac-generator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools) ***to ensure the integrity and authenticity of your digital content.*** ### Automated Testing Explained: Efficiency and Beyond Steve highlighted the difficulties and myths surrounding [automated testing](https://www.lambdatest.com/learning-hub/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=learning_hub) in the shift-left methodology by highlighting the drawbacks of a daily release cycle, releases, and hotfixes. However, the need for infrastructure, pipelines, and DevOps is essential to implement shift-left effectively, coordinating for release structures. Therefore, extensive testing is required, and the team has to opt in between automated and [manual testing](https://www.lambdatest.com/learning-hub/manual-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=learning_hub). 
![image](https://cdn-images-1.medium.com/max/720/0*RIXhyJxjEo6RW90Y.png) ### Feedback Loops: Balancing Agility with Precision Steve emphasizes the importance of moving testing to the left of the development cycle to improve feedback loops and guarantee better code quality. He draws attention to the problems with delayed bug detection that arise when testing is postponed and how this affects developers. Therefore, QA participation is crucial in providing developers with prompt feedback upon bug discovery and prioritizing bug prevention. ![image](https://cdn-images-1.medium.com/max/720/0*AzYPA9zxkuNPzFzN.png) Steve suggested incorporating developers in testing to use [automation testing frameworks](https://www.lambdatest.com/blog/automation-testing-frameworks/?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=blog) and support [parallel testing](https://www.lambdatest.com/blog/what-is-parallel-testing-and-why-to-adopt-it/?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=blog) alongside manual testing. Based on his experience, he recommends establishing a collaborative, cross-functional setting where developers can contribute tests. Applying shift-left theory, he argues that [dynamic testing](https://www.lambdatest.com/learning-hub/dynamic-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=learning_hub) through APIs is preferable to static as it allows thorough validations. Therefore, adopting a shift-left approach not only improves the overall development cycle but also helps in accelerating testing and consistency. ***Need a way to verify your data’s integrity? Our*** [***CRC32 Hash Calculator***](https://www.lambdatest.com/free-online-tools/crc32-hash-calculator?utm_source=devto&utm_medium=organic&utm_campaign=jan_10&utm_term=ap&utm_content=free_online_tools) ***generates strong checksums. Keep your information secure and tamper-proof. 
Get started today.*** ### Q&A Session **Q: According to you, when is an ideal time for teams to initiate a shift-left approach in their automation journey?** **Steve:** **YESTERDAY!!** 😆 In my view, waiting for the perfect time to shift left in automation is futile. Many organizations struggle to find the right moment, but there's always a new project or roadblock. Inception is the ideal time to foster a mindset shift. Despite pushback, taking the initial step is crucial for lasting change. It may be gradual, but achieving 1% improvement daily leads to significant progress. For those new to automation, start with tools like Selenium IDE or LambdaTest for cross-browser testing. Initiate small steps, record successes, and propose a POC to leadership to secure dedicated time. Act now: small initiatives yield substantial improvements. Don't wait for the perfect moment; start your automation journey now. **Q: What role do manual testers play in shift-left, and how can they contribute beyond automation to ensure comprehensive testing and quality?** **Steve:** Manual testers should prioritize developing their automation skills for career growth, acknowledging automation's importance but not treating it as the sole path. Despite transitioning a manual team to automation, manual testers remain indispensable. They bring a unique perspective to the testing lifecycle, excelling in identifying nuanced issues and unforeseen boundary conditions. Their distinct thinking adds value, as showcased by a tester finding a potential release blocker through an unconventional scenario. The good-natured jesting about skilled manual testers at an organization reflects their recognized contributions. While automated team members focus on integration issues and regression breaks, adept manual testers excel in uncovering functional boundary conditions. Manual testers should embrace their role, demonstrating their ability to think differently and surpass expectations. 
Their exceptional findings showcase their vital role in testing, proving their unique approach in the evolving automation landscape. ### Closing Remarks: Hope It Sparked Ideas! As we fall curtains on this journey of shift-left and QDD, we give our special thanks to Steve for sharing his invaluable perspectives. We hope this webinar has been a food for thought in implementing shift-left methodologies in your development and testing cycle. But this adventure with us hasn’t paused yet; we’ll continue with more such exciting journeys, so stay tuned with us for upcoming episodes of our [LambdaTest Experience (XP) Series](https://www.lambdatest.com/xp-series) webinars! Until then, keep innovating and testing till more revelations for you! #LambdaTestYourApps❤️
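As a footnote to the shift-left ideas discussed above, here is a minimal, hypothetical sketch (not from the webinar) of what developer-written, shift-left-style testing looks like in practice: the validation logic and its checks live side by side, so a defect surfaces on the developer's machine rather than in a later QA pass.

```python
# Illustrative only: the function and its tests sit together, so a
# regression is caught on every commit, long before a separate QA phase.

def validate_signup(payload: dict) -> list:
    """Return a list of validation errors for a signup payload."""
    errors = []
    if "@" not in payload.get("email", ""):
        errors.append("invalid email")
    if len(payload.get("password", "")) < 8:
        errors.append("password too short")
    return errors

# Developer-written checks, run as part of the commit pipeline (shift-left).
assert validate_signup({"email": "a@b.com", "password": "s3cretpass"}) == []
assert validate_signup({"email": "nope", "password": "pw"}) == [
    "invalid email",
    "password too short",
]
```

The same idea scales up to API-level tests: exercising endpoints with valid and invalid payloads early in the cycle, rather than leaving all validation to a downstream testing stage.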
yashbansal651
1,723,388
Growing Up with Disney: The Influence on Child Development
For over a century, children have grown up in Disney's charming world, where creativity knows no...
0
2024-01-10T15:38:43
https://dev.to/huxley133/growing-up-with-disney-the-influence-on-child-development-297b
media, learning, marketing
For over a century, children have grown up in Disney's charming world, where creativity knows no limitations. From the inception of the early Disney Brothers studio in October 1923 to its current status as a media giant, Disney's impact on young minds is indisputable. Some critics, as highlighted by media studies researcher Christopher Bell, have expressed concern that kids' exposure to Disney's content is inevitable given Disney's dominance of the American media landscape (approximately one-sixth of the industry at large), and have raised concerns about the values depicted in its films; others see a different side to the issue. Disney's ambition to become a worldwide leader in entertainment and information has spurred controversy regarding its massive influence on child development. Some applaud its capacity to elicit creativity and imagination, while others wonder whether its core financial motives are in the best interests of young consumers. In this inquiry, we look at Disney's varied effect on children as they embark on a journey of growth and learning through the lens of animated storytelling. ## Disney’s Roots and Early Beginnings The Walt Disney Company, commonly known as Disney, is a hallmark of entertainment and media, with origins dating back to the early twentieth century. Founded in 1923 by Walt Disney and Roy O. Disney, it began as a tiny animation firm that introduced the world to legendary characters such as Mickey Mouse. Over time, Disney broadened its horizons to include film production, television, theme parks, and other projects. The company's rise from a small animation studio to a multimedia giant is a story of innovation, creativity, and strategic growth. According to public opinion and scholarly literature, Disney is often thought of as a cultural and media giant that influences entertainment throughout the globe. 
However, the media giant has faced backlash and academic scrutiny, with critics stressing notions such as deviant cultural representation and the commercialization of childhood; Disney's massive merchandising operations are viewed as a means of instilling consumerist values in young minds. This is sometimes associated with a wider critique of Disney's business policies, especially its dominance in the media landscape, which some see as a threat to media diversity and inventiveness. Despite these disputes, Disney continues to attract audiences throughout the world, exerting a tremendous influence on the entertainment industry and popular culture. ## The Link Between Disney and Child Development The link between child development and Disney media consumption is a complex subject explored by several scientific investigations and psychological theories. On the bright side, Disney stories frequently encourage themes of resilience, friendship, and courage, which may instill core principles in young viewers. According to a few studies, exposure to Disney stories can help kids develop emotionally, providing a framework for understanding and navigating complicated emotions and social situations. However, there are certain drawbacks to consider. According to studies, Disney's representation of body image, particularly in its classic cartoons, can influence children's conceptions of ideal body shapes. These frequently inaccurate depictions may contribute to body dissatisfaction and lower self-esteem in young generations. This is especially noticeable in the representation of female characters, who are typically represented with exaggerated, unrealistic physical characteristics. Another area of concern is the perpetuation of gender stereotypes. Despite current efforts to diversify character roles, classical Disney material frequently perpetuates traditional gender norms, which may influence children's comprehension and acceptance of these roles. 
This element can limit children's self-expression and impede the development of a more appropriate understanding of gender roles. **Social Learning Theory** (Albert Bandura): Children acquire behaviors, standards, and values by observing and copying those they see in the media. Disney movies, with their vibrant characters and captivating tales, can have profound effects on children's behaviors and perspectives. Children may emulate the courage or generosity of Disney heroes or heroines, but they may also absorb more subtle messages about gender roles or cultural norms and standards. **Cultivation Theory** (George Gerbner): According to cultivation theory, prolonged exposure to media material affects viewers' conceptions of reality. In the case of Disney cartoons, this might imply that children who watch these films on a regular basis may develop distorted notions of romance, beauty standards, and good versus evil as a result of the repeated themes and depictions in these stories. **Objectification Theory** (Fredrickson and Roberts): This theory is particularly relevant to body image. It implies that the presentation of unrealistic body ideals in media, even animated films, might lead to self-objectification, particularly in young females. This can have an effect on both body image and self-esteem. ## The Bright Side of Disney Disney's influence on children's cognitive development is broad, going far beyond its screen-based media. Its storytelling, which features complex plots and a broad cast of characters, is vital for promoting language development and improving problem-solving capacities. As children engage with these stories, they gain an awareness of narrative structure, cause and effect, and character development. 
This immersion in storytelling not only improves memory retention by having children retain and retell narrative details, but it also promotes creative thinking and imagination, both of which are necessary components of cognitive flexibility. The depiction of ethical quandaries and varied views in Disney stories can also aid in the development of critical thinking and empathy. In addition to these cognitive benefits, Disney's creative and interactive kits such as “[Alice In Wonderland Paint By Numbers](https://numeralpaint.com/paint-by-number/alice-in-wonderland/)”, “[Mickey Mouse Paint By Numbers](https://numeralpaint.com/paint-by-number/mickey-mouse/)” and “[Tinkerbell Paint By Numbers](https://numeralpaint.com/paint-by-number/tinkerbell/)” offer unique educational advantages blended with the enchantment of engaging characters, boosting artistic tendencies in kids as they learn to manipulate tools and colors. Such artistic activities enhance the development of hand-eye coordination and spatial awareness. **In conclusion**, Disney's beautiful world weaves its charm into the minds of kids, artfully blending learning and relaxation. The soft strokes required to bring a [Frozen paint by numbers](https://numeralpaint.com/paint-by-number/frozen/) scene to life exemplify this seamless integration, signifying unlimited possibilities in which education and creativity continue to coexist in pleasant harmony.
huxley133
1,723,486
Will AI replace us?
You've just started out in the world of programming and, at the same time as you read news about an abundance of...
0
2024-01-10T20:44:42
https://dev.to/giovannibayerlein/ia-vai-nos-substituir-4nc1
<p>You've just started out in the world of programming and, at the same time as you read news about an abundance of job openings, you also see well-established professionals opening discussions about an uncertain and grim future in which AIs will take all the jobs.</p> <p>You then start researching the subject and come across plenty of material saying that no, programmers will not be replaced, and that everything is safe as long as they adapt and get better at using these AI tools (such as ChatGPT, Copilot, etc.).</p> That is what we see in the conclusion of the article https://exame.com/bussola/a-inteligencia-artificial-vai-substituir-os-desenvolvedores :

> For all these reasons, I can clearly state that AI should not take devs' jobs in the short term. However, the developer who knows how to use it in their favor, whether to complete a piece of code or to answer a specific question, will certainly stand out from those who still don't use such tools.

<p>I don't want to generalize, but these analyses made by the big names in technology most likely missed a very important element: a class perspective.</p> <p>When we bring in a class perspective, we notice that this future is grimmer than they are leading you to believe, so in this text I want to offer a more pessimistic analysis and call you, dear reader, to the fight.</p> --- ### Increased productivity _Do AIs increase professionals' productivity?_ Yes, that's a fact. There are articles on the subject: https://www.zendesk.com.br/blog/como-ia-aumenta-produtividade-funcionario

> AI is a multidisciplinary field that encompasses a variety of technologies: machine learning, natural language processing, computer vision, and more. With them, it is possible to create automation, data analysis, and communication tools that optimize work, increasing employee satisfaction and productivity.

https://exame.com/inteligencia-artificial/inteligencia-artificial-avanca-entre-empresas-e-aumenta-produtividade-mas-muda-forma-de-trabalhar/

> [...] the ever-broader adoption of AI should be the bet for a leap in productivity.

_But that's a good thing, right?_ Yes, it's great. _So what's the problem?_ <p>Well, let's consider the following scenario:</p> You work with 2 other people and the 3 of you have similar productivity. You start using ChatGPT to automate some tasks, review your own work, improve your code, speed up code review, and so on. Because of that, you end up becoming a professional with more productivity than the other 2 combined. <p>In this scenario, what do you think will happen?</p>

1. The company keeps all 3 of you
2. The company lays off the other 2 and keeps only you

<p>Well, to answer this question we can look at history and see what happened, for example, during the Third Industrial Revolution.</p> At https://brasilescola.uol.com.br/geografia/trabalho-na-terceira-revolucao-industrial.htm, we find:

> There was an intense mechanization of rural areas and the development of agricultural techniques and machinery that caused massive **unemployment** in that sector, which contributed to intensifying the rural exodus, that is, a mass migration of the population from the countryside to the cities.

_Wow!_ _So you're telling me this can cause unemployment?_ <p>That's right. Let's talk about it in the next section.</p> --- ### Job scarcity _So I shouldn't increase my productivity, and then nobody would get laid off?_ <p>Well. First, notice that in the example above I cast you as the person who keeps their job, but who guarantees you won't be the one laid off?</p> <p>Second: yes, we will see a rise in unemployment, but that is not the fault of increased productivity. The blame lies with the profit logic that drives companies.</p> <p>If a company can make the same profit while paying fewer salaries, it will.</p> _Don't worry about me, though. I study every day, I keep up with the latest in technology, I have a good network. I find it hard to believe I'd end up unemployed_ <p>Look, I might even agree with that premise, but in what kind of job would you end up?</p> --- ### Growth of the industrial reserve army and of exploitation <p>Have you ever talked to someone who works as a supermarket cashier or as a salesperson at a mall?</p> <p>These workers are among the most exploited: long hours, no rest, tiny salaries, labor rights routinely denied, enormous competition.</p> _But couldn't these people negotiate better conditions?_ <p>Technically, nothing stops them.</p> _So why don't they?_ <p>If these workers are unhappy and complain about their conditions, they get fired, because there are thousands of other unemployed people wanting to work who, out of desperation, would accept those inhumane working conditions.</p> <p>This is called the industrial reserve army.</p> <p>We can summarize this concept as: people so desperate for a job that they would accept literally anything.</p> <p>The technology field is heading in the same direction. What will happen to the people who get laid off for not having adapted to the new dynamic of working with AIs?</p> <p>That's right: they will join the "reserve".</p> <p>And what will happen to you who, by luck, weren't laid off?</p> <p>That's right: you will have to accept any working condition imposed on you, be it a pay cut, longer hours, or cuts to benefits.</p> _But what if I complain to my bosses?_ <p>No problem. You will easily be fired, since there will be no shortage of labor.</p> _So do we need to abolish the use of AIs?_ --- ### Are AIs the problem? <p>Pointing to AIs as the problem is neo-Luddism.</p> The page https://www.historiadomundo.com.br/idade-contemporanea/ludismo.htm explains:

> Luddism was a movement of workers who united and revolted against machines at the beginning of the Industrial Revolution. The organized action of the Luddites consisted of invading a textile mill and destroying the machines that produced the goods. The movement began in Nottingham and spread throughout England between 1811 and 1816.

<p>During the Industrial Revolution, the problem was not the machines. Just as today the problem is not the AIs.</p> <p>The machines of the Industrial Revolution and today's AIs serve the same purpose: helping workers carry out their tasks. Yet we saw that the result was unemployment and terrible working conditions.</p> <p>The problem is the capitalist mode of production and this logic of profit above everything else.</p> --- ### Solution _So how can we enjoy AIs without causing a labor crisis?_ <p>Very simple: socialize the access to, the construction of, and the ownership of these technologies.</p> <p>In a socialist society, technology is used so that workers work less, since production increases with these technologies, and without reductions in pay, benefits, or anything of the sort.</p> --- <p>It is important that we raise this debate, since the future of our profession is at stake.</p> <p>The advance of AI under capitalist logic will affect every profession, not just IT.</p> <p>I hope I have been clear about the ideas I wanted to convey, and I also hope this serves as fuel for us to organize as a class.</p> `print("programadores do Brasil, uni-vos!")`
giovannibayerlein
1,723,527
Ibuprofeno.py💊| #48: Explain this Python code
Explain this Python code Difficulty: Basic ## Challenge #48 lista_1 =...
25,824
2024-02-07T14:23:40
https://dev.to/duxtech/ibuprofenopy-48-explica-este-codigo-python-20h4
python, spanish, learning, beginners
## **<center>Explain this Python code</center>** #### <center>**Difficulty:** <mark>Basic</mark></center> ```py ## Challenge #48 lista_1 = ["item1", "item2"] lista_2 = [] lista_3 = ["item3", "item4"] print(lista_1 + lista_2 + lista_3) ``` 👉 **A.** `['item1', 'item2', [], 'item3', 'item4']` 👉 **B.** `['item1', 'item2', [''], 'item3', 'item4']` 👉 **C.** `['item1', 'item2', 'item3', 'item4']` 👉 **D.** `['item1', 'item2', '', 'item3', 'item4']` --- The answer is in the first comment.
duxtech
1,723,592
Frost warning with Home Assistant 🥶
Do you have a garden and want to be notified when winter is coming? I'll show you how easy it is...
0
2024-01-10T19:02:58
https://blog.disane.dev/en/frost-warning-with-home-assistant/
![](https://blog.disane.dev/content/images/2024/01/frost_warning-with-home-assistant_banner.jpeg)Do you have a garden and want to be notified when winter is coming? I'll show you how easy it is 🏡 --- Every garden owner will know the problem: _Winter is coming._ ![](https://blog.disane.dev/content/images/2023/12/image.gif) There are many things in the garden that should be switched off or prepared before the first frost; otherwise they will be damaged, so they need to be properly winterized. This also applies to my garden irrigation and everything else. ## Preparations 🛠️ In order to know when the first frost is coming, you need to prepare a few things. This includes creating a sensor that queries precisely this data and expresses it as a `Boolean` value, so the sensor can take the value `true` or `false`. To do this, we create a new template sensor in our `configuration.yaml`. The template sensor queries the data of the Weather entity and calculates whether the temperature at the specified location falls below 0°C and, if so, for how long. In addition, the attributes also store what the lowest temperature is and when (i.e. on which date) it occurs. 
Please remember to adapt the `CHANGE THIS` to your circumstances: ```yaml - platform: template sensors: frost_warning: friendly_name: "frost warning" unique_id: "39a5e512-b92f-449c-b681" icon_template: "mdi:snowflake-alert" value_template: >- {% if states('sensor.min_temp_forecast') | float < 0 %} on {% else %} off {% endif %} attribute_templates: frostdays: >- {% set ns = namespace(frostdays=0) %} {%- for fc in states.weather.CHANGE THIS.attributes.forecast -%} {%- if fc.templow < 0 -%} {%- set ns.frostdays = ns.frostdays + 1 -%} {%- endif -%} {%- endfor -%} {{ns.frostdays}} first_frost_date: >- {% set ns = namespace(date=0) %} {%- for fc in states.weather.CHANGE THIS.attributes.forecast -%} {%- if fc.templow < 0 and ns.date == 0 -%} {%- set ns.date = fc.datetime -%} {%- endif -%} {%- endfor -%} {{ns.date}} date_low: "{{state_attr('sensor.min_temp_forecast', 'datetime')}}" temp_low: "{{states('sensor.min_temp_forecast')}}" forecastdays: "{{state_attr('sensor.min_temp_forecast', 'forecastdays')}}" ``` In order for the forecast to be fetched correctly, the `Weather` entity must be configured. You can easily do this with the following instructions: [Weather![Preview image](https://www.home-assistant.io/images/default-social.png)Instructions on how to setup your Weather platforms with Home Assistant.](https://www.home-assistant.io/integrations/weather/) ![](https://blog.disane.dev/content/images/2023/12/image-1.gif) ## Winter is coming, the automation ☃️ With the created sensor, we can therefore determine whether and, above all, when exactly and for how long it will freeze. We can then be notified or other things can be triggered automatically. You can use the following automation for this: ```yaml alias: "Frost Warning: Send Notification" description: >- If frost is forecast in the weather forecast, then send out a notification so that the water is turned off. 
trigger: - platform: state entity_id: sensor.frost_warning from: "off" to: "on" condition: [] action: - service: notify.notify data: title: Winter is coming! message: >- From {{ as_timestamp(state_attr("sensor.frost_warning", "first_frost_date")) | timestamp_custom("%a, %d.%m.%y", True) }} it should freeze with temperatures up to {{ state_attr('sensor.frost_warning', 'temp_low') }}°C on {{state_attr('sensor.frost_warning', 'frostdays') }} days in the coming {{state_attr('sensor.frost_warning', 'forecastdays') }} days. data: priority: 1 mode: single ``` And from now on you will be notified when frost sets in 😊 Jon Snow would be proud of you (or maybe not, because he no longer has to stand guard) 👏 ![](https://blog.disane.dev/content/images/2023/12/image-2.gif) --- If you like my posts, it would be nice if you follow my [Blog](https://blog.disane.dev) for more tech stuff.
disane
1,723,817
Tech Talks NYC ~ w/ Cloudflare & Jam.dev
Cloudflare and Jam are hosting Tech Talks in New York! Get together 80+ engineers to learn from each...
0
2024-01-11T00:54:56
https://dev.to/ivanhapaz/tech-talks-nyc-w-cloudflare-jamdev-291f
eventsinyourcity, techtalks
Cloudflare and Jam are hosting Tech Talks in New York! Get together with 80+ engineers to learn from each other and discover new ways of doing things. Expect lightning talks about engineering at scale - no demos or sales pitches, just technical talks. Plus food, drinks and a good time! 🍕🍻 Register [here](https://lu.ma/techtalksNY) Hope to see some of you there :)
ivanhapaz
1,723,957
2024 Ultimate Guide to JavaScript Interview Questions and Answers
JavaScript is a dynamic language, essential in web development and increasingly important in other...
0
2024-01-11T08:15:36
https://www.webdevstory.com/javascript-interview-questions/
javascript, webdev, frontend, interview
JavaScript is a dynamic language, essential in web development and increasingly important in other areas of software development. This guide compiles a comprehensive list of JavaScript interview questions, suitable for beginners to advanced developers, to help you prepare effectively for your next interview. ### 1\. Fundamental Concepts in JavaScript ![JavaScript coding session essentials with coffee and laptop](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/37p2gqupxjr6tvk2x1zd.png) When dealing with JavaScript interview questions on basic concepts, it’s essential to understand the core principles of the language. Each question in this part will help you learn the basics of JavaScript programming and get ready for common scenarios. ***Explain the difference between let, const, and var.*** * `var` is function-scoped and has been the way to declare variables in JavaScript for a long time. We can re-declare and update it. * `let` is block-scoped, can be updated but not re-declared. * `const` is also block-scoped, but once assigned, we cannot change its value. ***Explain the use of the spread operator in JavaScript.*** The spread operator `(...)` allows an iterable, like an array, to be expanded in places where zero or more arguments or elements are expected. ***Demonstrate function declaration and expression methods in JavaScript.*** * Declaration: `function myFunction() { /* code */ }` * Expression: `const myFunction = function() { /* code */ };` ***What are template literals in JavaScript, and how do they improve string handling?*** Template literals provide a concise and readable way to create strings and can include expressions, multi-line strings, and string interpolation. Use backticks (`` ` ``) to define strings, allowing embedded expressions and multi-line strings for better readability. ***Provide an example of nested template literals and their use case.*** Nested template literals allow embedding a template literal within another. 
For example: ```javascript const user = {name: 'Alice', age: 25}; const greeting = `Hello, ${user.name}. Next year, you'll be ${`${user.age + 1}`}`; ``` We can use this feature to create complex strings when we need conditional logic or additional calculations within an embedded expression. ***List and explain the data types in JavaScript.*** * `Number:` Numeric values * `String:` Textual data * `Boolean:` Logical values, `true` or `false` * `Object:` Collections of properties * `Null:` Represents a non-existent or empty value * `Undefined:` A declared but unassigned variable ***What are the differences between null and undefined in JavaScript?*** `Null` is an assignment value showing that a variable points to no object. Declaring a variable without assigning it a value means it's `undefined`. ***Explain the difference between primitive and reference types in JavaScript.*** Primitive types (like numbers and strings) store values directly, whereas reference types (like objects and arrays) store references to memory locations. ***Write an if-else statement to check if a number is even or odd.*** ```javascript let number = 4; if (number % 2 === 0) { console.log('Even'); } else { console.log('Odd'); } ``` ***Describe the purpose and usage of the switch statement in JavaScript.*** We can use switch statements for complex conditionals with multiple cases, providing a more readable alternative to multiple if-else statements. ***How does JavaScript handle truthy and falsy values in conditional statements?*** In JavaScript, values like `0`, `null`, `undefined`, `NaN`, `"" (empty string)`, and `false` are `falsy` and will not execute in a conditional. JavaScript considers all other values as truthy. ***Describe the concept of variable scope.*** * **Scope:** Determines the visibility or accessibility of variables during runtime. * **Global Scope:** Variables declared outside functions are accessible anywhere. * **Local Scope:** Variables declared within a function are accessible only inside it. ### 2\. Advanced JavaScript Interview Questions Explored 
![Toy robots beside 'CODING' blocks illustrating JavaScript learning](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t4pxrte3i3p9mhmmcgyx.png) As you progress to more advanced JavaScript interview questions, you need to understand the language’s deeper concepts. This section aims to prepare you for these complex challenges. ***Explain callback hell and its disadvantages.*** We use `callback hell` to describe a situation with many nested callbacks in our code. Think of it as a layer cake of functions where each layer depends on the one above it. This makes our code look like a pyramid, often called the `pyramid of doom`. The downside? It makes our code hard to read and even harder to debug. Plus, each extra layer adds complexity, making future changes a headache. ***Explain the concept of Promises in JavaScript and their advantages over traditional callback functions.*** Promises represent the eventual completion (or failure) of an asynchronous operation and its resulting value. They provide better error handling and avoid callback hell by allowing chaining. ***How is a Promise different from a callback, and what are the three states of a Promise?*** Promises improve upon callbacks by offering better code readability and easier error handling. A Promise can be in one of three states: pending, fulfilled, or rejected. ***What are the benefits of async/await?*** `Async/Await` offers a cleaner, more readable way to write asynchronous code compared to Promises. ***Describe the role of the Event Loop in JavaScript’s asynchronous behaviour.*** The Event Loop handles asynchronous callbacks. It’s a continually running process that checks if the call stack is empty and then executes callbacks from the event queue. ***Explain map, filter, and reduce methods.*** * `map:` Transforms each item in an array, returning a new array. * `filter:` Creates a new array with elements that pass a test. 
* `reduce:` Reduces an array to a single value. ***How does the forEach method differ from the map method in JavaScript?*** `forEach` executes a provided function once for each array element but does not return a new array, unlike map, which creates a new array with the results of calling a function on every element. ***Explain the use of some and every method in JavaScript.*** `some` tests whether at least one element in the array passes the provided function, returning a boolean. `every` checks if all elements in the array pass the test. ***What is a closure, and how can it be used to create private data?*** A closure is a function that remembers its outer scope. This capability makes closures invaluable for data hiding and encapsulation. ***How does this keyword work in JavaScript?*** `this` refers to the object it belongs to, and its value can change based on the context. Methods like `bind()`, `call()`, or `apply()` explicitly set its value. ***Provide an example of a closure in a practical scenario.*** A common use of closures is event handling. A closure allows access to an outer function’s scope from an inner function, maintaining the state in between events. ***In what scenarios can the value of this be unpredictable in JavaScript?*** In event handlers, asynchronous functions, or when assigning functions to variables, unintended binding of `this` to a different object may occur. ***Explain the JavaScript event loop and its role in asynchronous operations.*** The event loop facilitates asynchronous execution in JavaScript by managing a queue of messages and executing them sequentially. ***How does JavaScript handle asynchronous operations within its single-threaded model?*** JavaScript uses the event loop to manage asynchronous operations. The call stack executes the synchronous code, while the event queue holds asynchronous callbacks. 
The event loop checks the call stack and, if empty, transfers the next function from the event queue to the call stack. ***Explain the difference between microtasks and macrotasks in JavaScript’s event loop.*** JavaScript processes microtasks, like Promise resolutions, after the current script and before macrotasks. Macrotasks (like `setTimeout`, `setInterval`, and `setImmediate`) are processed at the end of each run of the event loop. ### 3\. Prototypal Inheritance and ES6 Features ![Fountain pen writing feature list for JavaScript interview questions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/l06xr2eesteyhnjcr1dc.png) Introducing [ES6](https://www.webdevstory.com/mastering-es6-for-react/) brought significant changes to JavaScript. This section’s JavaScript interview questions will help you show your up-to-date knowledge of these modern features. ***Explain prototypal inheritance in JavaScript.*** In JavaScript, objects can inherit properties from other objects through prototypal inheritance. This sets up a chain of inheritance that diverges from class-based inheritance in languages like Java. ***Describe destructuring in JavaScript.*** Destructuring lets you unpack values from arrays or objects into distinct variables. It makes the code cleaner and easier to understand. ***Explain the difference between classical inheritance and prototypal inheritance.*** Classical inheritance (found in languages like Java) uses classes and creates a hierarchy of parent and child classes. Prototypal inheritance, which JavaScript uses, involves objects inheriting directly from other objects. ***How do you create a new object that inherits from an existing object in JavaScript?*** One way is to use `Object.create()`. For instance, `const newObj = Object.create(existingObj)` creates a new object ( `newObj`) that inherits from `existingObj`. 
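The `Object.create()` answer above can be sketched concretely; the `animal` and `dog` names here are illustrative, not from the article:

```javascript
// Prototypal inheritance: objects inherit directly from other objects.
const animal = {
  describe() {
    return `${this.name} is an animal`;
  },
};

// `dog` inherits `describe` from `animal` via its prototype chain.
const dog = Object.create(animal);
dog.name = 'Rex';

console.log(dog.describe()); // "Rex is an animal" — inherited method, own `name`
console.log(Object.getPrototypeOf(dog) === animal); // true
```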
***What are arrow functions in JavaScript, and how do they differ from traditional functions?*** Arrow functions provide a shorter syntax compared to traditional functions and do not have their own `this`, `arguments`, `super`, or `new.target`. ***Explain the concept of modules in ES6. How do they differ from older JavaScript libraries?*** ES6 modules allow for modular programming by exporting and importing values from/to different files. Unlike older module systems, ES6 modules undergo static analysis, leading to hoisted imports and exports. ***How do arrow functions differ from traditional functions, particularly in handling this context?*** Arrow functions provide a concise syntax and lexically bind the this value, unlike traditional functions. ***Compare Observables with Promises and Generators for handling asynchronous operations.*** * `Promises:` Represent a single eventual value that settles exactly once, as either fulfilled or rejected. * `Observables:` Represent multiple asynchronous data streams and offer more control with operators like map and filter. * `Generators:` Functions that can be paused and resumed, useful for managing asynchronous flows in a more synchronous manner. ### 4\. Error Handling and Event Delegation in JavaScript ![Close-up of a web developer's screen highlighting error detection during JavaScript interview preparation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uyimfvgp2iztqnyruud3.png) Robust error handling is pivotal in application development. Each JavaScript interview question in this section assesses your ability to write resilient and reliable code. ***How does try-catch help in handling exceptions?*** `try-catch:` Tests a block of code for errors, handling exceptions gracefully. ***Why is event delegation beneficial?*** `Event Delegation:` Sets up event listeners on parent elements to manage events on child elements efficiently. 
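A minimal sketch of the try-catch behaviour described above — the `parseConfig` name and JSON inputs are hypothetical:

```javascript
// try-catch: test a block of code for errors and handle exceptions gracefully.
function parseConfig(json) {
  try {
    return JSON.parse(json); // may throw a SyntaxError on bad input
  } catch (err) {
    // Handle the exception instead of letting it crash the script.
    return { error: err.name };
  }
}

console.log(parseConfig('{"debug": true}')); // { debug: true }
console.log(parseConfig('not json'));        // { error: 'SyntaxError' }
```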
***What are some best practices for error handling in JavaScript?*** Best practices include using `try...catch` blocks, handling specific errors, cleaning up resources in a `finally` block, and avoiding empty catch blocks. ***Explain the concept of event bubbling and capturing in JavaScript.*** In event bubbling, events start from the deepest element and propagate upwards, while in event capturing, events are caught as they descend in the DOM hierarchy. You can set the useCapture parameter in `addEventListener` to true for capturing. ***How does the finally block work in a try-catch statement?*** The `finally` block executes after the `try` and `catch` blocks but before statements following the `try-catch`. It executes regardless of whether an exception was thrown or caught, making it suitable for cleaning up resources or code that must run after the `try-catch`, regardless of the outcome. ***Can you explain the difference between a SyntaxError and a ReferenceError in JavaScript?*** A `SyntaxError` is thrown when there's a parsing error in the code (e.g., a missing bracket). A `ReferenceError` occurs when a non-existent variable is referenced (e.g., accessing a variable that hasn't been declared). ***Why is event delegation particularly useful in handling dynamic content in JavaScript?*** Event delegation is beneficial for dynamic content (like content loaded via AJAX) because it allows you to bind event listeners to parent elements that exist when the page loads. These listeners can then handle events from child elements that are added to the DOM at a later time. ***Give an example of using event delegation on a list item in a UL element.*** Instead of attaching an event listener to each LI element, you can attach a single listener to the UL element. The listener can then use the event.target property to determine which LI element was clicked. 
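The UL example above could look like the following sketch; the `ul#menu` selector is hypothetical, and the handler is a standalone function so the `event.target` dispatch logic stays visible:

```javascript
// Event delegation: one listener on the UL handles clicks on every LI,
// including LIs added to the DOM later.
function handleListClick(event) {
  const el = event.target;
  if (el.tagName === 'LI') {
    return `clicked: ${el.textContent}`;
  }
  return null; // click landed outside an LI
}

// Attach the single delegated listener (guarded so the sketch also loads
// outside a browser environment).
if (typeof document !== 'undefined') {
  document.querySelector('ul#menu')?.addEventListener('click', handleListClick);
}
```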
***Explain the concept of exception propagation in JavaScript.*** Exception propagation in JavaScript refers to the process where an error or exception bubbles up the call stack until it is caught with a catch block or reaches the global context, possibly terminating the script. ***How can custom errors be created and used effectively in JavaScript?*** Custom errors can be created by extending the Error class. This is useful for creating more meaningful and context-specific error messages, which can be helpful for debugging and handling specific error cases in application logic. <a href="https://amzn.to/47zpDgS" target="_blank">![Cover of Cracking the Coding Interview with 189 Programming Questions & Solutions](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nsac8gy9m6bjela8izts.jpg)</a> ***What are the potential drawbacks of using event delegation, and how can they be mitigated?*** Potential drawbacks include the accidental handling of events from unintended child elements and slight performance overhead. We can mitigate these by carefully checking the target of the event within the delegated handler and applying delegation judiciously. ***How does event delegation contribute to memory efficiency in JavaScript applications?*** Event delegation contributes to memory efficiency by reducing the number of event handlers needed. Instead of attaching an event listener to each element, a single listener on a common parent can handle events for all its children, reducing the overall memory footprint. ***In JavaScript, how does error handling differ between synchronous and asynchronous code?*** In synchronous code, errors can be caught using `try-catch` blocks. However, in asynchronous code, especially with Promises, errors are handled using `.catch()` methods or `try-catch` blocks inside async functions. 
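The synchronous/asynchronous distinction above can be sketched like this; `mightFail` is a made-up stand-in for any rejecting operation such as a network call:

```javascript
// An asynchronous operation that may reject.
function mightFail(shouldFail) {
  return shouldFail
    ? Promise.reject(new Error('boom'))
    : Promise.resolve('ok');
}

// Promise style: errors are handled with .catch().
mightFail(true).catch((err) => console.log('caught via .catch():', err.message));

// async/await style: a plain try-catch works inside an async function.
async function run(shouldFail) {
  try {
    return await mightFail(shouldFail);
  } catch (err) {
    return `caught via try-catch: ${err.message}`;
  }
}
```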
***What is a TypeError, and how can it be prevented?*** A `TypeError` occurs when a value is not of the expected type, like trying to call a non-function or accessing properties of null. It can be prevented by checking the types of variables before using them. ***How can event delegation affect the performance of web applications?*** Event delegation can improve performance by reducing the number of event listeners needed in an application. This reduces memory usage and can prevent potential memory leaks in applications with dynamically added and removed DOM elements. ***Provide an example of how event delegation can be implemented for keyboard event handling in a form.*** In a form with multiple input fields, instead of attaching a keyup event listener to each field, attach a single listener to the form element. Use event.target within the handler function to determine which input field triggered the event and respond accordingly. ### 5\. JavaScript Interview Questions on Throttling and Debouncing ![Check engine warning light on a vehicle's dashboard symbolizing the importance of debugging skills for JavaScript interview preparation](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6vxzyc93im5zfegjphi3.png) Understand the key concepts of Throttling and Debouncing in JavaScript, crucial for interview discussions on optimizing performance. This section equips you with the know-how to handle related interview questions, particularly in managing repetitive events and user interactions efficiently. ***Explain throttling and debouncing in JavaScript.*** * **Throttling** limits a function’s execution to once in a specified time frame. * **Debouncing** delays a function’s execution until after a specified time has elapsed since its last call. ***Explain the difference between leading and trailing edge in debouncing.*** In debouncing, the leading edge refers to executing the function immediately and then preventing further executions until the timeout. 
The trailing edge, conversely, only executes the function after the timeout has elapsed since the last call. ***Provide a real-world scenario where throttling would be more appropriate than debouncing.*** Throttling is ideal for rate-limiting scenarios like resizing windows or scrolling, where you want to ensure that the event handler is invoked at a consistent rate, regardless of the frequency of the event firing. ***What is the main drawback of using debouncing in user input scenarios?*** The main drawback is the delay in response. If debouncing is used on user input, it can make the application feel less responsive, as it waits for a pause in input before taking action. ***In what scenarios would you prefer debouncing over throttling?*** Prefer debouncing in scenarios where you want the action to occur only after a period of inactivity, like waiting for a user to stop typing before making an API call in a search bar. ***How can you implement throttling in JavaScript without using third-party libraries?*** Implement throttling with a timer variable to prevent a function from executing again until a certain amount of time has passed since its last execution. ***Discuss the use cases where debouncing is more effective than throttling in event handling.*** Debouncing is effective in situations where the action should be triggered after the event has completely stopped firing, such as in typing in a search bar, where you want the action to occur after the user has stopped typing. ***Can you explain how to use debouncing with API calls in a search input field?*** Debouncing in a search input field involves setting up a debounce function that delays the API call until a certain amount of time (a `delay period`) has passed with no further input. This means the API call will only be made after the user has stopped typing for the duration of the delay period, optimizing performance and reducing unnecessary API calls. 
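The timer-based approach described above might look like this minimal sketch, with no third-party libraries; the delay values and the `search` handler are arbitrary examples:

```javascript
// Debounce: run fn only after `delay` ms have passed with no new calls
// (trailing edge).
function debounce(fn, delay) {
  let timer;
  return function (...args) {
    clearTimeout(timer);
    timer = setTimeout(() => fn.apply(this, args), delay);
  };
}

// Throttle (leading edge): run fn at most once per `limit` ms.
function throttle(fn, limit) {
  let lastCall = 0;
  return function (...args) {
    const now = Date.now();
    if (now - lastCall >= limit) {
      lastCall = now;
      fn.apply(this, args);
    }
  };
}

// Usage sketch: debounce a hypothetical search-input handler.
const search = debounce((q) => console.log('searching for', q), 300);
```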
***What are the potential pitfalls in implementing throttling and debouncing, and how can they be avoided?*** Common pitfalls include misunderstanding the use cases (leading to overuse or misuse), setting inappropriate delay times, and issues with the `this` context in JavaScript. These can be avoided by thoroughly understanding the scenarios where each technique is beneficial and testing the implementation under various conditions. ### 6\. Navigating JavaScript Interview Questions on Web Storage ![High-speed train, metaphor for JavaScript's performance in web development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/byeomcgqunci83mbuuxs.png) [Web storage](https://www.webdevstory.com/choosing-web-storage-methods/) is a common topic in JavaScript interview questions, reflecting the need for client-side data management in modern web applications. Here, you’ll learn how to approach these questions confidently. ***Differentiate between localStorage and sessionStorage.*** * `localStorage` stores data with no expiration date. * `sessionStorage` stores data for one session and is cleared after the browser is closed. ***Discuss security considerations when using web storage.*** Web storage is accessible through client-side scripts, so it’s not secure for sensitive data. If you don’t properly sanitize the data, web storage becomes vulnerable to XSS attacks. ***How can you detect if web storage is available in a user’s browser?*** You can detect web storage availability by trying to set an item in the storage and catching any exceptions that occur, which would indicate that storage is full or unavailable. ***What is the storage limit for localStorage and sessionStorage, and how does it impact its usage?*** The storage limit for both `localStorage` and `sessionStorage` is typically around 5MB. This limit forces efficient use of storage and makes it unsuitable for large amounts of data, such as high-resolution images or lengthy video content. 
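The set-and-catch detection technique described above is often written like the following sketch; the `__probe__` key name is arbitrary:

```javascript
// Detect whether a web storage area is usable: try to write and remove a
// probe item, and treat any exception (quota exceeded, storage disabled,
// no browser environment) as "not available".
function storageAvailable(type) {
  try {
    const storage = window[type]; // throws outside a browser
    const probe = '__probe__';
    storage.setItem(probe, probe);
    storage.removeItem(probe);
    return true;
  } catch (e) {
    return false;
  }
}
```

In a browser, `storageAvailable('localStorage')` returns `true` unless storage is full or blocked; outside a browser, the `window` lookup throws and the function returns `false`.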
***Can users use web storage across different tabs or windows in the same browser?*** `localStorage` is shared across all tabs and windows from the same origin, while `sessionStorage` is limited to a single tab or window. ***How does the sessionStorage object differ from the localStorage object in terms of lifetime and scope?*** The `sessionStorage` object stores data for one session and clears it when you close the tab or browser, while `localStorage` keeps data even after reopening the browser. `sessionStorage` is limited to a single tab, while localStorage data is accessible across all tabs and windows from the same origin. ***Can we consider web storage as a secure way to store sensitive data? Why or why not?*** Web storage is not secure for sensitive data as it’s easily accessible through client-side scripts and can be vulnerable to cross-site scripting (XSS) attacks. Sensitive data should be stored server-side and transmitted over secure protocols. ***Discuss the use of IndexedDB compared to localStorage and sessionStorage.*** `IndexedDB` provides a more robust solution for storing significant amounts of structured data. Unlike `localStorage` and `sessionStorage`, it supports transactions for reading and writing large amounts of data without blocking the main thread, and it can store data types of strings. ***How can you synchronize data stored in localStorage or sessionStorage across different tabs or windows?*** We use the Window object’s storage event to synchronize data across different tabs or windows when you change localStorage or sessionStorage in another document. ### 7\. Insights into JavaScript Engines ![High-speed train, metaphor for JavaScript's performance in web development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/u8138s74wny9ffda13cz.png) Interviewers often asking questions about JavaScript engines, a topic that many candidates overlook during their preparation. 
In this section, we’ll address key JavaScript interview questions related to engines like Chrome’s V8 and Firefox’s SpiderMonkey, which are essential in advanced interviews. You should have a deeper understanding of critical concepts such as Just-In-Time (JIT) compilation, cross-browser compatibility, and the distinct characteristics of various JavaScript engines. ***What is a JavaScript engine, and name some popular ones?*** * JavaScript Engine: Converts JavaScript code to machine code. * Examples: V8 (Chrome), SpiderMonkey (Firefox), Chakra (Edge). ***How do different JavaScript engines impact cross-browser compatibility?*** Different JavaScript engines, like V8 in Chrome and SpiderMonkey in Firefox, lead to variations in JavaScript code interpretation and execution, affecting cross-browser compatibility. ***Discuss the role of Just-In-Time (JIT) compilation in JavaScript engines.*** JIT compilation in JavaScript engines compiles JavaScript code to machine code at runtime rather than ahead of time, optimizing for performance by compiling frequently executed code paths. ***Discuss the impact of JavaScript engine optimizations on web application performance.*** JavaScript engine optimizations, such as JIT compilation and efficient garbage collection, can significantly enhance web application performance. They improve the execution speed of scripts and manage memory more effectively, leading to faster, smoother, and more responsive web applications. ***How does the choice of the JavaScript engine affect the development of cross-platform JavaScript applications?*** The choice of a JavaScript engine can affect performance, compatibility, and the availability of certain features in cross-platform applications. Developers need to ensure their code runs efficiently and consistently across different engines, often requiring cross-browser testing and sometimes polyfills for compatibility. 
***What role do JavaScript engines play in the execution of server-side JavaScript, such as Node.js?*** JavaScript engines like V8, used in Node.js, interpret and execute JavaScript code on the server side, providing the runtime environment and performance optimizations necessary for server-side applications. ***How have advancements in JavaScript engines influenced the evolution of JavaScript as a language?*** Advancements in JavaScript engines, such as improved JIT compilation and optimized garbage collection, have enabled more complex and performance-intensive applications to be built in JavaScript, influencing the development of more sophisticated features in the language itself. ***Explain how garbage collection in JavaScript engines can affect application performance.*** Garbage collection (GC) is the process of freeing up memory that is no longer in use. While necessary, GC can affect application performance, particularly if it happens frequently or takes a long time to complete, as it temporarily pauses script execution. ***Describe the concept of ‘optimization killers’ in JavaScript engines.*** `Optimization killers` are certain code patterns or practices that prevent JavaScript engines from optimizing code execution. Examples include using the `with` statement, excessively large scripts, or changing the prototype of an object after its creation. Avoiding these practices can lead to more consistent performance across different JavaScript engines. ### 8\. JavaScript Interview Questions on Design Patterns ![Vibrant programming data analytics visual for JavaScript developers](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/nuz6sq5w4519t4soxk1s.png) This section covers JavaScript interview questions related to [design patterns](https://www.webdevstory.com/software-design-patterns/), an area that tests your ability to write maintainable and scalable code. ***Can you explain some common design patterns used in JavaScript?*** * **Module Pattern**: Encapsulates privacy, state, and organization using closures. * **Singleton Pattern**: Ensures a class has only one instance and provides a global point of access to it. * **Observer Pattern**: Allows an object (subject) to notify other objects (observers) when its state changes. ***How do import and export statements work in ES6 modules?*** ES6 modules allow breaking code into reusable pieces, using `import` and `export` for sharing across files. ***What are the benefits of using module patterns in JavaScript?*** Module patterns offer encapsulation, namespace management, reusability, and dependency management. They help in organizing code, avoiding global scope pollution, and managing application complexity. ***Compare named exports and default exports in ES6 modules. When should we use each of them?*** `Named exports` are useful for exporting several values, allowing them to be imported with the same names. Default exports are ideal for a module that exports a single value or function. Named exports improve code readability and maintenance, while default exports simplify the import process when a module exports a single entity. ***What are some common issues or challenges that arise when using JavaScript modules, and how can developers address them?*** Common issues include managing dependencies, dealing with different module formats, and handling browser compatibility. 
Developers can address these issues by using tools like `Babel` to transpile code and `Webpack` to bundle modules, and by following recommended practices for managing dependencies. ***How do dynamic imports work in JavaScript, and when would you use them?*** Dynamic imports in JavaScript use the `import()` function to load modules asynchronously. They are useful when you want to load modules on demand, like with code splitting, for better performance. ***How do module patterns support encapsulation in JavaScript?*** Module patterns in JavaScript support encapsulation by allowing developers to define private and public members. This is achieved by exposing only selected functions and properties through a public interface while keeping the rest of the module’s members hidden from the global scope. ***Compare the CommonJS module pattern with the ES6 module system.*** `CommonJS`, used predominantly in `Node.js`, loads modules synchronously through `require()` and exposes them via `module.exports`. ES6 modules, designed with asynchronous loading in mind, use the `import` and `export` syntax. They can be statically analyzed, which enables tree shaking for enhanced performance optimizations. ### 9\. Efficient Memory Management in JavaScript ![Complex binary code on a monitor, related to JavaScript programming challenges](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/vs14gjksc2xlsfw6bkj6.png) Effective memory management is crucial in JavaScript. The questions in this section will test your knowledge of how JavaScript handles memory allocation and garbage collection. ***How does garbage collection work in JavaScript, and what are some best practices for memory management?*** * **Garbage Collection:** Automatic memory management where the engine frees up memory used by objects no longer in use. * **Best Practices:** Include avoiding global variables, carefully managing object lifecycles, and using weak references where appropriate. 
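The weak-reference best practice can be sketched with a small cache (`getMetadata` is a hypothetical helper for illustration, not a standard API):

```javascript
// A WeakMap holds its keys weakly: entries do not keep their key
// objects alive, so the cache cannot cause a leak on its own.
const cache = new WeakMap();

function getMetadata(obj) {
  if (!cache.has(obj)) {
    cache.set(obj, { created: Date.now() });
  }
  return cache.get(obj);
}

let user = { name: "Ada" };
console.log(typeof getMetadata(user).created); // "number"

// Dropping the last strong reference makes both the object and its
// cached entry eligible for garbage collection. With a regular Map,
// the cache itself would keep `user` alive indefinitely.
user = null;
```

The design choice here is that the cache's lifetime should follow the objects it describes, not the other way around.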
***What is a memory leak in JavaScript, and how can it be prevented?*** A memory leak in JavaScript occurs when the application continuously uses more memory without releasing memory that is no longer needed. It can be prevented through careful coding practices, such as avoiding global variables, properly managing event listeners, and ensuring that large objects are properly disposed of when no longer needed. ***Describe how closures can impact memory management in JavaScript.*** Closures can lead to memory leaks if they keep references to large objects or DOM elements that are no longer needed. Since the closure keeps a reference to these objects, it prevents them from being garbage collected. Developers need to ensure that closures only hold on to what is necessary and release references to objects that are no longer needed. ***Explain the concept of ‘garbage collection’ in JavaScript. How does it work?*** Garbage collection in JavaScript is the process of automatically finding and reclaiming memory that is no longer in use by the program. The JavaScript engine identifies variables and objects that are no longer reachable from the root and not used in the program and then frees up their memory. ***How can developers prevent memory leaks in JavaScript applications?*** To prevent memory leaks, developers should avoid unnecessary global variables, use event listeners judiciously, and ensure they are removed when no longer needed. They should also be cautious with closures that may inadvertently hold references to large objects and manage DOM references properly. ***Discuss the challenges of memory management in single-page applications (SPAs).*** In SPAs, memory management can be challenging because of the application’s long life cycle. As users navigate through the application, it’s crucial to ensure that memory is not being consumed unnecessarily. This includes managing event listeners, cleaning up after components are destroyed, and avoiding the accumulation of stale data in application state. 
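The listener-cleanup discipline described for SPAs can be sketched like this, with a plain `EventTarget` standing in for a DOM element (`Widget` is a hypothetical component class, not a framework API):

```javascript
// Teardown discipline for long-lived SPAs: every listener a component
// registers is removed again in destroy().
class Widget {
  constructor(target) {
    this.target = target;
    this.onPing = () => { this.lastPing = Date.now(); };
    target.addEventListener("ping", this.onPing);
  }
  destroy() {
    // Without this, `target` keeps a reference to onPing (and, via the
    // closure, to the whole widget) long after the component is gone.
    this.target.removeEventListener("ping", this.onPing);
  }
}

const bus = new EventTarget();
const widget = new Widget(bus);
bus.dispatchEvent(new Event("ping"));
console.log(typeof widget.lastPing); // "number"
widget.destroy(); // safe to drop `widget` now
```

Framework lifecycle hooks (for example, the cleanup function returned from React's `useEffect`) exist precisely so this teardown runs when a component is destroyed.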
***What role do memory profiles and heap snapshots play in managing memory in JavaScript?*** Memory profiles and heap snapshots are tools used to debug memory issues. They provide insights into the memory usage of a JavaScript application, helping to identify memory leaks and inefficient memory usage. Developers can analyze these snapshots to identify which objects consume the most memory and understand how they are being retained. ***Describe how weak references introduced in ES6 with WeakMap and WeakSet help with memory management.*** `WeakMap` and `WeakSet` are collections that hold weak references to their elements, meaning the references to objects in these collections do not prevent garbage collection if there are no other references to the objects. This makes them useful for associating data with large objects without preventing those objects from being garbage collected when they are no longer in use elsewhere. ### 10\. JavaScript Interview Questions on Frameworks ![REACT spelled with blocks, foundational for JavaScript UI development](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/yq62uzvgpodh9e7lej2u.png) The questions in this section cover modern JavaScript frameworks, testing your understanding of the architectural ideas behind libraries and frameworks like React, Angular, and Vue. ***Discuss the impact of modern JavaScript frameworks (like React, Angular, and Vue) on web development.*** These frameworks have significantly influenced the structure, performance, and scalability of web applications, introducing concepts like component-based architecture and reactive programming. ***How have modern JavaScript frameworks influenced web development practices?*** Modern JavaScript frameworks have revolutionized web development by introducing new concepts like component-based architecture, virtual DOM, reactive data binding, and single-page applications (SPAs). 
They have made it easier to build complex, scalable, and performant web applications while also improving developer productivity through reusable components and streamlined development processes. ***Discuss the concept of server-side rendering (SSR) in modern JavaScript frameworks and its benefits.*** Server-side rendering (SSR) in modern JavaScript frameworks refers to rendering components on the server and sending the resulting HTML to the client. Benefits include faster initial page load times, improved SEO, and better performance on devices with limited computing power. Frameworks like `Next.js (React)` and `Nuxt.js (Vue)` provide easy-to-use SSR capabilities. ***Compare and contrast React, Angular, and Vue in terms of their core philosophies and use cases.*** React focuses on a component-based approach with a large ecosystem suitable for flexible and scalable applications. Angular offers a full-fledged MVC framework, providing a more opinionated structure ideal for enterprise-level applications. Vue combines ideas from both React and Angular, offers an easy learning curve, and is great for both small and large-scale applications. ***How do state management patterns in modern JavaScript frameworks enhance application scalability and maintainability?*** State management patterns, such as `Redux` in `React` or `Vuex` in `Vue`, provide a centralized store for all the components in an application. This approach makes it easier to manage the state, especially in large applications, leading to more predictable data flow, easier debugging, and better maintainability. ***Discuss the role of component lifecycle methods in frameworks like React or Vue. How do they contribute to the application’s behavior?*** Component lifecycle methods in frameworks like `React` and `Vue` provide hooks that allow developers to run code at specific points in a component's life, such as creation, updating, or destruction. 
These methods are crucial for tasks like making API calls, setting up subscriptions or timers, and optimizing performance through controlled updates. ***Explain the significance of Virtual DOM in frameworks like React. How does it improve the application’s performance?*** The Virtual DOM is an abstraction of the actual DOM, allowing frameworks like React to batch and minimize changes to the real DOM. This leads to improved performance, as updating the real DOM is often the bottleneck in web application performance. The Virtual DOM enables efficient diffing algorithms to update only what’s necessary, reducing the time and resources needed for DOM manipulations. ***What is the importance of component-based architecture in modern web development?*** Component-based architecture allows developers to build applications with reusable, isolated, and modular components. This approach makes development, testing, and maintenance easier, as developers can develop and test each component in isolation before integrating it into the larger application. It also promotes reusability and consistency across different parts of the application. ### 11\. Tricky JavaScript Interview Questions ![Blurry JavaScript code, representing the need for clarity in interviews](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ekc9zrf53l2jfoitaklp.png) Each JavaScript interview question in this section focuses on the trickier aspects of the language, challenging you to show both your technical knowledge and problem-solving skills. ***Explain type coercion and its peculiarities.*** Type coercion involves the implicit conversion of values from one type to another, often leading to unexpected results. ***Discuss precision issues in JavaScript, especially with floating-point arithmetic.*** Floating-point arithmetic can lead to precision loss, as is clear in operations like `0.1 + 0.2`. 
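The coercion behaviors just described can be checked directly in a console; these results are standard JavaScript semantics:

```javascript
// Implicit coercion results worth predicting before an interview:
console.log(1 + "2");           // "12"  (number coerced to string)
console.log("5" - 2);           // 3     ("-" forces numeric coercion)
console.log("5" == 5);          // true  (loose equality coerces)
console.log("5" === 5);         // false (strict equality does not)
console.log(null == undefined); // true  (special loose-equality rule)
console.log(Number.isNaN(NaN)); // true, even though NaN === NaN is false
```

Being able to explain *why* each line prints what it does is usually worth more in an interview than memorizing the outputs.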
***Why does 0.1 + 0.2 !== 0.3 in JavaScript, and what does it illustrate about JS numbers?*** Because of floating-point arithmetic, `0.1 + 0.2` results in a value slightly over `0.3`. This highlights issues with precision in JavaScript's number representation. ***Predict the output of `console.log([] + []);` and explain why.*** Outputs an empty string `""`. When coercing arrays to strings, both arrays become empty strings, and concatenating them results in another empty string. ***What does the following function return and why?***

```javascript
function getNumber() {
  return
  {
    value: 23
  };
}
console.log(getNumber());
```

Returns undefined because of Automatic Semicolon Insertion: a semicolon is inserted right after `return` (since the opening brace is on the next line), causing the function to return nothing. ***What will the following code output be and why?***

```javascript
var x = 21;
var girl = function () {
  console.log(x);
  var x = 20;
};
girl();
```

Outputs undefined. The declaration of variable `x` is hoisted within the function `girl`, so `console.log(x)` is executed before `x` is assigned `20`. ***Given the following code, what gets logged and why?***

```javascript
var a = 5;
(function() {
  var a = 6;
  console.log(a);
})();
console.log(a);
```

Logs `6` and `5`. The IIFE creates its own scope, so the first `console.log` refers to the inner `a`, while the second `console.log` refers to the outer `a`. ***How does JavaScript handle type coercion in comparison operations? Give an example.*** With the loose equality operator (`==`), JavaScript coerces the operands to matching types before comparing. For example, in `"5" == 5`, the string `"5"` is coerced to the number `5` before comparison, making the statement true. ***Explain how precision issues in floating-point arithmetic can affect calculations in JavaScript. Provide an example.*** JavaScript uses a double-precision floating-point format for numbers, leading to precision issues in arithmetic calculations. 
For example, `0.1 + 0.2` results in `0.30000000000000004` instead of `0.3`, which can lead to unexpected results in financial calculations. ### Conclusion JavaScript’s constantly evolving nature makes it an exciting and challenging language. This guide covers a wide range of JavaScript interview questions, from basic syntax to complex concepts, preparing you for various interview scenarios. Remember, understanding these concepts is about grasping the language’s essence, not just memorization. To further enhance your journey towards becoming an expert, explore our detailed resources on [becoming an expert programmer](https://www.mmainulhasan.com/becoming-a-competent-programmer/). Continuous practice and learning are key to mastering any programming language. Good luck with your interviews, and happy coding! 🚀 **Elevate Your Coding Interviews - Engage & Share!** ❤️ & 🦄 If you’ve gained insight, please ‘like’ and bookmark this guide for your interview prep 💬 Got a tip or question? Join the conversation below—your input enriches us all 🔄 Share this guide with peers to spread the knowledge! Your engagement fuels our collective growth and strengthens our developer community. 🌟 Thanks for being a key part of this journey! Each interaction here is a step toward shared success. Your likes, comments, and shares create a vibrant learning ecosystem. Keep connecting and sharing! ***Support Our Tech Insights*** If you've enjoyed our work and gained valuable insights, we'd be grateful for your support. Your contributions help us continue to deliver quality content. 
If you're interested, here's how you can support us: <a href="https://www.buymeacoffee.com/mmainulhasan" target="_blank">![Buy Me A Coffee](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lsm9uucbnw7x9iw0loxr.png)</a> <a href="https://www.paypal.com/donate/?hosted_button_id=GDUQRAJZM3UR8" target="_blank">![Donate via PayPal button](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ipnhbim2ba56kt32zhn3.png)</a> Note: Some links on this page might be affiliate links. If you make a purchase through these links, I may earn a small commission at no extra cost to you. Thanks for your support! ## Read Next... {% embed https://dev.to/mmainulhasan/the-definitive-programming-roadmap-from-novice-to-expert-e1k %} {% embed https://dev.to/mmainulhasan/7-advanced-css-selectors-you-should-know-70g %} {%embed https://dev.to/mmainulhasan/debating-the-right-programming-language-focus-here-instead-2mka %}
mmainulhasan
1,724,011
What Happens If A Diesel Generator Is Left To Run Out Of Fuel
Diesel generators are the most important thing in the areas where electricity cutting and shortage...
0
2024-01-11T06:49:40
https://dev.to/techhubsd/what-happens-if-a-diesel-generator-is-left-to-run-out-of-fuel-2nn9
news
Diesel generators are essential in areas where power cuts and shortages are frequent. Every place needs a solution suited to the problems it faces, and a diesel generator can solve problems like power cuts and shortages. https://www.ablesales.com.au/ is a leading provider of solutions to electricity problems in Australia, working to deliver a reliable source of energy that can be used in the long term. **Electricity crisis in Australia** The state of the electricity grid in Australia is very challenging, as it struggles to balance reliability and affordability with environmental sustainability. Various issues are causing problems in the electrical system of the whole country. **Age-old infrastructure** The infrastructure of the Australian electricity system is very complex and has not been modernised with current technology. Much of the power grid, including many transformers, is operating beyond its intended lifespan, which is a significant reason for the country's electricity problems. Crucial upgrades are needed to modernise the whole system. **Reliance on fossil fuel** The major source of electricity is fossil fuel, chiefly coal, which supplies more than half of the country's requirement. Although it is a cheaper resource, it causes air pollution and greenhouse gas emissions. Moreover, this source is not as reliable as the others: if the country ran out of coal, there would be no electricity. **Incorporating renewable energy is a challenging task** Although the country is making progress in incorporating solar and wind energy into the power system, it remains challenging. Efficiency solutions, such as those from https://garpen.com.au/, are essential for upgrading the grid. 
**Weather conditions** Extreme weather conditions like floods, bushfires, and heat waves are common in this region and frequently disrupt power lines. This means lines must constantly be repaired or replaced, and a lasting solution to this problem is still being sought. **Why is diesel preferred over petrol?** [Diesel fuel is often preferred over petrol](https://www.spinny.com/blog/index.php/petrol-vs-diesel-efficiency-and-cost-of-running/) for several reasons, but it's not always the clear-cut winner. Here's a breakdown of their pros and cons to help you decide which is better for your needs: **Diesel Advantages:** **Fuel Efficiency:** Diesel engines extract more energy from each gallon of fuel compared to petrol engines, resulting in 20-30% better fuel economy on average. This translates to significant cost savings, especially for high-mileage drivers. **Torque and Power:** Diesel engines typically generate more torque at lower RPMs, making them feel more powerful and responsive, especially for hauling heavy loads or towing. This is particularly advantageous for trucks, SUVs, and commercial vehicles. **Durability:** Diesel engines are generally built sturdier and can last longer than petrol engines, especially with proper maintenance. This can make them a more cost-effective choice in the long run, despite their higher initial purchase price. **Lower Emissions:** Modern diesel engines emit less carbon dioxide (CO2) per unit of energy compared to petrol engines. However, they tend to emit higher levels of nitrogen oxides (NOx) and particulate matter (PM), which require advanced emission control systems to comply with regulations. **Petrol Advantages:** **Lower Cost:** Petrol cars are generally cheaper to buy upfront compared to their diesel counterparts. 
This matters to buyers on a budget. **Smoother and Quieter:** Petrol engines tend to run more smoothly and quietly than diesel engines, providing a more refined driving experience. **Wider Availability:** Petrol is generally more readily available than diesel, especially in remote areas or developing countries. This can be important for drivers who travel frequently or live outside of major urban centres. **Faster Refueling:** Petrol pumps typically deliver fuel faster than diesel pumps, resulting in shorter refueling times. **So, which is better?** If you prioritize fuel efficiency, torque, and durability, and plan on driving many miles, diesel might be the better choice. However, if you're on a tight budget, prefer a smoother and quieter driving experience, or need wider fuel availability, petrol might be a better fit. Driving habits: If you mostly drive short distances in stop-and-go traffic, a petrol engine might be more efficient. Maintenance costs: Diesel engines generally have higher maintenance costs than petrol engines. Environmental impact: While modern diesel engines have improved emissions, petrol engines still have a lower overall environmental impact. **What happens when the diesel generator is left running out of fuel?** Any machinery needs an adequate fuel supply to run, and letting a generator run dry can cause a breakdown in the system. The fuel supply can dry out, and the engine will lose its power and sputter, which can stall the whole system. Without fuel, the engine will also fail to cool down and will overheat, which can cause internal damage. It can also back-feed the grid, which is a safety hazard to workers and can damage the generator. Running dry also damages fuel-system components like the pump and injectors, because they won't be lubricated without fuel, and these may then require replacement. 
The battery will also drain if the circuit stays live; it then cannot be used without recharging, and sometimes this causes permanent damage to the battery. Another major problem is the build-up of carbon, the result of incomplete combustion in the engine, which reduces efficiency and performance. **Conclusion** Ultimately, the best way to decide is to research specific car models and consider your individual needs and preferences. Take test drives of both petrol and diesel vehicles to see which one you prefer. It is also important never to let your genset run out of fuel, as this will cause serious damage and long-term problems. You should always monitor fuel levels and refuel as soon as you see the level drop to avoid any issues.
techhubsd
1,724,026
Top Fantasy Football Draught Apps for 2024
Introduction The anticipation is palpable as the 2023–2024 fantasy football season approaches. As...
0
2024-01-11T07:04:12
https://dev.to/agnitotechnologies1/top-fantasy-football-draught-apps-for-2024-1jbi
fantasy, footballapp
Introduction The anticipation is palpable as the 2023–2024 fantasy football season approaches. As avid fantasy football enthusiasts gear up to draft their dream teams, the choice of the right draft app becomes paramount. To aid in your quest for fantasy football glory, we've compiled a list of the top fantasy football draft apps for the upcoming season. These apps, provided by leading [Fantasy Football Software Providers](https://agnitotechnologies.com/fantasy-football-app-development-company/), offer an array of features and tools to help you craft a winning roster and elevate your fantasy football experience to new heights. Get ready to dive into the world of fantasy football with these cutting-edge draft apps. ESPN Fantasy Football ESPN Fantasy Football is a perennial favorite among fantasy football enthusiasts. Known for its user-friendly interface and comprehensive features, it provides a seamless platform for the 2023–2024 Fantasy Football Draft. With ESPN's vast network of sports experts and real-time analysis, you can make informed draft choices. Customizable draft settings, mock drafts, and expert advice give you the edge in building your ideal roster. Plus, it offers a mobile app for drafting on the go. ESPN Fantasy Football is a trusted companion for the upcoming season, helping you draft with confidence and set your sights on victory. MyFantasyLeague For serious fantasy football managers, MyFantasyLeague is a top pick for the 2023-2024 Fantasy Football Draft. This platform offers a highly customizable and flexible drafting experience. Whether you're into standard drafts, auction drafts, or dynasty leagues, MyFantasyLeague has you covered. It boasts in-depth statistical analysis, a user-friendly interface, and robust commissioner tools. What sets it apart is the ability to import and analyze data from past seasons, giving you a competitive advantage. 
With MyFantasyLeague, you can tailor your draft strategy and dominate your league like a true [fantasy football](https://agnitotechnologies.com/machine-learning-ai-fantasy-football/) aficionado. Fleaflicker Fleaflicker is a hidden gem for fantasy football enthusiasts gearing up for the 2023-2024 season. This platform combines a clean interface with a wealth of features that make drafting a breeze. Fleaflicker offers customizable scoring options, including IDP (Individual Defensive Players) leagues, and supports multiple draft formats. With its user-friendly design, you can easily navigate through drafts, conduct mock drafts, and access real-time player updates. The mobile app ensures you stay connected to your draft from anywhere. Fleaflicker's commitment to user satisfaction makes it a compelling choice to enhance your fantasy football experience in the upcoming season. CBS Sports Fantasy Football When it comes to fantasy football, CBS Sports Fantasy Football is a name you can trust. It offers a comprehensive platform for the 2023-2024 season with a plethora of tools and resources. The CBS Sports expert analysis provides invaluable insights to aid your draft strategy. The app's user-friendly interface ensures a smooth drafting experience, whether you're a rookie or a seasoned manager. With customizable draft settings and real-time player updates, you'll have the upper hand in building a championship-caliber team. CBS Sports Fantasy Football is your ticket to a successful fantasy football journey this season. NFL Fantasy Football For many, the NFL is the epitome of American football, and their fantasy football app lives up to that legacy. With NFL Fantasy Football, you're in the big leagues of fantasy sports. It offers an array of features, including mock drafts, expert analysis, and customizable scoring settings. Real-time player updates and injury reports keep you informed during your draft. 
The official NFL platform also integrates seamlessly with NFL.com, offering a one-stop shop for all your fantasy football needs. As one of the best fantasy football apps, NFL Fantasy Football promises a thrilling and competitive 2023-2024 season. Sleeper Sleeper is the ultimate game-changer for the 2023-2024 Fantasy Football Draft. It's designed to provide a unique and engaging drafting experience for football fanatics. With real-time chat, instant trade alerts, and innovative features like "Rookie Pick Trading," Sleeper fosters a sense of community among league members. It offers in-depth player stats, expert analysis, and customizable draft settings to cater to various league formats. Sleeper's mobile app ensures that you stay connected to your draft wherever you are. If you're looking for a fresh take on the fantasy football draft, Sleeper is your go-to platform for the upcoming season. Yahoo Fantasy Sports Yahoo Fantasy Sports has been a trusted name in fantasy football for years, and the 2023-2024 season is no exception. With a user-friendly interface and a wealth of features, it's a top choice for fantasy football enthusiasts. Yahoo offers mock drafts, customizable scoring settings, and expert analysis to help you prepare for your draft. The mobile app, designed by the leading Fantasy Football Software Provider, ensures you're always in the game. Plus, the Yahoo Sports team provides real-time player updates and insights. When it comes to reliable and comprehensive fantasy football platforms, Yahoo Fantasy Sports continues to set the standard for the upcoming season. Conclusion As the excitement builds for the 2023-2024 fantasy football season, the choice of your draft app can significantly impact your success. 
Whether you opt for the familiarity of ESPN, the customization of MyFantasyLeague, the hidden gem Fleaflicker, or any of the other top contenders like CBS Sports, NFL Fantasy Football, Sleeper, or Yahoo Fantasy Sports, each platform offers a unique experience. Consider your league's preferences, your draft strategy, and the user interface that resonates with you. The best fantasy football draft app is the one that aligns with your needs and helps you build a winning team. So, draft wisely and may your fantasy football dreams come true in the upcoming season!
agnitotechnologies1
1,724,083
How to Choosing The Right SEO Services For You in 2024
In the ever-evolving digital landscape of 2024, the importance of Search Engine Optimization (SEO)...
0
2024-01-11T07:30:19
https://dev.to/ojasvi/how-to-choosing-the-right-seo-services-for-you-in-2024-24j2
seo, searchengine, webdev
In the ever-evolving digital landscape of 2024, the importance of Search Engine Optimization (SEO) cannot be overstated. With businesses vying for online visibility and a higher ranking on search engine results pages (SERPs), selecting the right SEO services is crucial for success. This guide will walk you through key considerations to ensure you make an informed decision that aligns with your business goals. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7i7sxce06y4r1bzncs27.png) SEO Company in USA — https://www.time4servers.com/seo-company-in-india.html Understanding Your Business Needs: Before diving into the plethora of [SEO services](https://www.time4servers.com/seo-company-dubai.html) available, it’s essential to have a clear understanding of your business objectives. Different businesses have unique goals, and your SEO strategy should be tailored to meet those specific needs. Whether you aim to increase website traffic, improve lead generation, or boost online sales, a targeted approach will help in selecting the most relevant SEO services. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/krvkkkdctgh2jqpbbjy7.jpg) Comprehensive Keyword Research: Keywords are the foundation of any successful SEO strategy. In 2024, search algorithms are more sophisticated than ever, making comprehensive keyword research a critical aspect of your SEO plan. Identify industry-specific keywords and long-tail phrases that align with your products or services. An effective SEO service provider will conduct in-depth keyword research to ensure your content is optimized for relevant and high-traffic terms. SEO Company in Dubai- https://www.time4servers.com/seo-company-dubai.html On-Page and Off-Page Optimization: A reputable SEO service should offer a balanced approach to on-page and off-page optimization. 
On-page optimization involves fine-tuning elements on your website, such as meta tags, headers, and content, to make it more search engine-friendly. Off-page optimization, on the other hand, focuses on building quality backlinks and enhancing your website’s authority in the eyes of search engines. Ensure that the SEO services you choose encompass both aspects for a holistic strategy. Content Quality and Relevance: Content remains king in the realm of SEO. High-quality, relevant, and engaging content not only attracts visitors but also signals to search engines that your website is a valuable resource. Your chosen SEO services should include content creation and optimization, ensuring that your website offers value to both users and search engines. Look for a provider that emphasizes the importance of fresh, informative, and shareable content to keep your audience engaged. Local SEO Focus: For businesses targeting a local audience, prioritizing local SEO is imperative. With the increasing use of location-based searches, optimizing your online presence for local searches can significantly boost your visibility. Look for SEO services that specialize in local optimization, including creating and optimizing Google My Business profiles, local citation building, and geographically targeted keyword optimization. Transparent Reporting and Analytics: Transparency is key when it comes to evaluating the effectiveness of your SEO strategy. A reliable SEO services provider should offer regular and detailed reporting on key performance metrics. This includes tracking changes in search rankings, website traffic, conversion rates, and other relevant KPIs. Utilizing analytics tools, such as Google Analytics, should be an integral part of their approach, enabling you to measure the impact of their efforts on your business goals. Adaptability to Algorithm Updates: Search engine algorithms are in a constant state of evolution. 
Google, in particular, regularly updates its algorithms to enhance user experience and weed out manipulative tactics. Your chosen SEO services should demonstrate the ability to adapt to these changes seamlessly. Inquire about their strategies for staying abreast of algorithm updates and how they adjust their tactics to ensure sustained performance. Conclusion: Choosing the right SEO services for your business in 2024 requires a strategic approach that aligns with your unique goals and adapts to the dynamic digital landscape. By prioritizing comprehensive keyword research, on-page and off-page optimization, quality content creation, technical SEO expertise, local SEO focus, transparent reporting, and adaptability to algorithm updates, you can make an informed decision that propels your business to new heights of online success. Remember, investing in the right SEO services is an investment in the long-term growth and visibility of your business in the digital world.
ojasvi
1,724,088
natural herbs
All types of pure herbs are available
0
2024-01-11T07:33:19
https://dev.to/newpansari/natural-herbs-2cn6
_All types of pure herbs are available_
newpansari
1,724,248
DevOps Certification Online
In the dynamic landscape of IT and software development, the adoption of DevOps practices has become...
0
2024-01-11T10:39:24
https://dev.to/leoanthony/devops-certification-online-1a11
devops, azure, aws
In the dynamic landscape of IT and software development, the adoption of DevOps practices has become a driving force behind innovation and efficiency. As organizations increasingly prioritize this transformative approach, the demand for skilled DevOps professionals has surged. To validate and showcase your proficiency in DevOps, pursuing a DevOps certification online has become a strategic move for career advancement. In this blog post, we explore the significance of DevOps certification, the hands-on training it entails, and the diverse avenues available for online DevOps training. ## The Significance of DevOps Certification Online DevOps certification is more than just a credential; it is a validation of your expertise and commitment to mastering the DevOps discipline. In a competitive job market, certification serves as a tangible demonstration of your ability to implement DevOps practices effectively. **[Online DevOps certification](https://www.h2kinfosys.com/courses/devops-online-training-course/)** programs go beyond theoretical knowledge, emphasizing hands-on training to ensure that certified professionals are well-equipped to navigate real-world challenges. **AWS Certified DevOps Engineer:** Amazon Web Services (AWS) offers the AWS Certified DevOps Engineer certification, which focuses on hands-on experience with AWS tools and services. This certification validates your ability to design, manage, and implement continuous delivery systems on the AWS platform. **Microsoft Certified: Azure DevOps Engineer Expert:** For those entrenched in the Microsoft ecosystem, the Microsoft Certified: Azure DevOps Engineer Expert certification is a valuable recognition. This certification assesses your proficiency in implementing DevOps practices using Microsoft Azure, with hands-on labs to reinforce your practical skills. **Docker Certified Associate:** As containerization gains prominence, the Docker Certified Associate certification has become a sought-after credential. 
This certification evaluates your competency in working with Docker containers and includes hands-on tasks to validate your practical expertise. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/69b41h24deb3wrzbnc3o.png) ## Hands-On DevOps Training: A Core Component of Certification One of the distinguishing features of quality **[DevOps certification](https://www.h2kinfosys.com/courses/devops-online-training-course/)** programs is the inclusion of hands-on training. While theoretical knowledge is essential, the ability to apply DevOps principles in real-world scenarios is what sets certified professionals apart. Hands-on training provides an immersive learning experience, allowing candidates to gain practical insights into the tools, processes, and collaborative practices integral to successful DevOps implementations. ## Choosing the Right Online Course for DevOps Training Selecting the right online course is crucial for a successful DevOps training journey. Consider the following factors when making your decision: **Hands-On Labs and Projects:** Look for courses that prioritize hands-on labs and real-world projects. Practical application is essential for reinforcing theoretical concepts. **Instructor Expertise:** Courses led by industry experts bring valuable insights and real-world experiences to the learning environment. Verify the credentials and expertise of the course instructors. **Tool Coverage:** Ensure that the course covers a diverse range of DevOps tools. Proficiency in popular tools is vital for a well-rounded DevOps skill set. **Community Support:** Courses with an active community or support system provide opportunities for collaboration and problem-solving. Being part of a community enhances the learning experience. **Conclusion:** Transform Your Career with DevOps Certification In the competitive landscape of IT, a DevOps certification online has become a beacon for career advancement. 
It not only validates your expertise but also equips you with practical skills through hands-on training. Whether you choose a certification program from AWS, Microsoft, or Docker, or opt for online courses on platforms like Udemy, [h2kinfosys](https://www.h2kinfosys.com/), Pluralsight, or LinkedIn Learning, the key lies in combining theoretical knowledge with practical application. Elevate your career, embrace the transformative power of DevOps, and embark on a journey toward expertise that sets you apart in the evolving world of IT and software development.
leoanthony
1,724,256
Navigating the DevOps Landscape: A Comprehensive Insight into the Modern Software Development Role
In the rapidly evolving world of software development, DevOps professionals stand at the forefront,...
0
2024-01-11T10:50:24
https://dev.to/annajade1234/navigating-the-devops-landscape-a-comprehensive-insight-into-the-modern-software-development-role-id2
In the rapidly evolving world of software development, DevOps professionals stand at the forefront, orchestrating seamless workflows and optimizing processes. The DevOps role is a dynamic and collaborative journey that demands a unique blend of technical prowess and interpersonal skills. This exploration delves into the key facets that define the realities of a DevOps role, shedding light on the intricacies and significance of this pivotal position in the software development lifecycle. For individuals seeking to validate their proficiency in DevOps practices and enhance their career prospects, pursuing **[DevOps Training in Hyderabad](https://www.acte.in/devops-training-in-hyderabad)** becomes a strategic imperative. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/32pahqnprb6dzcmq1f88.png) **Key Facets of a DevOps Role:** Fostering Collaboration: DevOps thrives on collaboration, requiring professionals to work closely with development, operations, and cross-functional teams. Effective communication and teamwork are foundational to achieving a harmonious and efficient workflow. Comprehensive Ownership: DevOps practitioners assume end-to-end responsibility for the software delivery pipeline, overseeing coding, testing, deployment, and monitoring. This holistic approach facilitates quicker responses to changes and issues, leading to faster and more reliable software delivery. Empowering Automation and IaC: Automation is central to DevOps practices, enabling the streamlining of repetitive tasks for increased efficiency and reliability. Infrastructure as Code (IaC) further enhances scalability and consistency in infrastructure management, contributing to streamlined deployment processes. Mastery of CI/CD Practices: Continuous Integration (CI) and Continuous Deployment (CD) practices are integral to DevOps, emphasizing regular integration, testing, and automated deployment. 
This results in a swift and reliable delivery pipeline that catches and addresses issues early in the development process. If you’re looking to truly understand and harness the potential of DevOps, considering **[Best DevOps Online Training](https://www.acte.in/devops-online-training)** becomes pivotal. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/kafsaukstw4lxk3434xx.png) Vigilant Monitoring and Analytics: DevOps professionals prioritize monitoring and analytics to gain insights into applications and infrastructure. Utilizing specialized tools, they track performance metrics, enabling proactive issue resolution and optimization, thereby contributing to overall system health. Holistic DevSecOps Approach: Security is seamlessly integrated into DevOps through the DevSecOps approach. Collaboration with security teams ensures that security measures are embedded into every stage of the software development lifecycle, minimizing vulnerabilities and enhancing system security. Adaptability as a Cornerstone: DevOps is a continuously evolving field, demanding adaptability from its professionals. Staying updated on emerging technologies, industry best practices, and evolving methodologies is essential for success in this dynamic environment. Problem-Solving Focus: DevOps professionals excel in problem-solving, adept at troubleshooting issues, identifying root causes, and implementing solutions to enhance system reliability and performance. This proactive problem-solving approach contributes to a resilient and robust software infrastructure. On-Call Commitments: Depending on the organization, DevOps professionals may be part of an on-call rotation. This involves addressing operational issues outside regular working hours, emphasizing the commitment to maintaining system stability and availability. Continuous Learning Journey: DevOps is a continuous learning journey. 
DevOps professionals engage in ongoing skill development, participate in conferences, and connect with the broader DevOps community to stay informed about the latest trends and innovations. This commitment to learning ensures that they remain at the forefront of the rapidly evolving technology landscape. Conclusion: In conclusion, a DevOps role is a captivating journey into the heart of modern software development practices. It demands a combination of technical acumen, collaboration, adaptability, and an unwavering commitment to innovation. DevOps professionals play a pivotal role in shaping the efficient and reliable delivery of cutting-edge software solutions, driving the industry forward with their expertise and dedication.
annajade1234
1,724,386
Unleashing the Power of SEO in Lichfield: Choosing the Right SEO Company
In the digital age, establishing a strong online presence is crucial for businesses of all sizes. As...
0
2024-01-11T12:37:02
https://dev.to/nautilusmarketing/unleashing-the-power-of-seo-in-lichfield-choosing-the-right-seo-company-cn3
In the digital age, establishing a strong online presence is crucial for businesses of all sizes. As more consumers turn to the internet to discover products and services, the importance of search engine optimization (SEO) cannot be overstated. For businesses in Lichfield, a thriving city with a rich history and a growing economy, finding the right SEO company is key to unlocking the full potential of their online presence. **The Role of SEO in Business Success**: SEO is the practice of optimizing a website to improve its visibility on search engines like Google. When potential customers search for products or services related to a business, a well-optimized website is more likely to appear at the top of search results. This increased visibility not only drives more traffic to the website but also enhances credibility and trust among potential customers. **The Lichfield Advantage**: Lichfield, nestled in the heart of Staffordshire, is a city known for its historic charm and modern amenities. As local businesses aim to cater to both residents and visitors, optimizing their online presence becomes crucial. A well-executed SEO strategy can significantly impact a business's success by connecting them with their target audience in Lichfield and beyond. **Choosing the Right SEO Company in Lichfield**: Selecting the right SEO company is a critical decision that can make or break the success of an online strategy. Here are key factors to consider when choosing an [SEO company in Lichfield](https://nautilusmarketing.co.uk/lichfield-seo/): **Local Expertise**: Look for an SEO company that understands the local market in Lichfield. Local expertise ensures that the SEO strategies implemented align with the specific needs and preferences of the target audience in the area. **Proven Track Record**: Investigate the company's track record. 
A reputable SEO firm should be able to provide case studies or examples of past successes, demonstrating their ability to improve the online visibility and rankings of their clients. **Customized Strategies**: A one-size-fits-all approach does not work in SEO. The chosen company should be willing to create customized strategies based on the unique goals and challenges of each business. This may involve keyword optimization, content creation, link building, and other proven SEO techniques. **Transparency and Communication**: Clear communication is essential throughout the SEO process. Choose a company that is transparent about its methods, provides regular updates, and is responsive to your inquiries. Understanding the progress of the SEO campaign helps build trust between the business owner and the SEO professionals. **Comprehensive Services**: SEO is not a standalone service; it is part of a broader digital marketing strategy. Look for a company that offers a range of digital marketing services, including social media management, content marketing, and website development, to ensure a holistic approach to online success. **Conclusion**: In the competitive digital landscape of Lichfield, investing in a reputable SEO company is a strategic move for businesses aiming to thrive online. By carefully considering factors such as local expertise, track record, customized strategies, transparency, and comprehensive services, businesses in Lichfield can enhance their online presence and connect with their target audience effectively. As the city continues to grow and evolve, a strong online presence will play a pivotal role in the success of local businesses.
nautilusmarketing
1,724,406
Vonage Developer Newsletter - December 2023
Hi! Welcome to our December Newsletter! This month marks an exciting period with the launch of our...
0
2024-01-11T13:01:30
https://dev.to/vonagedev/vonage-developer-newsletter-december-2023-1gpj
api, vonage, tutorial, news
Hi! Welcome to our December Newsletter! This month marks an exciting period with the launch of our newest product: Conversations for Salesforce. Enjoy the latest updates and tutorials on all things Vonage Communications APIs. Get ready for an exciting 2024 with events kicking off in January. We hope this season brings you joy and success. Happy Holidays! The Vonage Developer Relations Team 💜 **ANNOUNCEMENTS** **[Conversations for Salesforce - General Availability Announcement](https://developer.vonage.com/en/blog/conversations-for-salesforce-general-availability-announcement)** Conversations for Salesforce (CSF) is now Generally Available on Salesforce AppExchange. Learn more from DK Sah, a senior product manager, who explains how CSF transforms customer interactions by enabling seamless SMS, MMS, WhatsApp, and two-way conversations directly within Sales & Service Cloud. **TUTORIALS** **[The Monad Invasion - Part 1: What's a Monad?](https://developer.vonage.com/en/blog/he-monad-invasion-part-1-whats-a-monad)** Are you exploring the complex world of Monads? Developers, fear not: Guillaume Faas demystifies this complex concept. Explore some real-life Vonage .NET SDK examples that demonstrate Category Theory. **[Integrate Vonage With Grafana to Receive Notifications by SMS](https://developer.vonage.com/en/blog/integrate-vonage-with-grafana-to-receive-notifications-by-sms)** Unlock the power of Grafana with Vonage SMS notifications. Hear from Jekayinoluwa Olabemiwo, who shows how to seamlessly integrate Vonage Messages API into your Grafana set-up using Python. This is a great read for DevOps, SREs, and admins relying on Grafana for real-time system insights. **[Build Messaging Applications Faster](https://developer.vonage.com/en/blog/build-messaging-applications-faster)** Have you heard of Vonage Cloud Runtime (VCR)? VCR is our cloud-native, serverless development platform to speed up your development. 
Read more from Michael Crump about how VCR’s pre-built solutions in Code Hub can get you up-and-running quickly. **[How to Send Google Calendar SMS Reminders with Zapier](https://developer.vonage.com/en/blog/how-to-send-sms-reminders-of-google-calendar-events-with-zapier-dr)** Struggling to keep up with Google Calendar events? Missing reminders? No worries! Benjamin Aronov shows you how to set up SMS notifications for your calendar using Vonage and Zapier — without a single line of code. **[Enhance Video Conferencing with ChatGPT: Meet Your Live AI Assistant](https://developer.vonage.com/en/blog/enhance-video-conferencing-with-chatgpt-meet-your-live-ai-assistant)** Want to build your own GenAI assistant who can answer questions, transcribe notes, and summarize video meetings all in real-time? Get the scoop from Hamza Nasir. [See More](https://developer.vonage.com/en/blog/tutorials) **EVENTS** **[iOS Conf SG](https://www.iosconf.sg/)** (Singapore, January 18-19) iOS Conf SG is the largest gathering of iOS and Apple developers in South East Asia. Vonage developer advocate Abdul Ajetunmobi will present a talk on “Touch - An Introduction to Interactive Widgets”. We hope to see you there! [See More](https://developer.vonage.com/en/community)
danielaf
1,724,860
Jam Aims to be a QA's Best Friend
(And a developer's too) Jam is a browser extension that allows you to create the perfect bug report...
0
2024-01-11T22:06:35
https://www.bradbodine.dev/posts/jam-aims-to-be-a-qas-best-friend
webdev, qa, management
(And a developer's too) Jam is a browser extension that allows you to create the perfect bug report in just one click. The goal of Jam is to make bug reporting faster for QAs, faster for developers, and overall pain- and frustration-free. Imagine how bugs are reported at a company. The bug finder, many times a QA, creates a ticket and tries to give you as much detail as they can on how they ended up seeing the bug that they found. Sometimes the details are slim, because to the reporter it looks pretty obvious that a developer would see the bug right away. But many times the developer will get a different result, because they may have a different browser or a different system; there could be many reasons that the developer isn't seeing the bug that the reporter is. Many times the developer will say, "it works fine on my end!" and close the ticket, ultimately leaving the product with an error. With Jam, in one click, you can: take a screenshot, record a video, or capture an instant replay, and Jam will instantly generate a link to share with your team. The Jam link will also include all the technical diagnostic info your engineering team needs to quickly debug, such as network requests, console logs, device information, and even network speed! It's all captured, so you no longer have to reproduce the bug or find the technical diagnostics before reporting it. Once you have your Jam link, simply paste it into a ticket or chat message to share it with engineers, or connect Jam to your issue-tracking tool of choice to create tickets right from Jam. No account is required to view, so engineers can click the link and, at a glance, see the bug and why it happened. Now the QA can capture a perfect reproduction. The developer has all the info they need to fix the bug faster, without a bunch of back and forth trying to figure out why they aren't seeing what the QA saw. {% embed https://youtu.be/omTQok6aJy0 %} Pain-free, easy, simple bug reporting. 
Jam is currently in beta and is [free to try](https://jam.dev/pricing/). Check out [Jam.dev](https://jam.dev/docs).
bradbodine-dev
1,724,901
How to Delete Files in Python
Photo by Robert Linder on Unsplash Like, lots and lots of files... I work as a web developer...
0
2024-01-17T20:00:00
https://dev.to/margaretincali/how-to-delete-files-in-python-1g7b
python, programming, beginners
<figcaption>Photo by <a href="https://unsplash.com/@rwlinder?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Robert Linder</a> on <a href="https://unsplash.com/photos/a-row-of-trash-cans-sitting-next-to-a-brick-wall-JLoE-DntHtY?utm_content=creditCopyText&utm_medium=referral&utm_source=unsplash">Unsplash</a> </figcaption> Like, ***lots*** and lots of files... --- I work as a web developer for a CRM that caters primarily to automotive dealerships. We receive thousands of customer leads each day from dozens of advertising sources. We've recently realized that we have grown to such a size that having thousands and thousands of email copies hanging out on our servers was starting to drag down on some performance times. So we decided to delete all emailed leads older than two weeks. I volunteered to complete this task twice monthly. Friends, I was doing this manually. In other words, I would highlight 5,000 or so emails in the file window at a time and hit `DELETE`. And then I would have to delete them again by emptying the recycle bin. This task would often take me an hour to complete. ![Michael Scott from 'The Office' facepalming](https://media.giphy.com/media/v1.Y2lkPTc5MGI3NjExd2pmaGM1aGt0Y25kbWlscGZmajNmMnlvZ3czYW91NWFxd3YybmoxMiZlcD12MV9pbnRlcm5hbF9naWZfYnlfaWQmY3Q9Zw/whQCarjn5Jv1Ktq2HH/giphy.gif) One day, I asked myself, "Can I write a program that can do this for me while I ... *not* do this??" Fortunately, the answer was yes. --- There are several ways to safely and quickly delete files in Python. #### 1. Using os.remove() The <a href="https://docs.python.org/3/library/os.html" target="_blank">OS module</a> contains a few methods that can help you delete a file (or many files). The `os.remove()` function permanently deletes a file from the server. ``` import os os.remove('name_of_file.txt') ``` It's always good practice to check if the file exists before trying to delete it. Otherwise, you'll run into a `FileNotFoundError` exception. 
``` import os my_file = 'name_of_file.txt' ## Make sure file exists if os.path.exists(my_file): os.remove(my_file) ``` #### 2. Using os.unlink() Similarly to `os.remove()`, `os.unlink()` removes one file permanently. Checking that the file exists before trying to delete it is important here, as well. ``` import os file_path = '/path_to_my_file' if os.path.exists(file_path): os.unlink(file_path) ``` Note that `os.unlink()` is functionally identical to `os.remove()`; the name comes from the Unix `unlink()` system call, and both work across platforms. #### 3. Deleting multiple files at once If you have more than one file to delete (i.e., a proverbial boatload), Python offers a few options. The `os.listdir()` function retrieves all files and directories within a specified directory. We can then manipulate that list and iterate over each file and delete. ``` import os dir_path = '/path_to_my_dir' for file in os.listdir(dir_path): if os.path.isfile(os.path.join(dir_path, file)): os.remove(os.path.join(dir_path, file)) ``` Python's `glob` module is a little more robust, in that it finds files that match a specified pattern. For example, to remove all .png files from a folder that contains several different types of files, we could do this: ``` import os import glob dir_path = '/path_to_my_dir' pattern_match = '*.png' ## Get a list of files that match our desired pattern file_paths = glob.glob(os.path.join(dir_path, pattern_match)) for file in file_paths: if os.path.exists(file): os.remove(file) ``` #### 4. Deleting directories You might need to delete directories, in addition to individual files. Python can help with that, too. If the directory is empty, the `os.rmdir()` function can delete it. ``` import os dir_path = '/path_to_my_dir' os.rmdir(dir_path) ``` If the directory is not empty when calling the `os.rmdir()` function, an `OSError` will be raised. So placing the call to delete a directory within a try/except block can help with this. 
``` import os dir_path = '/path_to_my_dir' try: os.rmdir(dir_path) except OSError as error: print(error) ``` If the directory you want to delete is not empty, Python's <a href="https://docs.python.org/3/library/shutil.html" target="_blank">shutil module</a> works well in this situation. The `shutil.rmtree()` function can delete an empty or non-empty directory and all of its contents. As with the `os.rmdir()` function, it's best to place calls to `shutil.rmtree()` within try/except blocks, to handle various errors. ``` import shutil dir_path = '/path_to_my_dir' try: shutil.rmtree(dir_path) except Exception as error: print(error) ``` <figcaption>With all of the above-mentioned methods, please remember that these files and folders are deleted permanently. So back up your data or move them to a different location if you're not sure whether you will need these files in the future.</figcaption> --- Python has saved me hours of time each month, now that I can delete thousands of files while I complete other tasks. Hopefully, my painstaking experience can serve as a reminder to take a close look at your daily tasks and ask yourself which ones could be automated. You might be pleasantly surprised by how much time and effort you can spend on more exciting things. ![Michael Scott from 'The Office' dancing happily](https://media.giphy.com/media/l0amJzVHIAfl7jMDos/giphy.gif) What tasks have you automated? Do you have a different preferred way to delete lots and lots of files that wasn't mentioned in this article? Let me know in the comments below!
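The methods above can be combined with file modification times to automate the kind of cleanup described in the introduction: deleting files older than two weeks. Here's one possible sketch (the function name and the `.eml` files are illustrative, not from the article):

```python
import os
import time

def delete_older_than(folder, days=14):
    """Remove regular files in *folder* not modified in the last *days* days."""
    cutoff = time.time() - days * 24 * 60 * 60
    removed = []
    for name in os.listdir(folder):
        path = os.path.join(folder, name)
        # Only delete regular files whose last modification predates the cutoff
        if os.path.isfile(path) and os.path.getmtime(path) < cutoff:
            os.remove(path)
            removed.append(name)
    return removed
```

Called as `delete_older_than('/path_to_email_leads')`, this would sweep out every lead email older than two weeks in one pass, leaving newer files untouched.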
margaretincali
1,724,918
TDD is cheaper
A nod to a recent article from Tim Ottinger with an additional conclusion
0
2024-01-11T23:47:00
https://dev.to/mbjelac/tdd-is-cheaper-3i6a
tdd
Cover image: Scene from Modern Family, Phil: "Slow is smooth & smooth is fast!" Tim Ottinger posted this article: [Do They Hate Writing Tests?](https://www.industriallogic.com/blog/do-they-hate-writing-tests/) As always, concise & to the point. I had the same experience with all 3 types of code: old legacy, new legacy & test-driven. What I found very revealing in the article is this list of new-legacy activities: 1. Write a bunch of code (being very careful, of course) 1. Edit it until it will compile. 1. Debug it until it gets the right answers. 1. Look it over for other errors and correct them. 1. Send it off to code review Here is what I think the TDD list of activities looks like: 1. Test-drive the code (no need to be careful, tests watch your back) 1. ~~Edit it until it will compile.~~ *for TDD-ers working in compiled languages, a compile fail is a failing test, so we don't have more than one line of compile failures and we fix them immediately* 1. ~~Debug it until it gets the right answers.~~ *we test-drive, so bugs are few and far between (like, maybe 1 per month or less)* 1. ~~Look it over for other errors and correct them.~~ *no need, no errors - all tests ever written are run often (faster ones more often) so nothing is broken for more than a couple of minutes* 1. Send it off to code review *... sure, we can do that (but check out [Refinement Code Review](https://martinfowler.com/bliki/RefinementCodeReview.html) by Martin Fowler)* Ah, sorry, I'll remove the superfluous items: 1. Test-drive the code 1. Send it off to code review Hmmm, this list seems smaller than the new-legacy list. 
Could it be that ~~writing code using TDD~~ designing your system driven by tests is more efficient, thus cheaper? Remember, the most expensive thing in IT is the developer hour. I've heard first-hand accounts of TDD projects being FOUR (4) times as efficient as non-TDD projects. Granted, TDD takes practice, but doesn't everything?
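As a minimal illustration of the test-first loop in the shortened list above (a hypothetical example, not from Tim's article), the test is written before the code it pins down:

```python
# Step 1: write a failing test that specifies the behavior we want.
def test_slug():
    assert slugify("TDD is cheaper") == "tdd-is-cheaper"
    assert slugify("  Slow is smooth  ") == "slow-is-smooth"

# Step 2: write just enough code to make the test pass.
def slugify(title: str) -> str:
    return "-".join(title.lower().split())

test_slug()  # green: the "debug until it gets the right answers" step never happens
```

The point is not the trivial function; it's that the test existed first, so the "edit until it compiles" and "debug until correct" activities collapse into the same tight loop.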
mbjelac
1,724,950
Machine Learning
JavaScript is a programming language that is heavily use in the tech industry. As the tech industry...
0
2024-01-12T00:42:17
https://dev.to/tiffanyman19/machine-learning-gci
machinelearning, ai, javascript, programming
JavaScript is a programming language that is heavily used in the tech industry. As the tech industry evolves, new trends emerge. One is artificial intelligence, or machine learning. Machine learning, an application of AI, is the process of using models of data to help a program or computer act without direct instructions. That means the program or computer will mimic human intelligence to the best of its ability. Human intelligence is a mental quality that consists of various skills for adapting to one's environment. From the ability to learn from experience to the ability to use that gained knowledge to manipulate one's environment — this is what we define as human intelligence. When trying to mimic human intelligence in artificial intelligence, developers must understand a variety of expressions and comprehend challenging concepts. To mimic the human mind, theoretical data models are created and built for specific purposes. These data models of the human brain focus on functions, vision, motion, sensory control, and learning. Understanding the cognitive functions of the human brain reveals the complexity of its processing capabilities. While artificial intelligence can perform a wide range of tasks like a person, it still lacks the "human factor". There are scenarios that require human involvement. The "human factor" is emotion: the ability to interpret human emotions and facial expressions. Human factors are not part of the programming of an AI-enabled machine. AI can mimic human speech, but it lacks human emotion, aka the human touch. The best a machine can do to mimic a human is to take in data models and learn from that data. One way to generate theoretical data models is for software engineers to use JavaScript, which showcases the language's capabilities in artificial intelligence development. 
One prime example that supports this is web integration and accessibility. JavaScript is integrated into various websites, where developers add AI to assist users with almost any problem or concern that may arise. One example is Flatiron! Developers at Flatiron integrate AI into the students' education. The AI is used to assist students with any problem and generate information that helps them solve it in almost record speed. While there are pros to having AI in our lives, there are also cons to having AI at our disposal. As AI's uses grow, people take advantage of its abilities for their own gain, using AI programs like ChatGPT to get answers to questions or exercises and committing plagiarism. AI has been integrated into almost every aspect of life, from sending grocery orders by voice command to asking for assistance with a homework problem. The possibilities of artificial intelligence are limitless with the power of the mind.
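As a minimal, hedged illustration of the idea that JavaScript can learn a model from data (this example is not from the article — the data and learning-rate values are invented for demonstration), here is a one-variable linear regression trained by gradient descent in plain JavaScript:

```javascript
// Minimal gradient-descent linear regression in plain JavaScript.
// The model "learns" y ≈ w*x + b from example data points (here, y = 2x + 1).
const data = [{ x: 1, y: 3 }, { x: 2, y: 5 }, { x: 3, y: 7 }];

let w = 0, b = 0;
const lr = 0.05; // learning rate — a placeholder value chosen for this toy data

for (let step = 0; step < 2000; step++) {
  let gradW = 0, gradB = 0;
  for (const { x, y } of data) {
    const err = (w * x + b) - y;          // prediction error for one point
    gradW += (2 * err * x) / data.length; // d(meanSquaredError)/dw
    gradB += (2 * err) / data.length;     // d(meanSquaredError)/db
  }
  w -= lr * gradW;
  b -= lr * gradB;
}

console.log(w.toFixed(2), b.toFixed(2)); // ≈ 2.00 1.00 — the true slope and intercept
```

The same "learn from data instead of hard-coding rules" principle scales up to the neural-network libraries available in the JavaScript ecosystem.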
tiffanyman19
1,724,985
Module Federation in Next.js 14
Introduction Next.js 14 brings to the table the innovative feature of Module Federation,...
0
2024-01-12T02:06:24
https://dev.to/lexyerresta/module-federation-in-nextjs-14-13il
webdev, javascript, react, nextjs
## Introduction Next.js 14 brings to the table the innovative feature of Module Federation, revolutionizing the way developers approach code sharing and the construction of microfrontend architectures. This article aims to dissect the concept of Module Federation within the Next.js framework, unraveling its impact on contemporary web development practices. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fomnldd50df15g26fdan.png) ## Understanding Module Federation Module Federation is a feature that fundamentally changes how front-end code is shared and managed: - **Concept and Functionality:** Module Federation allows separate front-end applications, or microfrontends, to dynamically share code and functionality at runtime. This is achieved without the need for duplicating shared modules across different parts of the application or multiple applications. - **Working in Next.js 14:** In the context of Next.js 14, Module Federation enables different Next.js applications to function cohesively, sharing components, libraries, and utilities seamlessly. This is particularly useful in scenarios where multiple teams work on different features of a large-scale application. ## Benefits of Module Federation Module Federation in Next.js 14 offers several significant advantages: - **Improved Code Reuse:** It allows for the sharing of common code across multiple projects or components, reducing redundancy and improving maintainability. - **Better Scalability:** By breaking down the front-end architecture into smaller, manageable parts, Module Federation enables the scaling of applications with greater ease and flexibility. - **Simplified Deployment Process:** Changes in shared modules or components can be deployed independently, without the need for redeploying the entire application, leading to more efficient development workflows. 
## Implementing Module Federation Practical implementation of Module Federation in Next.js 14 involves: - **Setting up Module Federation:** Configuring the Module Federation plugin in the Next.js project, defining shared modules, and specifying versioning and fallbacks. - **Real-World Use Cases:** Ideal for large-scale applications split across multiple teams, e-commerce platforms with different feature sets, or any scenario where different parts of the application can evolve independently. - **Best Practices:** Ensure version compatibility between shared modules, regularly update shared dependencies, and establish clear contracts for shared modules to avoid runtime issues. ## Conclusion Module Federation in Next.js 14 represents a leap forward in the field of web development. Its ability to streamline code sharing and facilitate the development of scalable, modular applications marks it as a crucial tool in the arsenal of modern web developers, especially for those dealing with complex application architectures. ## References For more comprehensive insights into Module Federation in Next.js 14, refer to the [official Next.js documentation](https://nextjs.org/docs) and explore a range of technical blogs and articles that delve deeper into its practical applications, challenges, and best practices.
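To make the setup step above concrete, here is a hedged sketch of a host app's `next.config.js` using the community `@module-federation/nextjs-mf` plugin (one common way to enable Module Federation in Next.js). The app names, port, and exposed paths are illustrative assumptions, not details from this article:

```javascript
// next.config.js — hypothetical host app exposing and consuming federated modules.
// Names, the port, and component paths below are placeholders for illustration.
const { NextFederationPlugin } = require('@module-federation/nextjs-mf');

module.exports = {
  webpack(config, { isServer }) {
    config.plugins.push(
      new NextFederationPlugin({
        name: 'shop',                             // this app's unique federation name
        filename: 'static/chunks/remoteEntry.js', // where the remote entry is emitted
        remotes: {
          // consume the "home" app's exposed modules at runtime
          home: `home@http://localhost:3001/_next/static/chunks/remoteEntry.js`,
        },
        exposes: {
          // share this component with other apps
          './ProductCard': './components/ProductCard',
        },
        shared: {}, // declare shared singletons (e.g. a design system) here
      })
    );
    return config;
  },
};
```

With a configuration along these lines, another Next.js app can import `home/SomeComponent` dynamically without bundling a copy of it.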
lexyerresta
206,078
dssdf
A post by isaac calderon
3,281
2019-11-15T21:00:57
https://app.clickfunnels.com/for_domain/8746876545684864645465468468468hgygu.clickfunnels.com/webinar-registration-page33596330?updated_at=f4a0ace58b63c33c9ccdf1a153d3a210v2&track=0&preview=true
icm654987
1,724,996
How to insert a countdown screen in the video that is the same length as the video?
Step 1: Open the software and click where the arrow points to import the video clip. Step 2: Click...
0
2024-01-12T03:17:44
https://dev.to/shenmuxingjing/how-to-insert-a-countdown-screen-in-the-video-that-is-the-same-length-as-the-video-2nni
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/s4iu1b10q40esjk1nc3l.PNG) Step 1: Open the software and click where the arrow points to import the video clip. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lo4qw2xwzen5w5lyxfai.PNG) Step 2: Click the counter option pointed by the arrow and then click OK in the pop-up window. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dhpn1m5ro4tcfzp4gun3.PNG) Step 3: Insert a counter anywhere in your imported video window. In the process, you can freely adjust the font, size, color and other properties of the counter. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ox9o1hs8htd74eir34b.PNG) Step 4: With the counter selected, right-click on Properties. In the pop-up window, find the Counter Settings item and then toggle the Reverse Playback option to Yes. After finishing the settings, just click Play. In this way, you have successfully inserted a countdown timer in your video that is consistent with the duration of the video. Link: https://pan.quark.cn/s/bd3a93291d4c Extract code: Rwtg
shenmuxingjing
1,725,110
Exploring Next.js Plugins and Middleware
Introduction Next.js, a popular React framework, is known for its flexibility and...
0
2024-01-12T06:10:23
https://dev.to/lexyerresta/exploring-nextjs-plugins-and-middleware-4am3
webdev, javascript, react, nextjs
## Introduction Next.js, a popular React framework, is known for its flexibility and extensibility, partly due to its rich ecosystem of plugins and middleware. This article aims to explore these enhancements, highlighting how they can significantly extend the functionality and efficiency of Next.js applications. By understanding and utilizing these tools, developers can tailor their Next.js projects to meet diverse and complex requirements. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tsczx9tzhx340a5qd6sv.png) ## Overview of Next.js Plugins Plugins in Next.js serve as powerful extensions: - **Enhancing Functionality:** Plugins can add new features to Next.js applications or modify existing behaviors, providing additional capabilities beyond the framework's core offerings. - **Ease of Integration:** Plugins in Next.js can be integrated with minimal configuration, enabling developers to quickly and efficiently add new functionalities to their projects. ## Understanding Middleware in Next.js Middleware in Next.js plays a crucial role in enhancing application behavior: - **Server-Side Logic and Routing Enhancements:** Middleware allows for the execution of custom server-side logic before a request is completed. This can include modifying requests, handling authentication, or performing redirects. - **Customization of Application Flow:** Middleware offers a way to customize the flow of requests in the application, enabling more control over how responses are generated and delivered. ## Popular Plugins and Middleware for Next.js Some popular plugins and middleware include: - **SEO Optimization Plugins:** Enhancing search engine visibility with improved meta tags and structured data. - **Performance Monitoring Tools:** Middleware that helps in monitoring and analyzing the performance of Next.js applications. - **Authentication Middleware:** Simplifying the implementation of authentication mechanisms across the application. 
## Implementing Plugins and Middleware To implement these tools in Next.js: - **Installation and Configuration:** Most plugins and middleware can be easily installed via npm or yarn and configured within the Next.js project settings. - **Practical Examples:** Include examples of integrating an SEO optimization plugin or setting up middleware for user authentication. - **Best Practices:** Recommendations on structuring your code and organizing middleware for optimal performance and maintainability. ## Case Studies: Successful Use of Plugins and Middleware Real-world applications demonstrate the effectiveness of these tools: - **E-commerce Platforms:** Utilizing SEO and performance plugins to enhance user experience and search engine rankings. - **Corporate Websites:** Implementing custom middleware for handling user authentication and data security. ## Conclusion Plugins and middleware are essential components in the Next.js ecosystem, offering developers the tools to build more robust, efficient, and tailored applications. Their versatility and power are key to unlocking the full potential of Next.js in diverse web development projects. ## References For comprehensive information, the [official Next.js documentation](https://nextjs.org/docs) is a valuable resource. Additionally, exploring community-driven resources and plugin libraries can provide practical insights into the effective use of these tools in Next.js.
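To make the middleware example concrete, here is a minimal sketch of a Next.js `middleware.ts` that redirects unauthenticated users — the cookie name and protected route are assumptions for illustration, not prescriptions from this article:

```typescript
// middleware.ts — runs before matched requests are completed.
// The "session" cookie name and /dashboard route are illustrative placeholders.
import { NextResponse } from 'next/server';
import type { NextRequest } from 'next/server';

export function middleware(request: NextRequest) {
  const session = request.cookies.get('session');
  if (!session) {
    // Not authenticated: redirect to the login page.
    return NextResponse.redirect(new URL('/login', request.url));
  }
  return NextResponse.next(); // continue to the requested page
}

// Only run this middleware for dashboard routes.
export const config = {
  matcher: ['/dashboard/:path*'],
};
```

Because this logic runs before the request completes, the redirect happens without rendering the protected page at all.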
lexyerresta
1,725,129
What Are Top Benefits of Automating Project Management Processes?
Time is of the essence in project management. No wonder project managers are constantly looking for...
0
2024-01-12T06:36:11
https://dev.to/rafikke_lion/what-are-top-benefits-of-automating-project-management-processes-2c8
automation
Time is of the essence in project management. No wonder project managers are constantly looking for ways to optimize their workflow and increase productivity. One solution that has gained popularity in recent years is the use of [automation through project management tools](https://monday.com/blog/project-management/automation-software/?utm_source=devto&utm_medium=organic&utm_campaign=top_benefits_of_automating_project_management_processes&utm_term=queueVTA_link_1). By automating repetitive tasks, project managers can free up valuable time and focus on more important work. **One of the key benefits of using automation is its ability to increase repeatability and predictability.** Machines follow predefined processes, reducing human error and leading to consistent results. This improves efficiency and ensures that projects stay on track and deadlines are met. **I will illustrate my point with an example. I have been a project manager long enough to remember the time when offices used to be surrounded by whiteboards and Post-it notes.** I would try to keep track of so many conversations and tasks at a time that it was quite stressful to communicate with multiple team members. You can picture me holding a phone in one hand and a notepad in the other amid the chaos and effort involved in manual communication. Now thanks to automation, I am at my desk with a laptop open, displaying the [monday dev](https://monday.com/dev/?utm_source=devto&utm_medium=organic&utm_campaign=top_benefits_of_automating_project_management_processes&utm_term=queueVTA_link_2) interface (software developed by monday.com for dev teams). **All I need to do is set up an automated workflow with a trigger.** For instance, "When a task status changes to 'Complete', notify the team member responsible." ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/tfgq8d81526eansgg4as.png) This workflow will run indefinitely and without fail unless I disable it. 
No room for any miscommunication. Amazing, right? **Things like this free up so much mental bandwidth, because you know the things that need to be taken care of are being taken care of**. You don't have to check in on them. You don't have to sync on things to make sure all the tasks were done. You don't have to feel like you're micromanaging employees. Things just happen and it's amazing. **I can't emphasise enough how incredible it is to just be able to tell AI the kind of automation you want to create and see it being created in front of your eyes.** By streamlining projects through automation, teams can produce more output without adding additional resources. Automation allows for better resource allocation and optimization, resulting in increased productivity. **But when should you consider implementing automation?** I have already given you an example of how automation can transform communication and collaboration. There are two other key areas where automations can be particularly beneficial: - **Monitoring key project aspects:** Automation can be used to monitor critical paths, deadlines, overloaded resources, and delays. By automatically tracking these factors, project managers can quickly identify potential issues or bottlenecks before they become major problems. - **Automating manual tasks:** If there are tasks within your project management process that are manual, time-consuming, well-documented, and consistently repeated – they could likely benefit from automation. These tasks often include work intake process flows or creating new projects and tasks. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5lxt598k2hqzzv2r1ntc.png) If you automate processes in these areas, you will be able to reduce mundane administrative tasks, which will free you up for more critical work. 
We also use [monday dev integration](https://monday.com/integrations/?utm_source=devto&utm_medium=organic&utm_campaign=top_benefits_of_automating_project_management_processes&utm_term=queueVTA_link_3) with various other apps such as Gmail, MS Teams, Slack, Zoom, and GitHub for seamless information flow between tools. Additionally, automation maintains consistent quality throughout the project lifecycle, minimizing errors due to human oversight or fatigue, and ensuring precision and accuracy in execution. **Below are some specific automated workflow processes I have set up using [monday dev](https://monday.com/dev/?utm_source=devto&utm_medium=organic&utm_campaign=top_benefits_of_automating_project_management_processes&utm_term=queueVTA_link_4):** - **Status change notifications**: Automate notifications when the status of a task changes. For instance, when a bug is marked as resolved, relevant team members or stakeholders can be automatically notified. - **Trigger-based task management:** Set up triggers based on specific criteria, such as "When a task is marked as complete, move it to the 'Done' column." This helps in efficient project tracking without manual intervention. - **Time-based reminders:** Automations can remind teams of approaching deadlines or trigger specific actions based on time, such as sending a reminder a day before a task is due. - **Dependency management:** Automatically update or notify team members when dependent tasks are completed. This ensures seamless progress in projects with interconnected tasks. - **Automated reporting:** Generate and distribute reports automatically at set intervals, providing stakeholders with regular updates on project progress. - **Resource allocation:** Automate the assignment of new tasks based on predefined conditions, such as workload balance or expertise area. 
- **Quality assurance checks:** Integrate automated quality checks that trigger when a piece of code is committed or updated, ensuring immediate feedback on the code's integrity. - **Cross-board automation:** For projects spanning multiple boards or teams, automate the movement or synchronization of tasks across these boards to maintain consistency. - **Automated documentation:** Trigger the creation or updating of documentation when certain milestones are reached in the development process. - **Integration with development tools:** Link your project management tool with code repositories or CI/CD pipelines to automate aspects like branch creation, pull request management, or deployment processes. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e6fpxkts5950u0ibftmy.png) In summary, **incorporating automation into your project management software offers numerous benefits for both individual users and organizations as a whole.** It frees up valuable time by handling repetitive tasks, increases repeatability through predefined processes, helps achieve more output with fewer resources, and monitors key aspects of projects efficiently. If you're just starting out with automation in project management, it's best to begin by automating basic tasks before gradually expanding its implementation across your projects. Start small but think big – even automating a few simple tasks can have a significant impact on productivity and efficiency. **What would be the tasks that you would automate if you could that will give you back hours each week? Let me know in the comments.**
rafikke_lion
1,725,333
Hello, dev.to Community! 🚀
Hey everyone! I just joined Dev.to. This post is to say hello to everyone. I'm here writing under...
0
2024-01-12T10:34:04
https://dev.to/1geek/hello-devto-community-9j
welcome, welcomenote
Hey **everyone**! I just joined Dev.to. This post is to say hello to everyone. I'm here writing under the name @1geek . I am a developer, excited about AI, development, game development, and tech. 🌐 Let's build something great together! Share your insights, ask questions, or just drop a comment to say hello. I can't wait to connect with all the amazing developers out there!
1geek
1,725,368
🌟 5 secret TypeScript repos the top 1% of devs LOVE 🔥
Hi there 👋 For this week's analysis, we found 5 TypeScript repos adored by the top 1% of...
0
2024-01-12T11:25:07
https://dev.to/quira/5-secret-typescript-repos-the-top-1-of-devs-love-38eh
typescript, javascript, webdev, programming
Hi there 👋 For this week's analysis, we found **5 TypeScript repos adored by the top 1% of developers**. Ready to check them out? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dydkv17feuj2opejj1yg.gif) --- # How do we identify the "top 1%" of devs? 🔎 At Quira, we rank developers based on their **[DevRank](https://docs.quira.sh/for-developers/devrank)**. In simple terms, DevRank uses [Google’s PageRank algorithm](https://en.wikipedia.org/wiki/PageRank) to measure **how important a developer is in open source based on their contributions to open source repos.** After finding the repos that the top 1% have starred, we calculate the likelihood that these top devs will star a repo compared to the likelihood that the bottom 50% won’t. 📊👇 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/uf6vtxtx9dtdeh13kvgn.png) _Note: We recognise that our [ranking method](https://docs.quira.sh/for-developers/devrank) is not yet perfect, so we keep improving our model. We welcome any feedback on this._ 🙏 --- The below repos will be particularly useful when you want to build your own projects. If you want to build stuff, have fun and make money from it, the latest _Creator Quest_ challenges you to build developer tools using GenAI (**the current prize pool is $2048🤫)**. To participate, sign up to [Quira](https://quira.sh/?utm_source=devto&utm_campaign=typescript_repos1). 
[![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/akiuhk62zctvf3b9gilx.png)](https://quira.sh/?utm_source=devto&utm_campaign=typescript_repos1) Now that we've explored our methodology let's dive into 5 fantastic TypeScript repositories that can take your work to the next level :rocket: --- # 🧩 amilajack/eslint-plugin-compat **A tool to check your code's browser compatibility** [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6mga2x25advq58izvq2o.gif)](https://github.com/amilajack/eslint-plugin-compat) **Why should you care?** Eslint-plugin-compat ensures that your code is compatible with your target browsers. This tool examines your JavaScript code to flag features that may not work in the browser environment. It is useful for avoiding browser-specific issues and providing a consistent user experience across different websites. **Set up**: `npm install eslint-plugin-compat` **Example Use Case:** ``` # 1. Update ESLint Config in .eslintrc.json: { "plugins": ["compat"], "extends": ["plugin:compat/recommended"], "env": { "browser": true } // ... } # 2. Configure Target Browsers in your package.json: { // ... "browserslist": ["defaults"] } ``` [https://github.com/amilajack/eslint-plugin-compat](https://github.com/amilajack/eslint-plugin-compat) --- # 🔦 g-plane/typed-query-selector **Better typed `querySelector` and `querySelectorAll`** [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wg266k949gz2oggcaypn.png)](https://github.com/g-plane/typed-query-selector) **Why should you care?** Typed-query-selector improves the standard querySelector and querySelectorAll functions by providing better typing using TypeScript's template literal types. This means you'll get much more precision for DOM elements, making your TypeScript code safer and easier to use; especially when dealing with complex selectors or actions directly with DOM elements in type-safe mode. 
**Set up**: `npm i -D typed-query-selector` **Example use case**: ```typescript import 'typed-query-selector' document.querySelector('div#app') // ==> HTMLDivElement document.querySelector('div#app > form#login') // ==> HTMLFormElement document.querySelectorAll('span.badge') // ==> NodeListOf<HTMLSpanElement> anElement.querySelector('button#submit') // ==> HTMLButtonElement ``` [https://github.com/g-plane/typed-query-selector](https://github.com/g-plane/typed-query-selector) --- # 🔗 jeffijoe/typesync **Installs your dependencies' missing TypeScript typings** [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zpiivuimebuuv1jv0tw1.gif)](https://github.com/jeffijoe/typesync) **Why should you care?** TypeSync automatically installs TypeScript type definitions for all dependencies in the project. The tool scans your package.json file and automatically adds the appropriate _@types/package_, saving you the trouble of adding them manually. It really saves time and ensures that your project's type-checking is correct and compatible with your dependencies. **Set up**: `npm install -g typesync` **Example use case**: ``` typesync [path/to/package.json] [--dry] # Path is relative to the current working directory. If omitted, defaults to package.json. # If --dry is specified, will not actually write to the file, it only prints added/removed typings. ``` [https://github.com/jeffijoe/typesync](https://github.com/jeffijoe/typesync) --- # 👯‍♀️ scinos/yarn-deduplicate **Deduplicate your yarn.lock files** [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/y0pgqpmokt9bt2bg00qy.png)](https://github.com/scinos/yarn-deduplicate) **Why should you care?** It helps clean up project dependencies. It makes your project lighter and potentially faster by removing duplicate packages from your _yarn.lock file_. This tool is handy if you're using Yarn v1, as it doesn't support native package deduplication like Yarn v2 does. 
**Set up**: `npm install -g yarn-deduplicate` OR `yarn global add yarn-deduplicate` **Example use case**: Simply run ```typescript yarn-deduplicate yarn.lock ``` [https://github.com/scinos/yarn-deduplicate](https://github.com/scinos/yarn-deduplicate) --- # ⭕️ discord/focus-rings **Helping you display focus indicators anywhere on a webpage.** [![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/pxzmk3a3168apq7zq9x5.gif)](https://github.com/discord/focus-rings) **Why should you care?** Focus indicators are visual cues that highlight which element on a webpage is currently selected. React-focus-rings is a tool for creating a consistent and good-looking visual focus in web applications. This makes it easy to use focus rings to ensure your website is efficient and accessible to all users, including keyboard navigation users. **Set up**: `npm i react-focus-rings` **Example use case**: ```typescript import * as React from "react"; import ReactDOM from "react-dom"; import { FocusRing, FocusRingScope } from "react-focus-rings"; import "react-focus-rings/src/styles.css"; function App() { const containerRef = React.useRef<HTMLDivElement>(null); return ( <div className="app-container" ref={containerRef}> <FocusRingScope containerRef={containerRef}> <div className="content"> <p>Here's a paragraph with some text.</p> <FocusRing offset={-2}> <button onClick={console.log}>Click Me</button> </FocusRing> <p>Here's another paragraph with more text.</p> </div> </FocusRingScope> </div> ); } ReactDOM.render(<App />, document.body); ``` [https://github.com/discord/focus-rings](https://github.com/discord/focus-rings) --- **I hope these discoveries are of value to you and will help you build a more robust Typescript toolkit! ⚒️** If you want to leverage these tools to build cool projects and earn rewards, log into [Quira](https://quira.sh/?utm_source=devto&utm_campaign=typescript_repos1) and discover Quests! 
💰 It's time to code, have fun and bag some awesome rewards. 🤘 ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/exfce2dvwogd24ylsjhf.gif) PS: **Please consider supporting these projects by starring them. ⭐️** We are not affiliated with them. We just think that great projects deserve great recognition. See you next week, Your Dev.to buddy 💚 Bap --- If you want to join the self-proclaimed "coolest" server in open source 😝, you should join our [discord server](https://discord.com/invite/ChAuP3SC5H/?utm_source=devto&utm_campaign=typescript_repos1). We are here to help you on your journey in open source. 🫶 {% embed [https://dev.to/quira](https://dev.to/quira) %}
fernandezbaptiste
1,725,386
Easy Guide to Creating Smart Chatbots with Langchain & GPT-4
Introduction Langchain is a dynamic Python library revolutionizing natural language...
0
2024-01-12T12:17:50
https://dev.to/zanepearton/easy-guide-to-creating-smart-chatbots-with-langchain-gpt-4-i5c
webdev, tutorial, ai, python
### Introduction Langchain is a dynamic Python library revolutionizing natural language processing, text embedding, document indexing, and information retrieval. Seamlessly integrated with OpenAI's GPT-4, it provides developers with a powerful toolkit. This guide explores Langchain's conversational retrieval model implementation using GPT-4, emphasizing its features, setup, and practical usage. ### What is Langchain? LangChain is a Python library designed for natural language processing (NLP), text embedding, document indexing, and information retrieval. It’s particularly notable for its ability to integrate with advanced language models like OpenAI’s GPT-4, allowing developers to leverage these models for a variety of tasks. - Access Langchain's repository at [Langchain's Repository](https://github.com/langchain-ai/langchain). ### Key Features Langchain is distinguished by its versatile functionalities: - **Data Loading**: Effortless loading from directories or text files for diverse data sourcing. - **Indexing Text Documents**: Enables quick and efficient text document indexing. - **Persistent Storage**: Enhances repetitive query performance with persistent data storage. - **GPT-4 Integration**: Elevates natural language understanding with seamless GPT-4 integration. - **Conversational Interface**: User-friendly, console-based chat interface for interactive communication. ### Exploring Langchain & GPT-4 API - Example I created on GitHub: [Langchaingpt Example](https://github.com/ZanePearton/langchaingpt). ### Let's Break Down the Code I have created the following Python script; let's break it down. #### System Requirements and Installation - Requires Python 3.7+, OpenAI Python SDK, langchain library, and Constants library. 
- Install with ease by cloning the repository and installing required packages: ```bash git clone https://github.com/ZanePearton/langchaingpt.git cd Langchaingpt pip install langchain pip install constants ``` #### File Structure Understanding the file structure is vital: ``` Langchaingpt/ ├── data/ # Text data files ├── main.py # Main application script └── constants.py # Stores the OpenAI API Key ``` #### Usage and Application `main.py` reads and indexes text documents, offering a console-based chat interface. It uses OpenAI's GPT-4 for interactive user query responses. Start with an optional query or interact during the conversation. Exit with 'quit', 'q', or 'exit'. ```bash python main.py "query data" # or python main.py ``` #### Essential Libraries Key libraries include `openai` and `langchain` modules like `ConversationalRetrievalChain`, `RetrievalQA`, and `ChatOpenAI`. ```python import openai from langchain.chains import ConversationalRetrievalChain, RetrievalQA from langchain.chat_models import ChatOpenAI ``` #### Configuration and Setup Set the OpenAI API key in `constants.py` and check for command-line arguments to set initial queries. ```python import os import sys import constants os.environ["OPENAI_API_KEY"] = constants.APIKEY query = sys.argv[1] if len(sys.argv) > 1 else None ``` #### Indexing and Persistence Create and reuse indexes for efficiency. Use the `PERSIST` flag to control data persistence. ```python PERSIST = False # Code for reusing or creating new indexes ``` #### Conversational Retrieval Chain Initialization Initialize `ConversationalRetrievalChain` with the GPT-4 model for efficient information retrieval during conversations. ```python chain = ConversationalRetrievalChain.from_llm( llm=ChatOpenAI(model="gpt-4"), # Additional parameters ) ``` #### Interactive Conversational Loop Engage users with an interactive loop, prompting for queries and generating responses using `ConversationalRetrievalChain`. 
```python chat_history = [] while True: query = input("Prompt: ") # Query processing and response generation ``` #### Exit Strategy Implement a user-friendly exit strategy with commands like 'quit', 'q', or 'exit'. ```python if query in ['quit', 'q', 'exit']: sys.exit() ``` #### Setting Up the OpenAI API Key Ensure the OpenAI API Key is correctly set in `constants.py` for the application to function. ```python APIKEY = "your-openai-api-key" ``` Replace 'your-openai-api-key' with your actual key, keeping it secure and confidential. #### Advantages of Langchain Langchain, integrated with GPT-4, offers: 1. **Enhanced Data Processing**: Efficiently processes and indexes large text data volumes. 2. **Scalability**: Adaptable to various project sizes. 3. **Ease of Use**: User-friendly interface for broader accessibility. 4. **Customizability**: Flexibility for specific project needs. #### Applications Langchain's capabilities extend to: - Automated customer support. - Data analysis and research tools. - Interactive educational platforms. - Content management systems with efficient data retrieval. #### Conclusion Langchain, combined with OpenAI's GPT-4, marks a significant advancement in conversational AI and data retrieval. Its user-friendly nature and robust features make it an invaluable tool for developers crafting sophisticated conversational models and data processing applications.😎
zanepearton
1,725,585
Dissolving with Dignity: Compassionate and Strategic Divorce Guidance in New Jersey
In the challenging landscape of divorce, finding a supportive and strategic guide is essential for...
0
2024-01-12T15:14:20
https://dev.to/morrisoelliott/dissolving-with-dignity-compassionate-and-strategic-divorce-guidance-in-new-jersey-3j7j
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a103ymzcoherim6cs6cn.png)

In the challenging landscape of divorce, finding a supportive and strategic guide is essential for navigating the complex legal terrain. If you're contemplating divorce in New Jersey, you're not alone in seeking compassionate and professional assistance during this difficult time. In this article, we explore the crucial aspects of divorce in New Jersey, the role of divorce lawyers, and the importance of a dignified dissolution process.

**Understanding Divorce in New Jersey**

Divorce is a significant life event that requires careful consideration, especially in a state like New Jersey. The Garden State has its own set of laws and regulations governing divorce proceedings, making it imperative to understand the specifics of the process. From asset division to child custody arrangements, being well-versed in New Jersey's divorce laws is crucial for a smooth dissolution.

**The Role of Divorce Lawyers**

In the realm of divorce, legal guidance is paramount. Knowledgeable and experienced divorce lawyers play a pivotal role in ensuring that your rights are protected and that the divorce proceedings adhere to New Jersey's legal framework. Whether it's negotiating a fair settlement, handling complex paperwork, or representing you in court, a skilled NJ divorce lawyer is an invaluable asset during this challenging time.

**Compassionate Support Throughout**

Divorce is more than a legal process; it's a deeply emotional journey. Recognizing the need for compassionate support, reputable divorce lawyers in New Jersey prioritize empathy and understanding. They guide clients through the emotional intricacies of divorce while providing strategic legal counsel to achieve the best possible outcomes.

**Strategic Divorce Guidance**

Strategic planning is key to a successful divorce, and this is where a seasoned NJ divorce lawyer comes into play. From assessing the financial implications to formulating a comprehensive legal strategy, your lawyer should be adept at navigating the complexities of New Jersey's divorce laws. This strategic approach ensures a fair and equitable resolution tailored to your unique circumstances.

**Conclusion**

Dissolving with dignity is not just a catchphrase but a goal that can be achieved with the right guidance. If you're contemplating [divorce New Jersey](https://srislawyer.com/divorce-in-new-jersey), enlist the support of compassionate and strategic divorce lawyers who understand the nuances of divorce in the Garden State. By prioritizing empathy and employing a strategic approach, you can navigate the divorce process with dignity and emerge with a favorable resolution. Remember, you don't have to face this journey alone; seek the guidance you need to move forward confidently.
morrisoelliott
1,725,817
Stop blaming and start solving
Stop blaming and start solving... Blaming yourself, blaming others, and justifying every mistake at length...
0
2024-01-12T18:48:04
https://dev.to/lincolixavier/para-de-culpar-e-comece-a-resolver-bgm
career, softskills, discuss, softwareengineering
_Stop blaming and start solving..._ _Blaming yourself, blaming others, and justifying every mistake at length is a sure-fire shot at failure, both for your career and for your current team._

Problem solving is an essential skill for standing out at work. A successful professional doesn't run from challenges, but looks for ways to overcome them. Developing the ability to solve problems is fundamental to adapting to the constant transformations of the world of work.

**The mental shift as the first step**

To apply problem solving effectively, a mental shift is needed. You must see obstacles as temporary and adopt an optimistic outlook. The ability to solve problems is directly linked to developing soft skills, the behavioral abilities that are the foundation for dealing with daily challenges.

**The relationship between problem solving and emotional intelligence**

Developing the ability to solve problems without developing emotional intelligence can be harmful. The discipline and optimism needed to overcome workplace challenges come from that skill. Complaining and avoiding problems shows a lack of readiness to deal with adversity, which can negatively impact professional growth.

Often the biggest mistake when dealing with problems is trying to find an immediate solution without analyzing the problem's origin. Critical thinking is essential in this process. You need to question and investigate the root cause of the challenge, ask the right questions, and dig into its origins. Only after understanding the problem in full should you look for solutions.

**Tools to help with problem solving**

There are several tools that can help with problem solving in the workplace. Brainstorming, for example, is a technique in which participants put forward many ideas to solve a challenge. The Ishikawa Diagram is another useful tool for finding a problem's root cause, through questions that guide the analysis. The Design Sprint, created by Google, is a method for testing and applying new ideas.

**Stimulating the ability to solve problems**

Staying motivated in the face of challenges is essential for developing the ability to solve problems. It is important to remember that problem solving demands resilience and time. A questioning mindset is fundamental for driving the search for solutions. You have to live with problems rather than run from them, because that is how you grow and learn.

**The biggest mistakes in problem solving**

Anxiety is one of the biggest mistakes you can make when dealing with problems. Trying to solve a challenge quickly and superficially can undermine critical thinking and lead to inadequate solutions. In addition, the fear of proposing ideas and a lack of initiative are attitudes that can block professional growth. It's easy to say "don't be anxious"; sometimes the pressure is very high in the face of certain problems, especially in the context of software development, but the effort to cultivate the right attitude is still worthwhile.

Recognizing that problems exist and paying attention to detail are the first steps toward developing this mindset. Asking questions, analyzing the problem from different perspectives, seeking support, and implementing solutions are practices that strengthen the ability to solve problems.

Stop blaming yourself or making excuses for every slip or stumble, and it will make all the difference in your day-to-day; your team will notice it too. The much-touted "proactivity" is not about working yourself to death and showing off to your superiors; being more efficient at work will eventually make you work LESS.

_**Stop blaming and start solving**_

Thanks for reading this far \o/

✨ Check out the Nomadz community ✨
👉🏻 https://www.patreon.com/nomadz/membership

Want to talk to me? I'm here:
https://instagram.com/lincoli.xavier
https://www.tiktok.com/@lincoli.xavier
https://twitter.com/lincolixavier
https://www.lincolixavier.com/
lincolixavier
1,725,889
virapanel
I want to introduce a site in the field of sandwich panels: ceiling sandwich panels, wall sandwich panels, sandwich panel...
0
2024-01-12T21:10:22
https://dev.to/virapanel/virapanel-h64
I want to introduce a site in the field of sandwich panels: [Ceiling sandwich panel (ساندویچ پانل سقفی)](https://virapanel.com/sandwich-panel-ceiling/) [Wall sandwich panel (ساندویچ پانل دیواری)](https://virapanel.com/what-is-a-wall-panel-sandwich/) [Sandwich panel price (قیمت ساندویچ پانل)](https://virapanel.com/sandwich-panel-price/) [Sandwich panel (ساندویچ پانل)](https://virapanel.com/)
virapanel
1,725,922
What am I?
What am I I am a researcher at heart. Nobody pays me to do research, though. Yet I don’t...
0
2024-01-13T01:27:54
https://siran.github.io/writing/2023/12/28/what-am-i.html
---
title: What am I?
published: true
date: 2023-12-28 00:00:00 UTC
tags:
canonical_url: https://siran.github.io/writing/2023/12/28/what-am-i.html
---

# What am I

I am a researcher at heart. Nobody pays me to do research, though. Yet I don’t think I want to do anything else. Even improvising music is a type of inner research where I am not looking for any answers and yet I am exploring my being. I’ll just continue for now and see where these ideas I’ve been having lately all lead to.

The ideas are based on an experiment I did with two “colleagues” of mine, one an ex-business partner of my father, the other one of my college teachers. The experiment is [published on ResearchGate](https://www.researchgate.net/publication/369529273_Daily_variations_of_the_amplitude_of_the_fringe_shifts_observed_when_an_air-glass_Mach-Zehnder_type_interferometer_is_rotated_-_a_preliminary_report#fullTextFileContent). If we did everything right, it should right the wrong of the conclusions Michelson and Morley drew from their famous experiment. As Wikipedia says, and more importantly as stands to reason: «…of this experiment, Albert Einstein wrote, “If the Michelson–Morley experiment had not brought us into serious embarrassment, no one would have regarded the relativity theory as a (halfway) redemption.”» [^1]

More details about this experiment in the near future. People also ask me a lot about the implications of the results; I intend to write more about this in the near future as well.

For now, I am thinking of ways to replicate the results with a completely different technique: if one points a laser toward the North, it takes about `3.3ns` for light to propagate `1m`; since the Earth translates around the Sun, a simple calculation predicts a measurable displacement of about `0.1mm`.

![spreadsheet with numbers](https://i.imgur.com/flZrgff.png)_Spreadsheet showing expected displacement of Earth during 3ns_

Do you spot any wrong assumptions?

[^1]: Albrecht Fölsing (1998). Albert Einstein: A Biography. Penguin Group. ISBN 0-14-023719-4. (https://en.wikipedia.org/wiki/Michelson%E2%80%93Morley_experiment#cite_note-Folsing-4)
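The back-of-envelope numbers in the calculation above can be checked in a few lines (using round textbook values of c ≈ 3×10⁸ m/s and an Earth orbital speed of ≈ 30 km/s; the author's spreadsheet may use slightly different constants):

```python
C = 3.0e8        # speed of light, m/s
V_ORBIT = 3.0e4  # Earth's orbital speed around the Sun, m/s (~30 km/s)

t = 1.0 / C                # time for light to cross 1 m: ~3.33 ns
displacement = V_ORBIT * t # how far Earth moves in that time: ~0.1 mm

print(t * 1e9)             # ≈ 3.33 (nanoseconds)
print(displacement * 1e3)  # ≈ 0.1 (millimetres)
```

So light takes roughly 3.3 ns (not 0.3 ns) to cross one metre, and in that time the Earth's orbital motion carries it about a tenth of a millimetre, matching the `0.1mm` figure in the spreadsheet.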
anrodriguez
1,725,991
Categorizing Tasks into Five Groups (Bite-size Article)
Introduction I've previously written articles on task management, but as we step into the...
0
2024-01-12T23:28:23
https://dev.to/koshirok096/categorizing-tasks-into-five-groups-bite-size-article-26o3
productivity
# Introduction

I've [previously written articles](https://dev.to/koshirok096/how-i-make-daily-task-list-with-logseq-126p) on task management, but as we step into the new year, I'm revisiting my approach to make it more productive and comfortable. While it's not set in stone yet, at this moment, I'm considering refining my task management using Notion's task lists. Specifically, the plan involves creating a Task List database in Notion to schedule and manage my to-dos effectively. In this short article, I'll be discussing this aspect of task management.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/c14wdo7w3mlawpd658n6.png)

# Categorizing Tasks into Five Groups

In my case, tasks can be categorized into the following five types:

## Project Tasks

Tasks that resemble projects, requiring multiple actions to complete. For instance, delivering a Landing Page to a client involves actions like Meeting, Design, Coding, and Delivery. The "Project Tasks" category represents the overarching goal, such as "Create and deliver a Landing Page to the client", serving as the task title.

## Sub Tasks

Sub Tasks consist of the child tasks that make up the above-mentioned "Project Tasks". Notion has its own function called **Sub-item**, which can also be used to organise them.

## Single Tasks

Single Tasks are standalone actions that do not fall under mid to long-term projects. Examples include "Go grocery shopping", "Do laundry", or "Attend a class reunion", etc.

## Learning

This category is reserved for tasks related to learning. Whether it's a short-term study session or a long-term learning plan, the execution and management methods align with regular tasks. While you can consolidate learning tasks under Single Tasks, I've opted for a separate category to emphasize the non-urgent nature of learning and the potential differentiation from daily tasks.

## Routine

This includes tasks like weekly meetings or routine household chores. With Notion's recent implementation of **the Repeat function** on databases, managing recurring tasks has become more convenient.

---

By templating these five categories in the Task List database and utilizing **the Select property** in the db for each, you can distinguish and efficiently register tasks in the database.

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h6mmnnmp45xl4wo9910q.png)

# View

How tasks are visualized is crucial, and Notion offers a feature called **View** to customize the appearance of your database. I'll be utilizing this feature to create a Task List that suits my preferences. Below are the views I've come up with:

## Daily List

Set as a Table View with a filter to display tasks with the scheduled date set to "**Today**". This view allows me to see a list of tasks for the current day.

## Weekly List

Also in Table View, with a filter showing tasks scheduled for "**This Week**". This view is handy for an overview of tasks for the week.

## Weekly and Monthly Cal

In Calendar View, displaying tasks scheduled for either a weekly or monthly range.

## Project Timeline

Using the Timeline View and filtering to show only "Project Tasks". This view provides an **overview of the mid to long-term projects I'm working on**.

## All Tasks

Set as a Table View without any filters, displaying all tasks in the list.

# Conclusion

While still in the trial-and-error phase, I've summarized the current operational approach for task management. I'm considering adopting this strategy for task management throughout the year. Previously, I wrote an [article](https://dev.to/koshirok096/how-i-make-daily-task-list-with-logseq-126p) about daily lists in Logseq, but I'm contemplating a different usage approach. If the opportunity arises, I'll be sure to introduce it in a blog post. Thank you for reading!
koshirok096
1,726,001
Diving Into Python's Lambda Functions
Introduction Lambda functions, also known as anonymous functions, are a powerful tool in...
0
2024-01-13T00:30:48
https://dev.to/kartikmehta8/diving-into-pythons-lambda-functions-2fp2
python, beginners, programming, tutorial
## Introduction

Lambda functions, also known as anonymous functions, are a powerful tool in Python for creating small, one-line functions without a formal name. In this article, we will dive into the world of lambda functions in Python and explore their advantages, disadvantages, and features.

## Advantages

One of the main advantages of using lambda functions is their ability to save space and improve the readability of code. As they are one-line expressions, they take up less space in the code and make it more concise and easy to understand. Additionally, lambda functions can be passed as arguments to other functions, making them useful for callback functions and event handling.

## Disadvantages

The main disadvantage of using lambda functions is their limited functionality. They can only contain a single expression and cannot handle complex operations. This makes them unsuitable for larger, more complicated programs. Another downside is the lack of a formal name, making it difficult to debug or test the function individually.

## Features

Lambda functions can also be used as part of list comprehensions, making it easier to create and manipulate lists in Python. They also have access to variables in the enclosing scope, providing more flexibility in their usage. Note that at runtime a lambda is an ordinary function object, so it is not meaningfully faster than an equivalent `def`; its benefit is that no separate named definition is needed at the point of use.

### Example: Using Lambda with Filter

Here's an example of using a lambda function with the `filter` function to extract even numbers from a list:

```python
numbers = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
even_numbers = list(filter(lambda x: x % 2 == 0, numbers))
print(even_numbers)  # [2, 4, 6, 8, 10]
```

This example showcases how lambda functions can be efficiently used to perform operations on lists.

## Conclusion

Lambda functions are a useful tool in Python for creating small, one-line functions. They offer advantages in terms of space and readability, but also have limitations in their functionality.
Despite their drawbacks, lambda functions are a valuable addition to any programmer's toolkit and can greatly improve the efficiency and readability of code.
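As one further illustration (an addition, not from the original article), lambdas are commonly passed as `key` functions to `sorted`, `min`, and `max`:

```python
people = [("Alice", 30), ("Bob", 25), ("Carol", 35)]

# Sort by age, the second element of each tuple.
by_age = sorted(people, key=lambda person: person[1])
youngest = min(people, key=lambda person: person[1])

print(by_age)    # [('Bob', 25), ('Alice', 30), ('Carol', 35)]
print(youngest)  # ('Bob', 25)
```

Here the lambda is used exactly where a tiny throwaway function shines: a one-line key extractor that would be noise as a separate named `def`.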
kartikmehta8
1,726,111
Modules Status Update
New Year, New Challenges, New Possibilities As we step into the year 2024, the Puppet...
0
2024-01-13T18:28:55
https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-01-12-modules-status-update/
puppet, community
---
title: Modules Status Update
published: true
date: 2024-01-12 00:00:00 UTC
tags: puppet, community
canonical_url: https://puppetlabs.github.io/content-and-tooling-team/blog/updates/2024-01-12-modules-status-update/
---

## New Year, New Challenges, New Possibilities

As we step into the year 2024, the Puppet Modules team from Pune extends warm greetings to our vibrant community. We hope this year brings you success, growth, and boundless opportunities.

The past year was a journey filled with challenges and triumphs, and we want to express our gratitude for your continued support. Your feedback has been invaluable, shaping the evolution of Puppet Modules and driving us to deliver effective solutions for your configuration management needs.

As we usher in 2024, we’re more committed than ever to enhancing the Puppet Modules experience, expanding our ecosystem, and fostering even deeper collaboration within our community. So please watch this space for more information as we move through the year.
## Community Contributions

We’d like to thank the following people in the Puppet Community for their contributions over this past week:

- [`puppetlabs-apache#2518`](https://github.com/puppetlabs/puppetlabs-apache/pull/2518): “Correct handling of $serveraliases as string”, thanks to [ekohl](https://github.com/ekohl) and the following people who helped get it over the line ([smortex](https://github.com/smortex), [bastelfreak](https://github.com/bastelfreak))
- [`puppetlabs-apache#2515`](https://github.com/puppetlabs/puppetlabs-apache/pull/2515): “Fix use\_canonical\_name directive”, thanks to [pebtron](https://github.com/pebtron)
- [`puppetlabs-apache#2514`](https://github.com/puppetlabs/puppetlabs-apache/pull/2514): “Fix extra newline at end of headers”, thanks to [smortex](https://github.com/smortex) and the following people who helped get it over the line ([vchepkov](https://github.com/vchepkov), [ekohl](https://github.com/ekohl))
- [`puppetlabs-apache#2506`](https://github.com/puppetlabs/puppetlabs-apache/pull/2506): “Cleanup .fixtures.yml”, thanks to [bastelfreak](https://github.com/bastelfreak) and the following people who helped get it over the line ([smortex](https://github.com/smortex))
- [`puppetlabs-apache#2505`](https://github.com/puppetlabs/puppetlabs-apache/pull/2505): “vhost\_spec: test if whole catalog compiles”, thanks to [bastelfreak](https://github.com/bastelfreak) and the following people who helped get it over the line ([smortex](https://github.com/smortex))
- [`puppetlabs-apache#2495`](https://github.com/puppetlabs/puppetlabs-apache/pull/2495): “Fix `mod_suexec` configuration”, thanks to [smortex](https://github.com/smortex)
- [`puppetlabs-kubernetes#670`](https://github.com/puppetlabs/puppetlabs-kubernetes/pull/670): “Ensure correct scheduler extra arguments passed to v1beta3 template”, thanks to [treydock](https://github.com/treydock) and the following people who helped get it over the line ([deric](https://github.com/deric))
- [`puppetlabs-postgresql#1567`](https://github.com/puppetlabs/puppetlabs-postgresql/pull/1567): “Remove non-portable source commands”, thanks to [smortex](https://github.com/smortex) and the following people who helped get it over the line ([bastelfreak](https://github.com/bastelfreak))
- [`puppetlabs-postgresql#1566`](https://github.com/puppetlabs/puppetlabs-postgresql/pull/1566): “update GPG key”, thanks to [vaol](https://github.com/vaol) and the following people who helped get it over the line ([jonathannewman](https://github.com/jonathannewman), [ekohl](https://github.com/ekohl), [vchepkov](https://github.com/vchepkov), [donoghuc](https://github.com/donoghuc))
- [`puppetlabs-postgresql#1561`](https://github.com/puppetlabs/puppetlabs-postgresql/pull/1561): “support for a custom apt source release”, thanks to [h0tw1r3](https://github.com/h0tw1r3)

## New Module Releases

- [`puppetlabs-apache`](https://github.com/puppetlabs/puppetlabs-apache) (`12.0.2`)
- [`puppetlabs-postgresql`](https://github.com/puppetlabs/puppetlabs-postgresql) (`10.0.3`)
- [`puppetlabs-tomcat`](https://github.com/puppetlabs/puppetlabs-tomcat) (`7.2.0`)
puppetdevx
1,726,124
New tool: sln-items-sync for Visual Studio solution folders
How and why I created sln-items-sync - a dotnet tool to generate SolutionItems from filesystem...
0
2024-01-13T23:19:28
https://timwise.co.uk/2024/01/13/new-tool-sln-items-sync-for-visual-studio-solution-folders/
---
title: "New tool: sln-items-sync for Visual Studio solution folders"
published: true
date: 2024-01-13 00:00:00 UTC
tags:
canonical_url: https://timwise.co.uk/2024/01/13/new-tool-sln-items-sync-for-visual-studio-solution-folders/
---

How and why I created `sln-items-sync` - a `dotnet tool` to generate SolutionItems from filesystem folders. If you want to skip the backstory head over: [https://github.com/timabell/sln-items-sync](https://github.com/timabell/sln-items-sync)

## 15 years of minor irritation

Faced with another set of microservice repos written in dotnet-core, with `.sln` files in various states of tidiness, I found myself for the 1000th time in 15+ years manually pointy-clicky adding fake solution-items folders and subfolders and then toiling away adding files **just** so I could search them, click them and view them from within Visual Studio or Rider.

There must be a better way by now, I thought, so I went hunting. All I turned up was a lot of people asking the same thing and some dead tooling from years ago. Here’s the stackoverflow from 2008 with 90k views and 180 upvotes: [https://stackoverflow.com/questions/267200/visual-studio-solutions-folder-as-real-folders](https://stackoverflow.com/questions/267200/visual-studio-solutions-folder-as-real-folders), which didn’t really help in spite of having 23 answers. Not to mention the slew of linked questions where people are asking the same thing with different words.

(Solution folders aren’t to be confused with adding files to a _project_, which used to be an equal nightmare before Microsoft saw sense and just included _what’s on the filesystem_. There are many old stackoverflow questions on that too from frustrated devs around the world.)

![screenshot of example solution items folder in Rider](https://timwise.co.uk/images/blog/sln-items.png)

So with the programmer war cry of “how hard can it be, I’ll knock this out in a couple of evenings…” I set about what turned out to be a significant exercise in yak-shaving in order to sort it out myself once and for all.

> “How hard can it be, I’ll knock this out in a couple of evenings…”
>
> ~ Me. Again.

## What to build?

I did briefly look at writing an IntelliJ (aka Rider) plugin, but that quickly turned out to be a daunting thing so I put that idea down sharpish. I use Rider in preference to Visual Studio and VSCode for C# so didn’t even look at that side. VSCode didn’t even bother with .sln files last I checked.

Next step was to write a CLI (command-line interface, aka terminal) tool to do it. (sln + filesystem in, mutated sln out, easy…)

I have recently written command line tools in both GoLang and Rust, but given this is a tool that would only be useful to Microsoft developers I figured I’d do this one in C#. I do actually like C# as a language for all my interest in other things, and thanks to dotnet-core and Rider I can actually write the whole thing on Linux Mint Cinnamon where I like to be.

## Parsing and writing .sln files

I then hunted around for any nuget packages that might do the grunt work of reading/writing the sln format. Surely after 20-something years there must be something, right? Well, kinda. The VS parsing code is locked away in some windows dll nastyness, probably in C++ and COM or something evil. It even predates XML as a format, never mind JSON.

What I did find was the [SlnParser nuget package](https://www.nuget.org/packages/SlnParser), which someone had kindly written and open-sourced, and after a quick test I could see it did a decent job of turning .sln files into an in-memory C# object model (a `Solution` class, with lists of things as properties).

So major yak number one was to fork SlnParser and turn it into a two-way tool.
This I did with a lot of hackery and created the [SlnEditor nuget package](https://www.nuget.org/packages/SlnEditor/), which I published on nuget and github with the same Unlicense licensing as the original. Perhaps others will find this gift to the world useful in its own right.

## Creating sln-items-sync

Finally with that working I was able to create the CLI tool I wanted, which I named [sln-items-sync](https://github.com/timabell/sln-items-sync). This was more work than I expected, but I got a first cut working reasonably quickly.

## Tests

I put a good amount of effort into good end-to-end test coverage on both the parser and the tool itself because I am now a true believer that **without tests** you will be **unable to make future changes and dependency upgrades with speed and confidence**. I.e. lack of tests is the epitome of technical debt. In fact let me give that a block quote because it’s such an important point:

> Lack of tests is the epitome of technical debt.

(p.s. why is Epitome spelled that way but pronounced epitomy. English. Sigh.)

This has paid off in spades as the amount of work to get it satisfactorily “done” grew and grew the closer I got to finished.

The tests in both projects focus on “outside-in” testing rather than mockist unit testing. As such you can see at a glance the overall behaviour, spot any unexpected/unwanted output, and easily write new tests for new desired behaviour, being able to eyeball them easily for correctness. I won’t include one here as they are a bit lengthy, but you can go and look at the source repos on github.

This is made a bit easier on this tool because the only interfaces to the world are:

- a text format (easy to string-compare expected versus actual)
- a filesystem (I went for creating real file trees in tests which worked well and gives even more confidence)
- the command line interface (for the sync tool)
- the API (for the parser lib)

## First contact with real .sln files

As I am doing some work for a C# contracting client currently, I was able to try it out on gnarly real solution files, with a view to submitting some small cleanup pull requests that could be created really quickly.

### Stable ordering

The first attempt was a complete failure because the generated patch re-wrote the entire sln file in a completely different order, resulting in a sea of red/green lines full of GUIDs and other cryptic changes in the git diff. While the solution items were updated as intended and could be seen in Rider etc., this was not a patch that could be submitted to the team, or that I would put my name to.

Getting stable ordering between parsing and writing turned out to be a huge amount of work and refactoring, largely in the SlnEditor lib. The key to making stable ordering work was to add an `int sourceLine` property to almost everything when parsing, and to sort by that before rendering back out again. This had the desired effect of keeping everything in the original order no matter how it was mutated, and new items are added to the end (by replacing the default `0` with `Int.MaxValue` before sorting). Phew, another yak shaved, lost count now, but got more xmas hols so keep going….!

## Many bugs and gaps

It surprised me a bit just how many [little niggles, edge cases, and small omissions](https://github.com/timabell/sln-items-sync/issues?q=is%3Aissue+is%3Aclosed) there were that had to be sorted out before I could use it to submit quality patches to client .sln files for real.
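(Aside: the stable-ordering trick described above boils down to a sort on recorded source position, with a sentinel for new items. A rough Python illustration of the idea, not the actual C# from SlnEditor, with invented item shapes:)

```python
import sys

def render_order(items):
    """Sort parsed items by their original source line.

    Items that were not in the original file (source_line is None) get a
    sentinel so they sort to the end, mirroring the 'replace the default 0
    with MaxValue before sorting' trick described above."""
    return sorted(
        items,
        key=lambda it: it["source_line"] if it["source_line"] is not None else sys.maxsize,
    )

parsed = [
    {"name": "ProjectB", "source_line": 12},
    {"name": "ProjectA", "source_line": 5},
    {"name": "NewFolder", "source_line": None},  # added by the sync tool
]
ordered = render_order(parsed)
```

Because the sort key is the original file position, any mutation of the in-memory model still renders back out in the original order, so the git diff only shows the genuinely new lines.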
Even the ever-present byte-order-marker (BOM) was causing unwanted diffs because I hadn’t included it in the render, but .sln files seem to have them. Pleasingly I’ve resolved everything I came across, apart from making the parent/child guid mapping order stable, which didn’t seem to be worth the effort seeing as they are completely incomprehensible anyway.

## Making a dotnet-tool

Once it was working, it was only a couple of rounds of building and copying the exe to `bin/` before I got fed up with that approach to distribution. Amazingly it turns out to be pretty simple to build and publish tools to the `dotnet tool` ecosystem; they are actually just slightly special nuget packages, and you only have to add a couple of properties to the `.csproj` file.

Making a dotnet-tool worked great, and is a great user experience for installing and running the tool. It even does updates for no extra effort!

## Github-actions

To make both of these tools even easier to work on and maintain longer term, I wanted to have a good github action (aka CI) to build and run the tests. Build and test is trivial, you can pretty much click the default workflow button for .net in an empty github actions page and it just works.

I wanted to also automate the nuget publishing of both from github-actions, as although I had a sh file to upload them from my machine, that’s a faff and tends to stop working after a few machine rebuilds. Amazingly the author of SlnParser has taken an interest and provided a [PR that gave me a ready-made github-action to push to nuget](https://github.com/timabell/sln-items-sync/pull/15) for every release tag! So that’s now in place, and to release a new version I can just `git tag v1.2.3 && git push --tags` and github does the rest.

## The end, I need a lie down

So after all that, I’m not sure it was all worth it, but it’s done and I’m justifying it as a holiday hobby project and a gift to the dotnet developers of the world. I will certainly enjoy it every time I find an out-of-sync `SolutionItems` folder in future and run my tool so that I can ship a patch for it in seconds flat. I also learned a few things and got kata-like practice on shipping quality things at speed.

So with that, Merry Xmas and a happy new 2024. May all your solution folders be tidy and complete.
timabell
1,726,323
AI APIs: Feedback required
Hi there, We have released a bunch of APIs dealing with images and documents. I am really looking...
0
2024-01-13T11:52:39
https://dev.to/nikoldimit/ai-apis-feedback-required-g31
Hi there, we have released a bunch of APIs dealing with images and documents. I am really looking for people with experience in API development and testing to let me know whether they find them useful (hopefully) and what use cases they can think of that would make these APIs shine. Thanks! Have a look here: https://apyhub.com/blog/change-log-10-01-24
nikoldimit
1,726,603
Code review
What is code review? Code review is an important activity in the software development process...
0
2024-01-28T03:42:06
https://anhtuank7c.dev/blog/review-code
reviewcode, softwaredevelopment, codequality, qualitycontrol
![Review code](https://anhtuank7c.dev/assets/images/review-code-fe2e64d0a4fdeb73c737b5f59c8d3558.webp) ## Review code là gì? Review code là một hoạt động quan trọng trong quá trình phát triển phần mềm, nơi các đồng nghiệp kiểm tra và đánh giá chéo mã code của nhau. ## Vì sao cần review code? Việc con người gây ra lỗi là điều không mong muốn nhưng cũng không hoàn toàn tránh được (Có nhiều yếu tố khiến con người gây ra lỗi như thể trạng sức khỏe, tâm lý, tiếng ồn, suy giảm sự tập chung, etc...). Vì vậy chúng ta cần review code để giảm thiểu rủi ro tiềm tàng cho hệ thống càng sớm càng tốt. Mục tiêu chính của việc review code bao gồm: - Tìm và sửa lỗi - Đảm bảo code tuân thủ các tiêu chuẩn, các nguyên tắc mà dự án đang áp dụng - Cung cấp cơ hội chia sẻ kiến thức giữa các thành viên - Gia tăng giao tiếp giúp các thành viên hiểu nhau hơn ## Các vấn đề chính khi review code Bạn cần tạo ra một danh sách (checklist) những câu hỏi và quy tắc được xác định trước mà dự án (và nhóm của bạn) sẽ tuân theo trong quá trình review. Danh sách này thường tập chung vào các vấn đề chính sau: - **Readability** (Khả năng đọc): Có bất kỳ đoạn comment dư thừa nào trong code không? - **Security** (Bảo mật): Đoạn mã có khả năng khiến hệ thống bị tấn công mạng không? - **Test coverage** (Độ bao phủ test): Có cần thêm các test case khác không? - **Architecture** (Kiến trúc): Mã có tính bao đóng, mô đun hóa để đạt được sự phân tách mối quan ngại không (separation of concerns)? Pattern áp dụng có chính xác không? - **Reusability** (Khả năng tái sử dụng): Mã có tái sử dụng các components, functions hay services có sẵn không? Bằng cách kiểm tra danh sách câu hỏi những vấn đề trong đoạn code sẽ được phơi bày một cách chi tiết và tường minh, không mang cảm tính. Việc sửa lỗi cũng trở nên rõ ràng và dễ hiểu hơn. 
## Things to keep in mind when reviewing code ### Leave detailed reviews When you review code, don't just leave generic comments or bare suggestions for changes; explain what the current problem is and why the change should be made. Why go into that much detail? - It justifies your comments, so the author won't have to ask why they should make the changes you recommend. That saves time for both you and them. - It makes you review more carefully instead of waving code through: every claim is backed by evidence. - The author learns the concrete reason behind each change, which helps them handle similar problems in the future. ### Keep a professional, constructive attitude Code review is ultimately about keeping the ship on course so the project reaches its goals properly. Everyone should stay professional and open to improvement, both when giving and when receiving reviews. Keep emotions out of it so they don't affect the work. If your code isn't great yet, valuable feedback will help you write better code, and that is good for you. ### Don't review more than 200 lines of code at once [A 2006 study at Cisco Systems](https://static1.smartbear.co/support/media/resources/cc/book/code-review-cisco-case-study.pdf) showed that a developer's ability to find bugs drops off when reviewing more than 200 lines of code. ### Each commit should have a single purpose Commits that bundle lots of code or many unrelated changes make it harder to revert code on a branch (branch) and harder on the reviewer. For example, suppose one commit contains two kinds of change: formatting code and adding business logic. Split it into two separate commits: one for the formatting and one for the new business logic. ### Automate the code review process The market offers plenty of automated code review tools, including SonarQube, ESLint, SwiftLint, JSHint, Codestriker, DeepScan, etc. 
Depending on your project and budget, pick the tool that fits you. Most of these tools integrate easily into CI/CD, so developers get feedback very soon after committing code (or even before committing, for example via a pre-commit hook combined with a linter). This also takes a significant load off the reviewer. These tools cannot fully replace humans, though: for principles your team has agreed on, such as the SOLID principles, you still need a human reviewer to assess those aspects accurately. ### Developers must own their code Although code review raises code quality and reduces bugs, nobody can guarantee that it will eliminate every latent defect. As the person who wrote the code, you bear the ultimate responsibility for its correctness. ## Conclusion Code review is a rewarding activity, but it costs time and effort, and how well it works depends on your team's commitment. How does your team run code reviews? If you think this article is missing something, don't hesitate to reach out to me via the social channels below. I wish you and your team productive code review sessions. Original article [here](https://anhtuank7c.dev/blog/review-code)
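As a concrete illustration of automated gating, here is a minimal, hypothetical `.git/hooks/pre-commit` sketch that enforces the 200-line review guideline. The threshold and the `awk` accounting are illustrative, not from the original article; adapt them to your project.

```shell
#!/bin/sh
# Hypothetical pre-commit hook: reject commits whose staged diff
# exceeds 200 changed lines, so every review stays reviewable.
max_lines=200

# Sum added + deleted lines reported by `git diff --cached --numstat`.
changed=$(git diff --cached --numstat 2>/dev/null |
  awk '{ total += $1 + $2 } END { print total + 0 }')

if [ "${changed:-0}" -gt "$max_lines" ]; then
  echo "Commit touches $changed lines (> $max_lines); please split it." >&2
  exit 1
fi
```

Drop it into `.git/hooks/pre-commit`, make it executable, and oversized commits are rejected before they ever reach a reviewer.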
anhtuank7c
1,726,647
I trained a Web Component GPT, but it is not perfect
It is January 2024 We can now create our own GPTs trained with an instruction set. I spent a whole...
0
2024-01-13T21:14:41
https://dev.to/dannyengelman/i-trained-a-web-component-gpt-but-it-is-not-perfect-2i5f
webcomponents, javascript, dom, frontend
It is January 2024. We can now create our own GPTs trained with an instruction set. I spent _a whole day_ writing instructions on what code GPT should and should not create as JavaScript Web Component output. It still is far from perfect. But this is getting close to what I expect my students to write. https://chatgpt-web-component.github.io/ <br> <br><br><hr> <br> {% jsfiddle https://jsfiddle.net/WebComponents/62f1eLay result,html,js %}
dannyengelman
1,726,863
Audiaire – Plugins Bundle (Windows) Download
Discover a world of sonic possibilities with the Audiaire Plugins Bundle for Windows, available now...
0
2024-01-14T09:08:18
https://dev.to/premiumplugins/audiaire-plugins-bundle-windows-download-42e0
musicmakers, windowsmusicsoftware, audioproduction, windowssoftware
Discover a world of sonic possibilities with the Audiaire Plugins Bundle for Windows, available now on our site **[Pluginsforest.com](https://telegra.ph/Audiaire--Plugins-Bundle-Windows-Download-01-14)**. Elevate your music production to new heights with this comprehensive collection of cutting-edge audio plugins designed to inspire creativity and enhance your sound. The Audiaire Plugins Bundle combines a powerful suite of virtual instruments and effects, carefully crafted to meet the demands of modern music producers, sound designers, and electronic musicians. Whether you're producing electronic, ambient, cinematic, or experimental music, Audiaire's innovative tools offer a versatile array of sounds to explore. Key Features: Zone Synthesizer: Unleash your creativity with a highly customizable and intuitive wavetable synthesizer. Craft intricate sounds with ease using Zone's advanced features and modulation capabilities. Nuxx Multi-Effects Processor: Elevate your tracks with Nuxx, a multi-effects processor that adds depth and character to your sound. Experiment with its dynamic sequencer and extensive modulation options to achieve unique textures. Zenith Polyphonic Synthesizer: Dive into the world of classic analog synthesis with Zenith. This polyphonic synthesizer combines vintage warmth with modern flexibility, providing a wide palette of sounds for your productions. Substance Bass Engine: Unleash powerful basslines with Substance, an innovative bass engine designed for electronic music. Sculpt your low-end frequencies with precision and add weight to your tracks. Zone Expansions: Expand your sonic arsenal with additional Zone Expansions, offering a diverse range of presets and sounds curated by top sound designers. Unlock a universe of sonic exploration and take your music production to the next level with the Audiaire Plugins Bundle. 
Compatible with Windows, these plugins seamlessly integrate into your digital audio workstation, providing a streamlined and efficient workflow. Visit our site <https://pluginsforest.com/product/audiaire-plugins-bundle-windows/> to learn more and elevate your music production experience with Audiaire.
premiumplugins
1,726,872
Essential Linux Commands for Storage Monitoring
Introduction A fundamental aspect of Linux system administration is managing disk space...
0
2024-01-14T10:08:12
https://dev.to/duroemmanuel/essential-linux-commands-for-storage-monitoring-1bl5
## Introduction A fundamental aspect of Linux system administration is managing disk space effectively. Understanding how to gauge and monitor available disk space is crucial for maintaining system performance and ensuring the seamless operation of applications. **In this tutorial, we'll delve into three valuable Linux commands designed to facilitate the monitoring of remaining disk space**. We'll introduce each command alongside practical examples, simplifying the process of disk space management. Let's examine the possible uses for these commands in a Linux setting. ## df - Get the Big Picture The df command, short for disk-free, offers a bird's-eye view of a Linux system's disk usage. It provides information about available space on mounted filesystems. Let's see an example of what the vanilla (no optional flags) command looks like: ``` df ``` This command, without any switches, displays information about all mounted filesystems. It shows the filesystem's device, total size, used space, available space, and usage percentage: ``` df Filesystem 1K-blocks Used Available Use% Mounted on /dev/sda1 20971520 5452595 15414067 26% / /dev/sdb1 4718592 4299161 397312 91% /data ``` In this output, we can see details about various filesystems, including their usage percentages, available space, and mount points. But the output from this flagless command is a bit tough to comprehend. To circumvent this, we can use the -h flag. The -h switch is an alias for --human-readable, which prints the sizes in human-readable format (e.g., 4.5G for 4.5 gigabytes, 388M for 388 megabytes): ``` df -h Filesystem Size Used Avail Use% Mounted on /dev/sda1 20G 5.2G 14.7G 26% / /dev/sdb1 4.5G 4.1G 388M 91% /data ``` To further see the filesystem type as a separate column, we can use the -T or --print-type switch. 
This switch adds a column to the output, indicating the filesystem type (combined here with -h for readable sizes): ``` df -hT Filesystem Type Size Used Avail Use% Mounted on /dev/sda1 ext4 20G 5.2G 14.7G 26% / /dev/sdb1 tmpfs 4.5G 4.1G 388M 91% /data ``` From the output above, we see that the /data partition has 91% utilization, therefore it is nearing its limit. We could investigate the contents of this partition to find the reason behind its size. ## du - Dive into Directory Usage Sometimes, we need to drill down into the specifics of a directory's disk usage. This is where the du (disk usage) command comes in handy. We can use the following standard syntax to check the space used by a specific directory: ``` du /path/to/directory ``` Running the command without any options will produce a lengthy, unsorted output that lacks clear indications of the problem's location. To address this, we can enhance the output format by adding specific options to the du command: ``` du -sh /path/to/directory ``` Here, the -s or --summarize option provides the total disk space used by a particular file or directory, and the -h or --human-readable flag prints the size in human-readable format. Let's see an example output after running the command on the previously mentioned /data folder and its subdirectories: ``` du -sh /data/* /data 50M /data/school 100M /data/work 3.9G /data/misc 4.1G /data ``` From the output, we see that there are three directories inside the /data partition. Out of the three, the misc folder seems to have the biggest size. Maybe it'll be a good idea to delete some files from this directory. If we want to dive a little bit deeper into the directory structure and only calculate disk usage up to a certain directory level, we can use the --max-depth switch. Moreover, it provides useful information when we want to just limit our focus and not delve into complex directory structures. 
Let's see an example of using the --max-depth switch for depth equals 1: ``` # Running in the current directory for depth=1 du -h --max-depth=1 4.1G /a 1.2G /b ``` Now let's run the same command in the same directory, but now for depth equals 2: ``` # Running in the current directory for depth=2 du -h --max-depth=2 1G /a/a1 3.1G /a/a2 4.1G /a 500M /b/b1 730M /b/b2 1.2G /b ``` As we can see from the last two example outputs, increasing the depth has provided us with a detailed idea of the directory structure and which folders hold how much space in the filesystem. In this case, changing the maximum depth can therefore enable us to make informed decisions on which directories to investigate for resolving space issues. 
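In day-to-day troubleshooting, du is often combined with sort and head to surface the biggest directories immediately. A quick sketch (the /data path is just an example):

```shell
# Show the five largest entries one level below /data, biggest first.
# `sort -rh` understands human-readable sizes such as 3.9G (GNU coreutils).
du -h --max-depth=1 /data 2>/dev/null | sort -rh | head -5
```

With the earlier example data, the 3.9G misc directory would appear right below the 4.1G total for /data itself, pointing us straight at the likely culprit.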
From the output for running the command for the `/home/user/documents` directory, we can see that bigapp.bin is indeed taking up a large space (1.2 gigabytes). If the file isn't that important, we could remove it to free up a good chunk of space. Another very useful switch is the -t, which allows us to identify older or less frequently used files. This sorts the files according to their modification time, newest first, and helps us recognize files that may be candidates for removal: ``` $ ls -lt /another/directory -rw-r--r-- 1 user user 2.3M Oct 21 10:00 important_file.txt drwxr-xr-x 2 user user 4.0K Oct 21 09:30 my_directory -rw-r--r-- 1 user user 1.5M Feb 20 15:45 old_file.txt ``` In this example, the old_file.txt seems to have the earliest modification time. Therefore it signals to us that this file hasn't been used for a long time, and therefore we can remove it. ## Conclusion Monitoring disk space regularly in Linux isn't just a good practice; it's a necessity to keep the system running smoothly. Moreover, it helps prevent potential issues such as system slowdowns, data loss, and unexpected interruptions. **In this tutorial, we learned about three useful commands: df, du, and ls**. These create a complete toolkit to manage storage on Linux machines effectively. To summarize, we discussed the capabilities of each of the commands, learned about their use cases, some of the necessary options to get the most out of them, and also saw practical examples on how to use them.
duroemmanuel
1,727,202
The Art of code understanding: Going beyond copy and paste practices in Programming
In the fast-paced world of programming, the ability to understand code swiftly is a valuable skill...
0
2024-01-14T16:27:43
https://dev.to/igbodi/the-art-of-code-understanding-going-beyond-copy-and-paste-practices-in-programming-1fdn
programming, codenewbie, webdev
In the fast-paced world of programming, the ability to understand code swiftly is a valuable skill that can set you apart as a proficient developer. While copy-pasting code snippets may seem like a shortcut to getting things done, it's essential to recognize that true mastery of programming requires a deeper understanding of the code you're working with. **LEARN THE BASICS** Before you tackle complex code, make sure you have a good understanding of basic programming concepts like variables, loops, and functions. This knowledge forms the foundation for grasping more advanced code. **BREAK DOWN CODES INTO COMPONENTS** When faced with a piece of code, break it down into smaller components. Identify variables, loops, conditionals, and functions. Understanding each element's purpose and how they interact is key to grasping the overall functionality. **READ DOCUMENTATION** Don't shy away from documentation. Whether it's for a programming language or a specific library, documentation provides crucial insights into how code should be used. Skimming through documentation can save you from blindly copy-pasting code that may not suit your needs. **EXPERIMENT WITH CODE** Hands-on experience is invaluable. Modify existing code to see how changes affect the program. This experimentation not only reinforces your understanding but also builds confidence in your ability to manipulate code. **EMBRACE PROBLEM-SOLVING** Programming is inherently problem-solving. Approach coding challenges with a mindset of understanding the problem first, and then devising a solution. Resist the urge to immediately search for and copy-paste solutions without grasping the underlying logic. **LEARN FROM OTHERS** Reviewing other people's code can provide diverse perspectives and approaches. Open-source projects on platforms like GitHub are gold mines for learning. Analyzing well-written code can enhance your coding skills and teach you best practices. 
**ASK FOR HELP** When facing challenges, don't hesitate to seek help from the programming community. Engaging in forums, discussion boards, or even with colleagues can provide valuable insights. Remember, asking questions is a sign of curiosity and a willingness to learn. **UNDERSTAND THE WHY, NOT JUST THE HOW** Instead of simply copying code, make an effort to understand the 'why' behind each line. Knowing the reasoning behind the implementation allows you to adapt and troubleshoot effectively. **PRACTICE PATIENCE** Learning to understand code takes time and patience. Avoid the temptation to rush through lessons or projects. Each challenge you encounter is an opportunity to grow as a programmer. **BUILD PROJECTS** Apply your knowledge by working on real projects. Building something from scratch, even if it's a small program, reinforces your understanding and hones your problem-solving skills. **IN CONCLUSION,** while copy-pasting code may offer quick solutions, the true essence of programming lies in understanding the logic and structure behind the code. Strive to be a programmer who not only writes code but comprehends it deeply – it's a journey that will ultimately make you a more versatile and effective developer.
igbodi
1,727,220
BEST BITCOIN RECOVERY SERVICE TO RECOVER SCAMMED BITCOIN HIRE / ADWARE RECOVERY SPECIALIST
Losing $40,000 is a gut-wrenching experience. The pit in your stomach, the sleepless nights, the...
0
2024-01-14T17:18:01
https://dev.to/eriannaesfahani/best-bitcoin-recovery-service-to-recover-scammed-bitcoin-hire-adware-recovery-specialist-lj5
Losing $40,000 is a gut-wrenching experience. The pit in your stomach, the sleepless nights, the constant replaying of "what if" scenarios – it's a storm of emotions that can leave you feeling helpless. My own foray into this financial nightmare unfolded when I fell victim to an elaborate online scam. $40,000, gone. Just like that. The initial shock gave way to a desperate scramble for answers. Hours spent scouring the internet, countless calls to banks and authorities, all leading to dead ends. The frustration and fear gnawed at me, and with each passing day, hope dwindled. Just as I was about to resign myself to my unfortunate fate, a light appeared: ADWARE RECOVERY SPECIALIST . Skeptical but clinging to a last thread of hope, I reached out. From the very first contact, it was different. They didn't sugarcoat the situation, but their empathy and professionalism were a balm to my battered spirits. The recovery process, however, was far from smooth. The scammers had covered their tracks well, leaving a tangled web of digital breadcrumbs. The challenges were immense: Complexities of the scam: The perpetrators had used a sophisticated scheme, involving offshore accounts and cryptocurrency transfers. Unraveling it required expertise beyond my own tech savvy. Limited information: visit their website: adwarerecoveryspecialist.expert With little to trace, the investigation relied heavily on ADWARE RECOVERY SPECIALIST meticulous analysis and resourcefulness. Their ability to think outside the box was crucial. Time constraints: Every passing day meant the trail grew colder, increasing the chances of my money vanishing forever. The pressure was immense, yet ADWARE RECOVERY SPECIALIST never wavered in their dedication. Through it all, ADWARE RECOVERY SPECIALIST was my constant anchor. They kept me informed of every step, explained the complexities in layman's terms, and most importantly, never gave up hope. Their unwavering belief in my case inspired me to keep fighting. 
And then, the breakthrough, following weeks of unrelenting pursuit. ADWARE RECOVERY SPECIALIST located a critical piece of information that let them locate the offender's virtual hideout. After a protracted and stressful battle, the good folks prevailed in the end. My forty thousand dollars had been laboriously reclaimed from the thieves' hands and was once again within my digital reach. The relief was overwhelming. My nightmare had ended, replaced by an immense gratitude for the team at ADWARE RECOVERY SPECIALIST. They weren't just recovery specialists; they were my digital knights in shining armor, wielding their expertise and resilience to restore what was lost. Direct Email: Adwarerecoveryspecialist@auctioneer.net Regards. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v4miapf0jmhxyo36s0fn.jpg)
eriannaesfahani
1,727,253
How can you learn to code and get a job?
If you want to learn how to code, you don't need a fancy degree or certificate – just grab a computer...
0
2024-01-14T17:35:53
https://dev.to/horsecoder/how-can-you-learn-to-code-and-get-a-job-4mc4
programming, career, learning, beginners
If you want to learn how to code, you don't need a fancy degree or certificate – just grab a computer and get online. _Here's a simple PLAN that works:_ 1. Stick with one course until you finish it. Don't hop around. Do your <u>homework</u> before picking your course. 2. Every morning, spend 90 minutes learning. After breakfast, set a timer and watch your course. Try out the code they show you – it makes you feel like you're <u>really doing</u> something. 3. Don't take breaks during your 90-minute session. Do some <u>light exercises</u> before you start to get your brain ready. 4. After about 15-20 days, you'll feel more confident. Keep going, and after <u>two or three months</u> of steady work, think about looking for a job. The first few days are tough, but once you get into the groove, it gets easier. Thanks!
codewithshahan
1,727,265
iOS and Android devs, how do you create all sizes of icon for AppStore?
I have a solution that generates all sizes required with just one click. Would be interested to know...
0
2024-01-14T17:48:00
https://dev.to/lksngy/ios-and-android-devs-how-do-you-create-all-sizes-of-icon-for-appstore-f55
ios, mobile, android
I have a solution that generates all sizes required with just one click. Would be interested to know how do you handle it now. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/2a240d5iyir4d5km7eaw.png) ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/glp6qcm2nfx4y81cn5ns.png)
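For comparison, a common scriptable route is to batch-resize a single master image with ImageMagick. A hedged sketch (the size list and file names are illustrative; check Apple's current asset catalog requirements before shipping):

```shell
# Resize a 1024x1024 master.png into common iOS icon sizes.
# Guarded so the loop only runs when ImageMagick and the master exist.
if command -v convert >/dev/null 2>&1 && [ -f master.png ]; then
  for px in 40 58 80 120 152 167 180 1024; do
    convert master.png -resize "${px}x${px}" "icon-${px}.png"
  done
fi
```

The same loop works for Android by swapping in the mipmap density sizes.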
lksngy
1,727,318
Stuck in Fun Land and unable to program my game
Both figuratively and literally, I'm stuck in Fun Land and unable to work on my game engine. I guess...
24,703
2024-01-15T04:00:07
https://dev.to/jacklehamster/stuck-in-fun-land-and-unable-to-program-my-game-5273
gamedev, napl, javascript
Both figuratively and literally, I'm stuck in Fun Land and unable to work on my game engine. I guess I could have brought my laptop and coded in the corner, but my niece would think I'm a big anti-social nerd. So instead, I'll be whipping up a new dev log post on my phone, and be a small anti-social nerd. Now this is also an issue that applies to developing a game engine on your own. # Stuck in Funland I've been making nice progress on my game engine. Yet, I'm nowhere close to having a game. The reason? That's right, I'm stuck in fun land. One thing game devs forget when working on their own, as opposed to being part of a large team, is that there's a bunch of boring stuff that needs to be worked on. Somebody has to do those things. So while my game engine still didn't have a game loop, I spent a large part of my gamedev time: - adding fun shader effects - refactoring my code to make it look nice - tweaking the controls to make it look smooth when navigating an empty world - optimizing performance even though there are two objects in my scene. - just staring at the screen. (Takes up 90% of my time) While it's still a step up from being stuck in "bug land", you'd still want to move on from it, but the dangerous part is that you might not want to. # Find a definite goal While I was working on that game engine, I took a detour doing a game jam and completed a whole project. None of that work involved that engine I spent months building, yet it turned out pretty decent. {% youtube https://www.youtube.com/watch?v=5l9ktxTSK2Y %} This told me that I can't just build my engine in a vacuum, imagining what features could be useful. I need a specific game that the engine can produce, and build the engine towards that game. # The new game direction For the first project, I am now trying to build a JRPG, modeled after the first Phantasy Star. {% youtube https://www.youtube.com/watch?v=C-B9D9P5Epo %} That might look ambitious, but I already made a project that resembles that before. 
As I set my specific game goal, I immediately stepped out of Funland and already started tackling one task I'd been dreading: the menu interface! While that's not super exciting to build, it's needed, at least for a game like Phantasy Star. By just copying that game's interface, I can at least cut down time on decision making (which is the 90% of time staring at the screen part). So yeah, huge time saver. I can go back to being more creative once I have the first game out of my engine. # Main takeaway Being stuck in fun land might be pleasant, but as far as finishing a game goes, it could be as bad as "bug land". At least, you know you want to get out of "bug land" so it's a drive to finish the game, whereas being stuck in "fun land" is a trap. You might want to stay there forever. But game dev is like any other work, you sometimes need to do the dirty work, unless you're able to work in a team where someone else is willing, or hopefully enjoys, doing that dirty work.
jacklehamster
1,727,425
DOM to JSON and back
We can persist and rehydrate DOM objects with simple vanilla JS. 5-6 minutes, 1332 words, 4th...
25,376
2024-01-14T22:26:07
https://craft-code.dev/essays/connection/dom-to-json-and-back
craftcode, webdev, javascript, dom
**We can persist and rehydrate DOM objects with simple vanilla JS.** _5-6 minutes, 1332 words, 4th grade_ Here we will create **two simple but powerful JavaScript functions.** The first, `jsToDom`, will take a JavaScript (or <abbr title="JavaScript Object Notation">JSON</abbr>) object and turn it into a <abbr title="Document Object Model">DOM</abbr> object that we can insert into our page. The second, `domToJs`, does the reverse. It takes a DOM object and converts it to <abbr title="JavaScript">JS</abbr>. Now we can stringify it and persist it in our database. And rehydrate it at will. As you may know, JSON does not recognize functions. So we will need a way to deal with our event listeners. Donʼt worry! Weʼve got it covered. Check out our [example of DOM to JSON and back](https://sitebender.io/examples/dom-from-json/index.html) in action. We will explain it below. > A key axiom of Craft Code is [less is more](https://craft-code.dev/axioms/less-is-more). This influences several of the Craft Code [methods](https://craft-code.dev/methods), such as [code just in time](https://craft-code.dev/methods/code-just-in-time) and [keep it simple](https://craft-code.dev/methods/keep-it-simple). > > This means that we donʼt rush to load up dozens of frameworks, libraries, and other dependencies. Instead, we start with nothing: **zero dependencies.** > > We build the site structure with semantically-correct and accessible <abbr title="HyperText Markup Language">HTML</abbr>. Then we add just enough <abbr title="Cascading Style Sheets">CSS</abbr> to make it attractive, user-friendly, and responsive. > > Finally, we add JavaScript to **progressively enhance** the user experience. _But only if we need it._ > > This vanilla approach works very well. One goal of the Craft Code effort is to see how far we can go before we have to add a dependency on someone elseʼs code. ## The code For the impatient, letʼs take a look at the final code. Then weʼll explain it. 
The next two functions are the totality of the module. The rest are specific to this example. **Note: this is about concepts, not production-ready code. This is a first pass and might could use some refactoring.** <abbr title="Your Mileage May Vary">YMMV</abbr>. ```js // ./modules/js-to-dom.js export default async function jsToDom (js) { const { attributes, children, events, tagName } = js const elem = document.createElement(tagName) for (const attr in attributes) { elem.setAttribute(attr, attributes[attr]) } if (Array.isArray(children)) { for (const child of children) { typeof child === "object" ? elem.appendChild(await jsToDom(child)) : elem.appendChild(document.createTextNode(child)) } } if (events) { for (const key in events) { if (!events[key]) { continue } const handler = typeof events[key] === "function" ? events[key] : (await import(`./${events[key]}.js`)).default handler && elem.addEventListener(key, handler) } setDataEvents(elem, js.events) } return elem } function setDataEvents (elem, obj = {}) { const eventString = Object.keys(obj) .reduce((out, key) => { if (typeof obj[key] === "string") { out.push(`${key}:${obj[key]}`) } return out }, []) .join(",") if (eventString) { elem.setAttribute("data-events", eventString) } } ``` We grab the `tagName` from the JSON and create a DOM element of that type. Then we add the attributes to that element. Then we apply `jsToDom` recursively on the `children`, appending them to the element. The `events` object is pretty clever, <abbr title="In Our Humble Opinion">IOHO</abbr>. The keys are the names of the events (e.g., `click`) and the values are the names of the handler functions. We will import those functions only when needed; a falsy entry is simply skipped (`continue`) so the remaining listeners still get attached. See an example below. We also create a string representation of the events object. We could have simply stringified it, but we wanted to make it human readable, so we wrote our own (lines #39 to #52). 
```js // ./modules/dom-to-js.js export default function domToJs (dom) { const { attributes, childNodes, tagName } = dom const eventList = dom.getAttribute("data-events") const events = eventList?.split(",").reduce((out, evt) => { const [key, value] = evt.split(":") if (key) { out[key] = value } return out }, {}) const attrs = Object.values(attributes) .map((v) => v.localName) .filter((name) => name !== "data-events") return { tagName, attributes: attrs.reduce((out, attr) => { out[attr] = dom.getAttribute(attr) return out }, {}), events, children: Array.from(childNodes).map((_, idx) => { const child = childNodes[idx] return child.nodeType === Node.TEXT_NODE ? child.nodeValue : domToJs(child) }), } } ``` This is a bit tricky as _nothing_ in the DOM is simple. Go figure. We explain below. This extracts the `attributes`, the `childNodes`, and `tagName` from the passed DOM element. Then we use these to create a simple JS/JSON object, recursing through the child nodes. We pull the `data-events` attribute out and treat it separately. We parse the value back into an actual object and add it at the `events` key. The output is JS, but we can stringify it to JSON as required. The JSON shown below is a typical example. It creates our test form. ```json { "tagName": "FORM", "attributes": { "action": "#", "method": "POST", "name": "form" }, "events": { "focusin": "log", "submit": "parse-submission" }, "children": [ { "tagName": "TEXTAREA", "attributes": { "data-type": "json", "name": "json" }, "children": [ "{\"tagName\":\"DIV\",\"attributes\":{\"class\":\"sb-test\",\"data-type\":\"string\",\"id\":\"sb-test-id\"},\"events\":{\"click\":\"log\"},\"children\":[{\"tagName\":\"STRONG\",\"children\":[\"Bob's yer uncle.\"]}]}" ] }, { "tagName": "BUTTON", "attributes": { "aria-label": "Run this baby", "type": "submit" }, "children": [ "Run" ] } ] } ``` We import this JSON and pass it to `jsToDom` in our `index.js` file below. 
```js // index.js import jsToDom from "./modules/js-to-dom.js" import formJson from "./modules/form-json.js" import outJson from "./modules/out-json.js" export async function injectForm () { const main = document.querySelector("main") main.appendChild( await jsToDom(outJson), ) main.appendChild( await jsToDom(formJson), ) } globalThis.addEventListener("DOMContentLoaded", injectForm) ``` Pretty self-explanatory. Now, how do we handle our events? Simple. We take our `events` object from the passed JSON/JS. Then we loop through the keys, which are the event types. The values are the **names** of the handler functions. We add an event listener for each type and assign it the default function from the module with that name. For example, our output `div` gets a `click` handler called “log”. This function is in `./modules/log.js`. We import the handler: `(await import("./log.js")).default`. We assign it to `handler`. Then we add it like this: `addEventListener("click", handler)`. Drop dead simple. And we only import the modules that we need. See the actual `log` handler below. ```js // ./modules/log.js export default function log ({ target }) { console.log( target?.tagName, target?.innerText || target?.value ) } ``` Kinda dumb, but it is merely an example. We add this `log` function as a `click` handler on our output `strong` element and as a `focusin` handler on our `form`. The `submit` handler for the form is a bit more exciting: ```js // ./modules/parse-submission.js import domToJs from "./dom-to-js.js" import jsToDom from "./js-to-dom.js" export default async function (event) { event.preventDefault() const form = event.target const textarea = form.querySelector("textarea") const out = document.querySelector(".out") const js = JSON.parse(textarea.value) out.appendChild(await jsToDom(js)) const newForm = domToJs(form) document.querySelector("main").appendChild( await jsToDom(newForm) ) } ``` We canʼt store functions in JSON, so we put them into modules. 
Then we will import them as needed when we rehydrate the DOM elements. We attach `parseSubmission` as the `submit` handler for our `form` element. ## What can we do with this? Ooo. All sorts of cool things. ### Easy element creation Instead of messing around with `createElement`, `setAttribute`, etc., we can use `jsToDom`. We pass it a JS or JSON object representing the DOM elements we want. We create handler functions ahead of time in modules. When we need an event listener, `jsToDom` imports it _just in time_ and assigns it to the element. This works like Reactʼs `createElement` function. Or a library such as [hyperscript](https://github.com/hyperhype/hyperscript). Sure, weʼd prefer <abbr title="JavaScript XML">JSX</abbr> for its much reduced cognitive load. But our alternative here is the DOM methods such as `createElement`. Unless we want to load up a bulky library such as React, that is. We donʼt. Suppose I wanted to inject a password field with a show/hide button. First, we create a toggle handler such as this: ```js // ./modules/toggle-visibility.js export default function (event) { const button = event.target const div = button.closest(".form-field") const input = div?.querySelector("input") if (input) { if (input.type === "password") { input.type = "text" button.innerText = "hide" button.setAttribute("aria-label", "Hide password.") return } input.type = "password" button.innerText = "show" button.setAttribute("aria-label", "Show password.") } } ``` Now I can call `jsToDom` with the following JSON and it will create my password input. Try pasting it into the [example form](http://sitebender.io/examples/dom-from-json/index.html). Remember that the click event handler is already available at `./modules/toggle-visibility.js`. 
```json { "tagName": "DIV", "attributes": { "class": "form-field" }, "events": { "click": "log" }, "children": [ { "tagName": "INPUT", "attributes": { "type": "password" } }, { "tagName": "BUTTON", "attributes": { "aria-label": "Show password.", "class": "xx-toggle-password", "type": "button" }, "events": { "click": "toggle-visibility" }, "children": ["show"] } ] } ``` We hope that it is straightforward how all this works. ### Easy element persistence What if I want to save the current UI state? We can use `domToJs` to do just that. We took the example page (linked above) and passed the `html` element to `domToJs`. Then we stringified it. Now we have preserved both the `head` and `body` elements. So we can take a blank HTML document like this: ```html <html lang="en"> <head> </head> <body> <script src="./modules/make-page.js" type="module"></script> </body> </html> ``` And we can use our persisted JSON `head` and `body` elements to create the page on the fly. We can store the JSON in a database or load it from an API. Below is the code minus the JSON. Or view the [actual code.](http://sitebender.io/examples/dom-from-json/modules/make-page.js) Then [see it in action.](https://sitebender.io/examples/dom-from-json/make-page.html) View the source on that page to see what we mean. ```js import jsToDom from "./js-to-dom.js" globalThis.addEventListener("DOMContentLoaded", async () => { const h = document.documentElement.querySelector("head") const b = document.documentElement.querySelector("body") h.replaceWith(await jsToDom(/* head JSON here */)) b.replaceWith(await jsToDom(/* body JSON here */)) }) ``` A whole page generated from simple JSON! ### Data-driven user interfaces One idea that we have been promoting for several years now is that of a data-driven interface. Weʼve got an article in the pipeline on that coming soon. But we can give a quick overview here. The idea is quite simple. The easiest example to visualize is automated form rendering. 
**Forms collect data.** Typically, that data is then persisted in a database. **Databases have schemas.** That means that the database already knows the types it expects. If we have defined our schema well, then it knows those types precisely. **Particular data use particular interface widgets.** For example, an email address would use an `input` element of type “email”. An integer might use an input element of type “number” with its `step` attribute set to “1”. An enum might use a `select` or `radio` buttons or a `checkbox` group. From this schema we should be able to determine how to both **display and validate** that data. After all, there is only a small number of widgets. So what if the response to our database query for the data included the schema? Some GraphQL queries already make this possible. From that schema, we can generate validation functions. _And_ we know which widgets to display. **So we can generate our form and our validators automatically.** Best of all, we have a single source of truth: our database schema. In a coming article, we will explain how easy it is to achieve this. (There is an advanced version of this that uses a [SHACL](https://en.wikipedia.org/wiki/SHACL) ontology and a [triple store](https://en.wikipedia.org/wiki/Triplestore) such as [Fuseki](https://jena.apache.org/documentation/fuseki2/). We then use [SPARQL](https://en.wikipedia.org/wiki/SPARQL) queries to generate the HTML, CSS, and JS straight from the database. Wee ha! Weʼll get to that sometime soon, too. Promise.) If anyone requests it, weʼll give a detailed explanation of the above code in a separate article. This one is long enough.
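To make the schema-to-widget idea concrete, here is a tiny hypothetical sketch of the mapping step. The field names and the handful of types are invented for illustration; a real implementation would cover the full schema and generate the validators too.

```javascript
// Hypothetical sketch: derive a jsToDom-ready field object from one
// (invented) schema field. Types and names are illustrative only.
export default function fieldFromSchema ({ name, type }) {
  // Map a handful of schema types to the right input widget.
  const widgets = {
    email: { tagName: "INPUT", attributes: { type: "email", name } },
    integer: { tagName: "INPUT", attributes: { type: "number", step: "1", name } },
    string: { tagName: "INPUT", attributes: { type: "text", name } },
  }

  // Fall back to a plain text input for types we don't know about.
  return widgets[type] ?? widgets.string
}
```

The returned object can be handed straight to `jsToDom` and appended to a form.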
chasm
1,727,475
Git Branch Naming Strategies
Good branch naming in Git is essential for effective project management. This post explores the types...
0
2024-01-15T01:11:36
https://dev.to/marmariadev/git-branch-naming-strategies-enhancing-software-project-management-5a6a
webdev, programming, git, github
Good branch naming in Git is essential for effective project management. This post explores the types of branches in Git and offers tips for naming them effectively. ## 1. Main Branches - `main` or `master`: This is the main branch where the source code reflects the current production state. - `develop` or `dev`: In some workflows, especially in Git Flow, a separate development branch is used to accumulate features before merging them into main. ## 2. Feature Branches These branches are used to develop new features or changes in the code. They are usually named in a way that reflects the feature or work being done. For example: - **feature/login-authentication** - **feature/new-ui-layout** - **feature/add-user-profile** ## 3. Bug Fix Branches For error correction, branches can follow a pattern similar to feature branches, while highlighting that they focus on corrections. For example: - **bugfix/login-error** - **bugfix/missing-icons** - **bugfix/404-page-not-found** ## 4. Release Branches Used to prepare releases, these branches are often named according to the version of the release. For example: - **release/v1.2.0** - **release/v1.2.1** ## 5. Maintenance or Patch Branches (Hotfix Branches) For urgent changes that need to be deployed quickly in production, maintenance or patch branches are used: - **hotfix/critical-login-issue** - **hotfix/payment-processing-error** ## 6. Personal or Experimental Branches Developers sometimes create branches for experimental work or testing, and these can carry more personal or descriptive names of the experiment: - **experiment/new-framework-test** - **john/prototype-new-feature** ## Good Practices - Keep branch names short but descriptive. - Use hyphens to separate words. - Avoid special characters or spaces. - Include a task or ticket identifier if a tracking system is used (for example, feature/JIRA-123-add-login). 
Properly naming branches in Git is key to the clarity and efficiency of the project. These tips will help your team stay organized and collaborate more effectively.
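To see the conventions above in action, here is a quick shell session that creates a throwaway repository and a few conventionally named branches (the JIRA ticket ID and the demo identity are made up):

```shell
# Create a throwaway repository with an initial commit.
set -e
repo="$(mktemp -d)"
git -C "$repo" init -q -b main
git -C "$repo" -c user.name=demo -c user.email=demo@example.com \
  commit -q --allow-empty -m "initial commit"

# Feature branch carrying a (made-up) ticket identifier
git -C "$repo" switch -q -c feature/JIRA-123-add-login main
# Bug fix and hotfix branches, both cut from main
git -C "$repo" switch -q -c bugfix/login-error main
git -C "$repo" switch -q -c hotfix/critical-login-issue main

# List everything we created
git -C "$repo" branch --list
```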
marmariadev
1,727,794
Demystifying Regular Expressions: A Beginner's Guide 🧩
Regular expressions, often referred to as regex or regexp, are powerful tools for string manipulation and pattern matching in various programming languages. In this article, we'll unravel the basics of regex, providing a foundation for understanding and utilizing this essential skill in your coding endeavors.
0
2024-02-19T18:00:00
https://dev.to/amatisse/demystifying-regular-expressions-a-beginners-guide-5a06
tutorial, regex, regular, expression
--- title: Demystifying Regular Expressions: A Beginner's Guide 🧩 published: true description: Regular expressions, often referred to as regex or regexp, are powerful tools for string manipulation and pattern matching in various programming languages. In this article, we'll unravel the basics of regex, providing a foundation for understanding and utilizing this essential skill in your coding endeavors. tags: tutorial, regex, regular, expression cover_image: https://dev-to-uploads.s3.amazonaws.com/uploads/articles/t6t6lnkuajlguyallvuh.png # Use a ratio of 100:42 for best results. published_at: 2024-02-19 18:00 +0000 --- Regular expressions, often referred to as regex or regexp, are powerful tools for string manipulation and pattern matching in various programming languages. In this article, we'll unravel the basics of regex, providing a foundation for understanding and utilizing this essential skill in your coding endeavors. ## 1. **What is a Regular Expression?** A regular expression is a sequence of characters that forms a search pattern. It is used for pattern matching within strings and is a versatile tool for text processing. ## 2. **Basic Syntax** - **Literal Characters:** Match characters exactly as they appear. - Example: `/hello/` matches the sequence "hello" in a string. - **Metacharacters:** Have special meanings and are the building blocks of regex. - Example: `/^abc/` matches "abc" at the beginning of a string. ## 3. **Character Classes** - **Square Brackets:** Define a character class to match any one of the characters within the brackets. - Example: `/[aeiou]/` matches any vowel. - **Ranges:** Specify a range of characters within square brackets. - Example: `/[0-9]/` matches any digit. ## 4. **Quantifiers** - **`*`:** Matches zero or more occurrences of the preceding character or group. - Example: `/ab*c/` matches "ac", "abc", "abbc", and so on. - **`+`:** Matches one or more occurrences of the preceding character or group. 
- Example: `/ab+c/` matches "abc", "abbc", and so on. - **`?`:** Matches zero or one occurrence of the preceding character or group. - Example: `/ab?c/` matches "ac" and "abc". ## 5. **Anchors** - **`^`:** Matches the beginning of a string. - Example: `/^start/` matches "start" only at the beginning of a string. - **`$`:** Matches the end of a string. - Example: `/end$/` matches "end" only at the end of a string. ## 6. **Escape Characters** - **`\`:** Escapes a metacharacter, treating it as a literal character. - Example: `/a\*b/` matches "a*b". ## 7. **Wildcards** - **`.` (dot):** Matches any single character except for a newline. - Example: `/a.c/` matches "abc", "adc", and so on. ## 8. **Modifiers** - **`i`:** Performs case-insensitive matching. - Example: `/abc/i` matches "ABC", "aBc", and so on. ## 9. **Common Use Cases** - **Validation:** Use regex to validate input, such as email addresses or phone numbers. - **Search and Replace:** Quickly find and replace text patterns within a document. - **Data Extraction:** Extract specific information from strings, like extracting dates or numbers. ## Conclusion: Unlocking the Regex Puzzle 🎉 Regular expressions may seem like a puzzle at first, but mastering the basics opens the door to a world of powerful text manipulation. Practice, experiment, and gradually incorporate regex into your coding toolkit. As you become more familiar with its syntax and capabilities, regex will become an invaluable asset in your coding journey. Happy pattern matching! 🚀✨
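As a parting example, here are the three common use cases above (validation, search and replace, extraction) in JavaScript. The email pattern is deliberately simplified for illustration and is not a complete email validator.

```javascript
// Validation: a deliberately simple email check.
const looksLikeEmail = (s) => /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(s)

// Search and replace: collapse runs of whitespace to single spaces.
const normalize = (s) => s.replace(/\s+/g, " ").trim()

// Extraction: pull all digit sequences out of a string.
const numbers = (s) => s.match(/[0-9]+/g) ?? []

console.log(looksLikeEmail("dev@example.com")) // true
console.log(normalize("too   many    spaces")) // "too many spaces"
console.log(numbers("order 42, qty 7"))        // ["42", "7"]
```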
amatisse
1,727,824
What's Gateway API and how to deploy on AWS?
Co-author: @coangha21 Gateway API is recently standing out to be a promising project that will...
0
2024-01-15T09:15:44
https://dev.to/haintkit/whats-gateway-api-and-how-to-deploy-on-aws-3ma1
gatewayapi, kubernetes, aws, vpclattice
Co-author: @coangha21 Gateway API has recently been standing out as a promising project that will change the way we manage traffic in Kubernetes. It is on track to become the next generation of APIs used for Ingress, Load Balancing, and Service Mesh functionalities. In today's blog, we will discuss what Gateway API is and what it offers, and finally we will get our hands dirty to gain a better understanding of the service. Let’s get started. **Gateway API overview** The Gateway API is a recently graduated (version 1.0 in October 2023) official Kubernetes project that aims to revolutionize L4 and L7 traffic routing within Kubernetes. The goal is to simplify and standardize the way ingress and load balancing are configured and managed, addressing limitations of existing solutions like the Ingress and Service APIs. ![Gateway API logo](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3cv3xwtuanmaiexbc2wd.png) The Gateway API logo above speaks for itself: it illustrates the dual purpose of this API, enabling both North-South (Ingress) and East-West (Mesh) traffic to share the same routing configuration. Now, let’s take a look at some of the key features that Gateway API offers: _**1. Extensible and Role-oriented:**_ - Unlike the single-purpose Ingress controller, Gateway API is designed with flexibility and specialization in mind. - It offers various resource types like Gateway, GatewayClass, HTTPRoute, GRPCRoute, and Policy that work together to define specific roles and capabilities for different networking tasks. - This allows for building sophisticated networking configurations with greater control and clarity. ![Gateway API is aiming for RBAC](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/caew00ny3k3w0j7dpwpn.png) **_2. Advanced Traffic Routing:_** - Gateway API goes beyond simple load balancing and provides powerful routing capabilities based on HTTP routing rules, path matching, headers, and even gRPC service names. 
- This facilitates setting up complex traffic destinations, traffic splitting, and A/B testing scenarios. ![Gateway API supports advanced routing](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xr7c0k8fvvrqztt62ffv.png) **_3. Protocol-Aware and Scalable:_** - The API supports both L4 (TCP/UDP) and L7 (HTTP/gRPC) protocols, offering a unified platform for all your networking needs. - Additionally, it's designed for scalability and performance to handle large workloads and complex network topologies. **_4. Community-Driven and Evolving:_** - Gateway API is a community-driven project under the Kubernetes SIG Network, actively maintained and constantly evolving. - New features and capabilities are being added regularly, making it a future-proof solution for your Kubernetes networking needs. ![Kubernetes SIGs](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/x7j0gexuguc5yykzcu6l.png) From my point of view, Gateway API represents a significant leap forward in Kubernetes service networking. Its dynamic capabilities, flexible routing, and robust policy tools will empower developers and operators to manage external traffic with greater control, precision, and agility. If you have time, try it yourself; it will be worth your time. **What are the differences between Gateway API and Ingress?** While both Gateway API and Ingress manage traffic routing in Kubernetes, there are several key differences between the Gateway API and the traditional Ingress API. Let’s go through some of them: **Functionality:** - **Ingress**: Primarily focused on exposing HTTP applications with a straightforward, declarative syntax. - **Gateway API**: A more general API for proxying traffic, supporting various protocols like HTTP and gRPC, and even different backend targets like buckets or functions. **Flexibility:** - **Ingress**: Limited configuration options with heavy reliance on annotations for advanced features. 
- **Gateway API**: More fine-grained control with dedicated objects for defining routes, listeners, and backends, promoting cleaner configuration and extensibility. **Protocol Support:** - **Ingress**: Only supports HTTP. - **Gateway API**: Supports multiple protocols beyond HTTP, like gRPC and WebSockets. ![Ingress support gRPC protocol](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wubfdv1f8pve1jxz1bxv.png) **Scalability:** - **Ingress:** Can become complex to scale, often requiring external load balancers or intricate configurations. - **Gateway API:** Designed with scalability in mind, easily integrating with various data plane implementations. **Security:** - **Ingress:** Limited built-in security features, primarily relying on annotations for authentication and authorization. - **Gateway API:** Supports extensions for implementing enhanced security features like authentication and authorization. ![Ingress vs Gateway API](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ynb30534qo87901qj5ck.png) **Other Differences:** - **Portability:** Gateway API configurations are more portable across data planes due to its separation of concerns. - **Management:** Gateway API allows for better cluster operator control with dedicated objects for managing various components. - **Maturity:** Ingress is a stable, long-established GA (General Availability) API, while Gateway API only reached GA (v1.0) in late 2023 and is still evolving rapidly. In summary, Ingress is a basic but mature solution for exposing simple HTTP applications in Kubernetes. Gateway API is a more powerful and flexible API that caters to diverse use cases, supports broader protocols, and scales more efficiently. It offers greater control and extensibility at the cost of slightly increased complexity. “Which one should I choose?” It depends on your use case: - For Ingress: If you need a simple solution for exposing an HTTP application and don't require advanced features. 
- For Gateway API: If you need flexibility for various protocols or backends, or require extensibility for security or advanced routing features. **Please keep in mind that Gateway API is not meant to replace Ingress entirely, but rather to provide a more comprehensive and future-proof option for complex traffic routing needs in Kubernetes.** **How to deploy Gateway API on AWS EKS** Finally, this is probably the part you have been waiting for. Let’s deploy a Gateway API on our AWS EKS cluster. I will only show the high-level steps that need to be done. For manifest deployment, please refer to [this repository](https://github.com/haicasgox/demo-gatewayapi.git). Architecture demo: we have 02 services (user and post). We use the picture of VPC Lattice and Gateway API below for a mapping overview. ![[VPC Lattice and Gateway API](https://www.gateway-api-controller.eks.aws.dev/concepts/overview/)](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/jf7xy8nlws4625zjnv2o.png) You can see in the picture that the Gateway API is composed of three main components: GatewayClass (Controller), Gateway, and HTTPRoute/GRPCRoute, each of which maps to VPC Lattice objects. **How it works** The AWS Gateway API controller (GatewayClass) integrates VPC Lattice with the Kubernetes Gateway API. When installed in your cluster, the controller watches for the creation of Gateway API resources such as gateways and routes, and provisions the corresponding Amazon VPC Lattice objects. This enables users to configure VPC Lattice Service Networks using Kubernetes APIs, without needing to write custom code or manage sidecar proxies. The AWS Gateway API Controller is an open-source project and is fully supported by the AWS team. Now let’s go step by step to set this up on our EKS cluster. **Step by step guide:** - **Create GatewayClass:** First we need to create a GatewayClass (Gateway API controller); we will use the AWS Gateway API controller. 
Before you create the GatewayClass, you need to set up the following two things: - Set up security groups so that all Pods communicating with VPC Lattice allow traffic from the VPC Lattice managed prefix lists. - Create IRSA for the Gateway API Controller. For those steps, please refer to [this link](https://www.gateway-api-controller.eks.aws.dev/guides/deploy/#using-eks-cluster). After all of that is done, we will create our first GatewayClass. You can find all manifests used in this demo [here](https://github.com/haicasgox/demo-gatewayapi.git). The outcome should look like this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fbiy2or64xxnpfjwwj0j.png) - **Service networks (Gateway):** Next, we will create a Gateway. A Gateway describes how traffic can be translated to Services within the cluster (through a Load Balancer, in-cluster proxy, external hardware, etc.). In AWS, a Gateway points to a [VPC Lattice service network](https://docs.aws.amazon.com/vpc-lattice/latest/ug/service-networks.html). Services associated with the service network can be authorized for discovery, connectivity, accessibility, and observability. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zwnughlmvw3xcbvo8rqx.png) - **Services and HTTPRoute:** Finally, we will define Services and Routes using the K8s objects Service and HTTPRoute to start routing traffic between services. Service: User ![Service: User](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hmmkdrp16zf0fba5skyp.png) Service: Post ![Service: Post](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/a64plcbhouzde7xp981x.png) Target groups for 02 services: ![Target groups for 02 services post and user](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/j2so8kvawrwat8yoyngy.png) - **Result:** Now let’s check if service “post” can call service “user” via its VPC Lattice domain name, and vice versa. 
![Service post calls service user via DNS provided by AWS Lattice](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4wsy2nz57wrczsid1t23.png) ![Service user calls service post via DNS provided by AWS Lattice](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/b079a8gcy5cobf4xxmsn.png) It worked! **Conclusion** Even though the Gateway API is new and still maturing, it is already showing a lot of potential. With more features and improvements coming, we can expect it to become the future of APIs used for Ingress, Load Balancing, and Service Mesh functionality. References: 1. [Introduction - Kubernetes Gateway API](https://gateway-api.sigs.k8s.io/) 2. [AWS Gateway API Controller](https://www.gateway-api-controller.eks.aws.dev/)
haintkit
1,728,066
Navigating the Seas: A Deep Dive into the World of Ships
Ships have been pivotal throughout history, shaping the course of nations and connecting cultures. In...
0
2024-01-15T11:59:51
https://dev.to/ukazer21/navigating-the-seas-a-deep-dive-into-the-world-of-ships-2p98
webdev, beginners, tutorial, react
Ships have been pivotal throughout history, shaping the course of nations and connecting cultures. In this article, we embark on a journey through the historical significance of ships and the ever-evolving landscape of shipbuilding technology. **_[read more](https://aitechai.blogspot.com/2024/01/navigating-seas-deep-dive-into-world-of.html)_**
ukazer21
1,728,398
Digital Foundations: Exploring the Criteria for Lucknow's Best Web Development Company
In the digital age, a robust online presence is the cornerstone of success for businesses. Lucknow,...
0
2024-01-15T17:49:35
https://dev.to/csalabs/digital-foundations-exploring-the-criteria-for-lucknows-best-web-development-company-54d
webdevlopmentcompanyinlucknow, webdev, webdesigning
In the digital age, a robust online presence is the cornerstone of success for businesses. Lucknow, with its burgeoning tech scene, hosts a plethora of web development companies. Let's delve into the essential criteria that define the **[best web development company in Lucknow](https://www.deviantart.com/csalabs/art/Best-Web-Development-Company-in-Lucknow-csalabs-961699574)**, shaping the digital foundations for businesses. ## Unwavering Commitment to Excellence The best web development companies in Lucknow set themselves apart by their unwavering commitment to excellence. From the initial concept to the final execution, these companies prioritize quality in every aspect of web development. Their dedication to delivering top-tier websites ensures client satisfaction and long-term success. ## Tailored Solutions for Diverse Needs Versatility is a hallmark of the best web development companies. Recognizing that each business is unique, they specialize in providing tailored solutions that cater to diverse needs. Whether it's a sleek corporate website, a dynamic e-commerce platform, or a cutting-edge web application, these companies possess the expertise to bring any vision to life. ### Collaboration as a Cornerstone Successful web development is a collaborative effort. The best companies in Lucknow understand the value of a strong client-developer partnership. Through open communication and involvement at every stage, they ensure that the end product not only meets but exceeds client expectations. Collaboration is the cornerstone upon which digital success is built. **Embracing Technological Advancements** Staying abreast of technological advancements is a defining trait of the best web development companies. Lucknow's top players adopt and integrate the latest tools, frameworks, and methodologies into their projects. 
This commitment to staying on the cutting edge ensures that the websites they develop are not only contemporary but also equipped for future advancements. ## Testimonials and Reputation Client testimonials are the digital footprints that attest to a company's competence. The best web development companies in Lucknow boast a trail of satisfied clients who have witnessed their projects come to life. A stellar reputation, backed by positive testimonials, speaks volumes about a company's ability to deliver on promises and meet client needs. ## Conclusion In the vibrant tech landscape of Lucknow, the best web development company is one that lays strong digital foundations for businesses. By prioritizing excellence, offering tailored solutions, fostering collaboration, embracing technology, and earning a stellar reputation, these companies pave the way for their clients' online success. As businesses in Lucknow embark on their digital journey, understanding and applying these criteria can lead them to the web development partner that will shape their digital destiny.
csalabs
1,728,167
Docker Security Best Practices: Safeguarding Containers with Privileges, Capabilities, and Resource Management
Docker has revolutionized the way we deploy and manage applications, providing a lightweight and...
0
2024-06-02T09:06:50
https://dev.to/ajeetraina/docker-security-best-practices-safeguarding-containers-with-privileges-capabilities-and-resource-management-2dk1
Docker has revolutionized the way we deploy and manage applications, providing a lightweight and portable solution for containerization. However, with great power comes great responsibility, especially when it comes to security. In this blog post, we will explore key Docker security best practices related to privileges, capabilities, and shared resources. ## 1. Forbid New Privileges Docker containers, by default, inherit the privileges of the host system. To enhance security, it's recommended to forbid new privileges within your containers. This prevents processes in the container from gaining additional privileges, which could potentially be exploited by malicious actors. You can achieve this by passing the `no-new-privileges` security option to the Docker run command: ``` docker run --security-opt=no-new-privileges ... ``` This setting ensures that the container processes cannot obtain additional privileges during runtime. Here's a practical example to illustrate the "no new privileges" concept with an Ubuntu container: ## Scenario You want to run a web server application inside a Docker container for security reasons. ## Without the restriction: - You could create a privileged Ubuntu container using `docker run -it --privileged ubuntu bash`. - This would give the programs inside the container almost full control over the host system, including sensitive operations like accessing files or modifying network settings. While convenient, this poses security risks. ## With the restriction: - You'd instead use `docker run --security-opt=no-new-privileges -it ubuntu bash`. - This creates a container where processes cannot gain new privileges beyond those they start with. - The web server would run as a non-root user within the container, limiting its ability to perform potentially harmful actions. 
## Benefits of the restriction: - Reduced attack surface: If a malicious actor exploits a vulnerability in the web server, they'd have limited access to the host system, making it harder to cause extensive damage. - Enhanced control: You can better manage which specific privileges the web server needs, reducing the risk of accidental misconfiguration or misuse. - Compliance: Certain security standards or regulations might mandate the use of restricted privileges in containerized environments. By enforcing "no new privileges," you create a more secure and controlled environment for running applications within Docker containers. ## 2. Define Fine-Grained Capabilities Docker containers should only have the minimum set of capabilities required for their functionality. Fine-grained capabilities allow you to specify exactly what a container can or cannot do, reducing the attack surface. Container capabilities are controlled at run time (or in your Docker Compose file): drop all capabilities and then add back only those necessary for your application: ``` # Drop all capabilities, then add back only what the app needs docker run --cap-drop=ALL --cap-add=NET_BIND_SERVICE myapp # If the binary itself needs a file capability, set it in the Dockerfile: # RUN setcap cap_net_bind_service+ep /usr/bin/myapp ``` This approach ensures that your container runs with the least privilege necessary to perform its tasks. ## 3. Drop All Default Capabilities By default, Docker grants containers a default set of capabilities. To enhance security, drop all default capabilities explicitly and add only the ones required. ``` # Drop all default capabilities docker run --cap-drop=ALL myapp ``` This prevents unnecessary privileges from being available to containerized processes. ## 4. Avoid Sharing Sensitive Filesystem Paths Sharing sensitive filesystem paths between the host and containers can expose critical information to potential attackers. Avoid mounting sensitive directories, especially those containing system binaries or configurations, into your containers. 
Instead, use Docker volumes to manage data persistence securely. This ensures that your containers only have access to the data they require without exposing critical system paths. ``` docker run -v /path/on/host:/path/in/container ... ``` ## 5. Use Control Groups to Limit Access to Resources Control Groups (cgroups) allow you to manage and limit the resources that containers can access. Utilize cgroups to restrict CPU, memory, and other resource usage for improved isolation and performance. Specify resource limits when running your containers: ``` docker run --cpu-shares 512 --memory 512m ... ``` This ensures that containers operate within defined resource boundaries, preventing resource exhaustion attacks. ## Conclusion Implementing these Docker security best practices helps mitigate potential security risks associated with privileges, capabilities, and shared resources. By following these guidelines, you can create more secure and robust containerized environments, safeguarding your applications and data from potential threats. Remember that security is an ongoing process, and staying informed about the latest best practices is crucial for maintaining a strong defense against evolving security threats in the containerized ecosystem.
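As a compact recap, here is how practices 1 through 5 might combine in a single Compose service definition. The service name, image, volume, and capability below are illustrative only, and the fields assume a recent Docker Compose.

```yaml
services:
  myapp:
    image: myapp:latest            # illustrative image name
    security_opt:
      - no-new-privileges:true     # practice 1: forbid new privileges
    cap_drop:
      - ALL                        # practice 3: drop all default capabilities
    cap_add:
      - NET_BIND_SERVICE           # practice 2: add back only what is needed
    volumes:
      - app-data:/var/lib/myapp    # practice 4: named volume, no sensitive host paths
    cpus: 0.5                      # practice 5: resource limits via cgroups
    mem_limit: 512m
volumes:
  app-data:
```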
ajeetraina
1,728,229
Day 1 of 90 days of learning to become a better dev.
I am a freelance front-end web developer with little professional experience, since finding...
0
2024-01-15T15:12:03
https://dev.to/markhov234/1-of-90-days-to-learning-to-become-a-better-dev-57gb
I am a freelance front-end web developer with little professional experience, since finding work as a junior can be challenging. As I try to reach my dream of becoming a web developer or a full-stack developer, I thought learning DevOps could help me personally and professionally.

**What is DevOps**: DevOps is a set of practices that help reach the goal of this movement: reducing the time between the ideation phase of a product and its release in production to the end user, whether that is an internal team or a customer.

_https://www.youtube.com/watch?v=Xrgk023l4lI_

I will be using this website for my journey :)

_https://www.90daysofdevops.com/2022/day01/_
markhov234
1,728,262
API Guide to Setup OTP SMS Verification
Requirements If you want to send OTP SMS or enable SMS verification, you need to Register...
0
2024-01-15T15:52:37
https://dev.to/alinaj/api-guide-to-setup-otp-sms-verification-4ck2
otp, sms, smsverification
## Requirements

If you want to send OTP SMS or enable [SMS verification][1], you need to:

- Register and [create an account with Verify Now][2] at console.messagecentral.com/signUp
- Have a valid balance in your account to send OTPs. You can use the test credits during integration to verify our services or test our [SMS verification services for free][3].

REST API base URL: all platform API endpoints below should be prefixed with the following URL: https://cpaas.messagecentral.com

**These are the steps to successfully send an OTP to a mobile number:**

1. Generate a token
2. Make a POST request to our 'send OTP' API and pass the token in the header
3. Make a GET request to our 'validate OTP' API and pass the token in the header

## How to generate a token?

1. Make a GET request to auth/v1/authentication/token
2. Add the 'customerId' field. This is a unique ID for every customer.
3. Add the 'country' field. This is the country ISO code of the registered user.
4. Add the 'email' field. This is the email address of the registered user. (It's optional.)
5. Add the 'key' field. This is the Base64-encoded password of the registered user.
6. Add the 'scope' field. Set it to 'NEW' to request a new token for the registered user.

**Sample response:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/e2hi9ub8ngea7k2f60qd.png)

## How to send an OTP to a mobile number?

1. Make a POST request to /verification/v2/verification/send
2. Add the 'customerId' field. This is a unique ID for every customer.
3. Add the 'countryCode' field. This is the country code of the mobile number that will receive the OTP.
4. Add the 'otpLength' field. This is the OTP length. It's an optional field; by default the value is 4, and you can configure it up to 8 digits.
5. Add the 'mobileNumber' field. This is the number that receives the OTP.
6. Add the 'flowType' field. Set this to 'SMS'.
7. Pass the 'authToken' generated by the token API as a header in the send OTP request.
**Sample response:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fxyu8srxgcdwdms83ns0.png)

The response contains a verificationId that is passed as a parameter to the validate OTP API to validate the request.

## How to validate an OTP?

1. Make a GET request to /verification/v2/verification/validateOtp.
2. Add the 'customerId' field. This is a unique ID for every customer.
3. Add the 'code' field. This is the OTP code sent to the end user.
4. Add the 'verificationId' field. This is the verification ID generated by the send OTP API.
5. Pass the 'authToken' generated by the token API as a header in the validate OTP request.

**Sample response:**

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/phaghz5k7y9vwefjb7q6.png)

See our detailed [SMS verification API documentation][4] page for more details.

[1]: https://www.messagecentral.com/product/verify-now/overview
[2]: https://console.messagecentral.com/signUp
[3]: https://www.messagecentral.com/product/verify-now/free-sms-verification
[4]: https://www.messagecentral.com/product/verify-now/api
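To tie the three steps together, here is a hedged Python sketch that only *builds* each request from the fields documented above (the helper names are my own invention, no network call is made, and exact parameter handling should be checked against the official API documentation before use):

```python
BASE_URL = "https://cpaas.messagecentral.com"

def token_request(customer_id, country, key, email=None):
    """GET auth/v1/authentication/token — returns (url, query_params)."""
    params = {"customerId": customer_id, "country": country,
              "key": key, "scope": "NEW"}
    if email:  # email is optional per the docs above
        params["email"] = email
    return f"{BASE_URL}/auth/v1/authentication/token", params

def send_otp_request(token, customer_id, country_code, mobile, otp_length=4):
    """POST /verification/v2/verification/send — returns (url, params, headers)."""
    params = {"customerId": customer_id, "countryCode": country_code,
              "mobileNumber": mobile, "otpLength": otp_length, "flowType": "SMS"}
    return (f"{BASE_URL}/verification/v2/verification/send",
            params, {"authToken": token})

def validate_otp_request(token, customer_id, verification_id, code):
    """GET /verification/v2/verification/validateOtp — returns (url, params, headers)."""
    params = {"customerId": customer_id, "verificationId": verification_id,
              "code": code}
    return (f"{BASE_URL}/verification/v2/verification/validateOtp",
            params, {"authToken": token})
```

Each helper mirrors one step of the flow: generate a token, send the OTP with the token in the `authToken` header, then validate with the `verificationId` returned by the send call.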
alinaj
1,728,357
6 Amazing Headers for Freelancer/Agency Website
Design link : https://www.figma.com/community/file/1328662399939962106 Design link :...
25,911
2024-01-15T16:58:44
https://www.figma.com/community/file/1328662399939962106
webdev, beginners, design, react
![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hc0sbe9s7e4jl557sate.png) 1. Design link : https://www.figma.com/community/file/1328662399939962106 --- ![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zyyx010ndkjzb9x6lbze.png) 2. Design link : https://www.figma.com/community/file/1328662399939962106 --- ![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/n47jokivm8oijgk51zfe.png) 3. Design link : https://www.figma.com/community/file/1328662399939962106 --- ![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/6xp8ezxb32ad6zmli115.png) 4. Design link : https://www.figma.com/community/file/1328662399939962106 --- ![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xlcegsqfaccijqorjtmb.png) 5. Design link : https://www.figma.com/community/file/1328662399939962106 --- ![web template design](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/outdl83lddaxqen891uo.png) 6. Design link : https://www.figma.com/community/file/1328662399939962106
codingcss
1,798,357
Tuning system performance
System Tuning: system administrators can optimize system performance by adjusting various...
0
2024-03-23T04:08:11
https://dev.to/mhmmdrafii/tuning-system-performance-2o31
ramadhanbersamaredhat
**System Tuning**

System administrators can optimize system performance by adjusting various device settings based on the workloads of different use cases. The tuned daemon applies tuning adjustments both statically and dynamically, using tuning profiles that reflect the requirements of particular workloads.

**Configuring Static Tuning**

The tuned daemon applies system settings when the service starts or when a new tuning profile is selected. Static tuning configures kernel parameters that are predefined in profiles and applied at runtime. With static tuning, kernel parameters are set for overall performance expectations and are not adjusted as activity levels change.

**Configuring Dynamic Tuning**

With dynamic tuning, the tuned daemon monitors system activity and adjusts settings depending on changes in runtime behavior. Dynamic tuning continuously adjusts the tuning to match the current workload, starting from the initial settings declared in the selected tuning profile.

**Installing and Enabling tuned**

A minimal Red Hat Enterprise Linux 8 installation includes and enables the tuned package by default. To install and enable the package manually:

```
[root@host ~]# yum install tuned
[root@host ~]# systemctl enable --now tuned
Created symlink /etc/systemd/system/multi-user.target.wants/tuned.service → /usr/lib/systemd/system/tuned.service.
```

**Selecting a Tuning Profile**

The tuned application provides profiles divided into the following categories:

- Power-saving profiles
- Performance-boosting profiles

**The performance-boosting profiles include profiles that focus on the following aspects:**

- Low latency for storage and network
- High throughput for storage and network
- Virtual machine performance
- Virtualization host performance

**Managing profiles from the command line**

System administrators identify the currently active tuning profile with **tuned-adm active**:

```
[root@host ~]# tuned-adm active
Current active profile: virtual-guest
```

The **tuned-adm list** command lists all available tuning profiles, including both built-in profiles and custom tuning profiles created by system administrators.

```
[root@host ~]# tuned-adm list
Available profiles:
- balanced
- desktop
- latency-performance
- network-latency
- network-throughput
- powersave
- sap
- throughput-performance
- virtual-guest
- virtual-host
Current active profile: virtual-guest
```

Use **tuned-adm profile** _profilename_ to switch the active profile to a different one that better matches the system's current tuning requirements.

```
[root@host ~]# tuned-adm profile throughput-performance
[root@host ~]# tuned-adm active
Current active profile: throughput-performance
```

The **tuned-adm** command can recommend a tuning profile for the system. This mechanism is used to determine the default profile of a system after installation.

```
[root@host ~]# tuned-adm recommend
virtual-guest
```

To revert the setting changes made by the current profile, either switch to another profile or deactivate the tuned daemon. Turn off tuned tuning activity with **tuned-adm off**:

```
[root@host ~]# tuned-adm off
[root@host ~]# tuned-adm active
No current active profile.
```
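As a small illustration of working with this tooling, the Python sketch below (my own helper, not part of RHEL or tuned) parses the output format of `tuned-adm list` shown above into the list of available profiles and the active one:

```python
def parse_tuned_adm_list(output):
    """Parse `tuned-adm list` output into (profiles, active_profile)."""
    profiles, active = [], None
    for raw in output.splitlines():
        line = raw.strip()
        if line.startswith("- "):
            # a profile entry, e.g. "- virtual-guest"
            profiles.append(line[2:].split()[0])
        elif line.startswith("Current active profile:"):
            active = line.split(":", 1)[1].strip()
    return profiles, active

if __name__ == "__main__":
    sample = """Available profiles:
- balanced
- desktop
- virtual-guest
Current active profile: virtual-guest"""
    print(parse_tuned_adm_list(sample))
```

In practice you would feed it the captured stdout of `tuned-adm list` (for example via `subprocess.run`) instead of the sample string.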
mhmmdrafii
1,728,467
How to Build a Stepper Component in React 🤔 ?
What is Stepper Component in React ? The Stepper component enables the user to create a...
0
2024-01-15T19:46:03
https://dev.to/nonish/how-to-build-a-stepper-component-in-react--3495
react, scss, stepper, machinecoding
## What is a Stepper Component in React?

- The Stepper component enables the user to create a sequence of logical steps that visualises progress.
- It can also be used for navigation purposes.

#### Key Features

- **Display Modes**—The various display modes allow you to configure the step layout and type.
- **Linear Mode**—The linear mode requires the user to complete the previous step before proceeding to the next one.
- **Orientation**—You can switch between horizontal and vertical orientation.
- **Validation**—You can set the validation logic for each step.
- **Custom Rendering**—The Stepper allows you to customise the rendering of each step.
- **Keyboard Navigation**—The Stepper supports various keyboard shortcuts.
- **Accessibility**—The Stepper component is accessible by screen readers and provides full WAI-ARIA support.

## Prerequisites:

- [NPM](https://www.npmjs.com/)
- [React JS](https://react.dev/learn)
- [Sass](https://sass-lang.com/)

## Project Setup

First, you need to create a React project. In your terminal, at your chosen directory, type in this command:

```
npx create-react-app stepper-app
```

Then, open a terminal in the project directory and install Sass:

```
npm i sass
```

## Build and Add Functionality to the Stepper Layout

In the code below, I create the basic stepper look: a rounded circle with a grey background and a short line to the right of it. Both are repeated, depending on the number of steps we want to show. To map over the steps we use JavaScript's built-in Array() constructor, fill() it with React fragments, and render a basic div for each one. Likewise, we create two buttons for back-and-forth navigation and style them with some basic CSS. To hide the trailing line after the last circle we use isHide(), which returns true when the mapping index equals the last step. Then we add onClick handlers on both buttons, with conditions to move the stepper value backward and forward.
We also add some basic and conditional styles to create the stepper effect. Finally, we pass props into our Stepper component with the current step value, which is matched against the mapping index to show the correct stepper position on screen.

### Here's the final code for the App component:

```
//App.jsx
import { Fragment, memo, useState } from "react";
import "./stepper.scss"

const App = () => {
  const [currentStep, setCurrentStep] = useState(0)
  const NUMBER_OF_STEPS = 5;

  return (
    <div>
      <h1>Stepper : {currentStep}</h1>
      <Stepper currentStep={currentStep} numberOfSteps={NUMBER_OF_STEPS} />
      <div className="btn-wrapper">
        <button onClick={() => setCurrentStep(pre => pre === 0 ? pre : pre - 1)}>
          Previous
        </button>
        <button onClick={() => setCurrentStep(pre => pre === NUMBER_OF_STEPS ? pre : pre + 1)}>
          Next
        </button>
      </div>
    </div>
  )
}

export default App;

// eslint-disable-next-line react/prop-types
const Stepper = memo(function Stepper({ currentStep, numberOfSteps }) {
  const isHide = (index) => index === numberOfSteps - 1;

  return (
    <div className="stepper-wrapper">
      {Array(numberOfSteps).fill(<></>).map((_, index) => {
        return (<Fragment key={index}>
          <div className={`stepper-circle ${currentStep >= index ? "isActive" : ""}`}></div>
          {!isHide(index) && <div className={`stepper-line ${currentStep > index ? "isActive" : ""}`}></div>}
        </Fragment>)
      })}
    </div>
  )
})
```

And the styles:

```
/* stepper.scss */
.stepper-wrapper{
  display: flex;
  justify-content: center;
  align-items: center;
}
.stepper-circle{
  width: 60px;
  height: 60px;
  border-radius: 50%;
  background-color: rgba(182, 177, 177, 0.5);
}
.stepper-line{
  width: 40px;
  height: 5px;
  background-color: rgba(182, 177, 177, 0.5);
}
.isActive{
  background-color: aqua;
}
.btn-wrapper{
  display: flex;
  justify-content: space-between;
  margin-top: 50px;
}
.btn-wrapper > button{
  width: 40%;
  border: none;
  border-radius: 5px;
  padding: 10px 20px;
  font-size: 16px;
  font-weight: 700;
}
```

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/v5wenluwfup4em3ai3nh.png)

And that's it! You've successfully built a stepper component in a React project. You can find the working demo on [codesandbox](https://codesandbox.io/p/sandbox/stepper-component-5s9vyw). Please let me know your thoughts and suggestions about this article. Thanks for reading!!! Happy coding.
nonish
1,728,512
BITCOIN AND CRYPTO RECOVERY COMPANY
I’m highly recommending SpyWeb Cyber Security Service for your Bitcoin, crypto, and digital assets...
0
2024-01-15T20:45:18
https://dev.to/garci/bitcoin-and-crypto-recovery-company-4odi
I’m highly recommending SpyWeb Cyber Security Service for your Bitcoin, crypto, and digital assets recovery. This company was able to help me recover all my crypto that was stolen from me in less than 72 hours. It is very important to carry out enough research about the crypto market before investing in it, I was one of the victims of a crypto investment scam and if I had done enough research, I would not have fallen into their trap. Fortunately for me, I was able to come across Spyweb Cyber Security Service after searching the Internet and seeing many wonderful reviews about their service and success in recovering crypto. I’m grateful for their support and their professionalism in helping me get back my stolen crypto. I’ll simply put this here because it is really good to see some people have good intentions to help victims of these crypto scams, they are very reliable and trustworthy. I highly recommend their services for all your crypto and digital assets recovery. E-MAIL : Spyweb@Cyberdude.com WEBSITE : (https://spyweb3.wixsite.com/spywebcyber) WHATS-APP : (+1 720 625 0393)
garci
1,728,566
Alura's Free Online Front-End Course
Explore and improve your front-end skills through a hands-on project. In five free classes...
0
2024-01-27T02:05:24
https://guiadeti.com.br/curso-front-end-gratuito-online-alura/
cursogratuito, cursosgratuitos, desenvolvimento, frontend
---
title: Alura's Free Online Front-End Course
published: true
date: 2024-01-15 18:34:15 UTC
tags: CursoGratuito,cursosgratuitos,desenvolvimento,frontend
canonical_url: https://guiadeti.com.br/curso-front-end-gratuito-online-alura/
---

Explore and improve your front-end skills through a hands-on project. In five free classes offered by Alura, you will have the opportunity to develop advanced HTML and CSS skills and explore modern frameworks.

This free course will guide students through building the front end of a navigation page inspired by Spotify. Participants also receive an Alura certificate upon completion.

The event offers the opportunity to engage with an exclusive community, join study groups, and connect with thousands of front-end developers. Students will also have the support of Luri, Alura's artificial intelligence assistant, to clarify doubts, get suggestions, and understand every line of the code they build. This is the path to grow and stand out in the front-end world!

## Front-End Immersion

Improve your front-end knowledge through a hands-on project. Alura offers five free classes to explore HTML and CSS, learn modern frameworks, and build the front end of a navigation page inspired by Spotify.

![](https://guiadeti.com.br/wp-content/uploads/2024/01/image-19-1024x461.png)

_Alura's Front-End Immersion page_

### Develop Skills

In this free Alura event, participants will build a front-end project hands-on, going deeper into the main technologies in the field, such as HTML, CSS, JavaScript, React, and Angular. Upon completing the project, an Alura certificate will be issued.

### Exclusive Community and Support

Join an exclusive community, take part in study groups, and connect with thousands of front-end developers.
Count on Luri, Alura's artificial intelligence, to clarify doubts, ask for suggestions, and understand every line of your code.

### What Will You Learn?

Dive into front-end development from scratch and build the essential skills to work in the field. Learn hands-on by creating a web navigation page inspired by Spotify. Here is the syllabus:

- HTML and CSS: consolidate and expand essential knowledge of HTML, CSS, and JavaScript;
- Hands-on challenge: build, in practice, the front end of a music streaming navigation page;
- Front-end frameworks: start programming with the main frameworks on the market: React and Angular;
- Advanced layouts: dive into more advanced concepts, such as responsiveness, positioning, animations, and efficient design;
- Artificial intelligence: get access to extra content on ChatGPT and AI applied to front-end work;
- Next steps: join exclusive live sessions and discover the path to becoming a professional in the field.

### Instructors

Learn from professionals who master the content:

#### Guilherme Lima

Educational Technology Instructor at Alura and at USP, specializing in high-level languages, with an emphasis on Python, JavaScript, and Go for the backend. He holds a degree in Information Systems and is pursuing a postgraduate degree in Data at Fiap.

#### Fernanda Degolin

Working as a front-end developer at Globo, she helps improve the Globoplay app experience on TVs. During the pandemic, she took a bold leap into the world of technology, turning her passion for creating things into an exciting new journey.

#### Mayara Cardoso

Working as a front-end developer at Itaú, she holds a degree in Systems Analysis and Development from IFSP and specialized in front-end development at XP Educação. She is also a co-organizer of the AngularSP community.

### Who Is This Immersion For?
This immersion is aimed at people with basic knowledge of HTML, CSS, and JavaScript who want to:

- Improve their knowledge;
- Develop advanced skills by learning frameworks;
- Learn hands-on: build a responsive web page that adapts to different devices;
- Advance their career: boost your dev career with the largest technology school in the country.

### Course Schedule

- Class 01 – 01/22: Start with a general review of HTML, CSS, and JavaScript, building a page hands-on.
- Class 02 – 01/23: Go further in CSS and dive into positioning, responsive layouts, flexbox, grid, and more.
- Class 03 – 01/24: Explore responsiveness in CSS, focusing on desktop and mobile screens, as well as media queries.
- Class 04 – 01/25: Time to explore the practical use of DevTools for DOM manipulation.
- Class 05 – 01/26: Explore the Angular and React frameworks to create projects, JSX concepts, and card components.

Take on daily challenges to learn in practice, with Alura's teaching approach to support you. By the end of the immersion, you will have built a front-end project from start to finish. The immersion takes place from January 22 to 26.
## Front-End Development

Front-end development began with the web's earliest pages, where HTML and CSS played central roles by providing basic structure and styling. That early period laid the foundations of visual creation on the web.

### Dynamism with JavaScript

The introduction of JavaScript was a revolution for the front end, making dynamic interactions possible on web pages. This opened the way for more complex interfaces and more engaging user experiences.

### W3C Standards

The W3C's crucial role in defining standards and norms laid the groundwork for the semantic web, prioritizing accessibility and interoperability. These principles shaped front-end development and ensured its continuous evolution.

### Adapting to the Mobile Era

With the rise of mobile devices, the front end evolved to create responsive designs that adapt to multiple platforms. This adaptation transformed interfaces and drove the pursuit of performance optimization and better user experiences.

## Alura

Conceived by Caio Gomes and Paulo Silveira and founded in 2011, the Alura platform emerged with the vision of providing high-quality online courses aligned with the demands of a constantly evolving job market.

### Focus on Quality and Practicality

Alura's founders started with the goal of offering online courses that were both practical and effective. The initial focus was on technology-related subjects, such as programming and design, giving students a hands-on learning experience aligned with the industry's needs.

### Interactive Learning and Hands-On Projects

Alura quickly stood out for proposing an interactive learning model with hands-on projects. This allowed students to apply their knowledge in real-world scenarios, preparing them for real challenges in the job market.
### Diversification and Constant Updates

Over the years, Alura has significantly expanded its course catalog, covering a wide range of topics, from technical skills to behavioral skills. The platform keeps a close eye on market trends, ensuring that its courses stay up to date and relevant to ever-evolving demands.

## Sign up for the Alura course and deepen your technology knowledge hands-on!

[Registration for the Front-End Immersion](https://www.alura.com.br/imersao-front-end) must be completed on the Alura website.

## Share the knowledge that transforms!

Did you enjoy this content about the free front-end course? Then share it with everyone!

The post [Alura's Free Online Front-End Course](https://guiadeti.com.br/curso-front-end-gratuito-online-alura/) appeared first on [Guia de TI](https://guiadeti.com.br).
guiadeti
1,728,624
Now is the opportunity to be part of an exceptional technology community
Now is the opportunity to be part of an exceptional global technology community! If you...
0
2024-01-15T23:50:02
https://dev.to/basel5001/now-is-the-opportunity-to-be-part-of-an-exceptional-technology-community-d8g
Now is the opportunity to be part of an exceptional global technology community! If you want to explore and be part of this community, join the AWS Community Builders program (#awscommunitybuilders) and delve into the world of technology, innovation, and collaboration.

Benefits offered to this community include:

- Discounts on AWS certifications
- Great swag
- A chance to attend AWS re:Invent
- Access to learning platforms, plus credits for your AWS account
- Vouchers to take the AWS certification exam

And other opportunities for cooperation and participation between community members and leaders.

How to apply:
Application link: https://lnkd.in/eDZphi33
"Deadline to fill in the application: 28th January 2024!"

Note that this is the only opportunity to apply during this year! Good luck!
basel5001
1,728,646
How to Create and Use a Custom ResultMatcher for Date Testing with MockMvc
In this tutorial, you will learn how to create a custom ResultMatcher for MockMvc, to suit your...
0
2024-01-17T12:00:00
https://springmasteryhub.com/2024/01/17/how-to-create-and-use-a-custom-resultmatcher-for-date-testing-with-mockmvc/
tutorial, java, testing, spring
In this tutorial, you will learn how to create a custom ResultMatcher for MockMvc to suit your project's needs.

Imagine you are testing an API whose response contains a date, and you want to check that the response date is valid. So you have a test assertion that checks for an exact date and time, but for some reason the result can vary over time. Some possible reasons:

- Maybe your machine has a different timezone from what's used on the server.
- Maybe some parts of your code use different time zones.
- Maybe your code has a business rule that calculates a date and time that can vary within a range.
- Maybe you are using LocalDateTime.now() and you cannot mock it using a `Clock`.
- Or any other date manipulation or configuration specific to your project.

When you run your tests you get errors like this: you check for `LocalDateTime.now()`, but the result is a few seconds later.

```java
Expected: is "2024-01-11T12:37:03.1"
     but: was "2024-01-11T12:38:21.1"
```

Or you have a timezone issue that makes everything 3 hours behind:

```java
Expected: is "2023-03-07T20:18:57"
     but: was "2023-03-07T17:18:57"
```

One solution (there are others, maybe some better ones) is to create a custom ResultMatcher that checks whether the date in an API response falls within a range.

## How to use a custom result matcher to fix this issue?

If we are using MockMvc, the code looks like this:

```java
mvc.perform(get("/product/{id}", productId).contentType(MediaType.APPLICATION_JSON_UTF8))
   .andExpect(status().isOk())
   .andExpect(jsonPath("$.date", is("2023-03-07T17:18:57")));
```

As we said, the problem with this implementation is that the assertion checks an exact date and time. Our time can vary a little, so it'll break our test.

Writing a custom result matcher enables you to do custom checks as you need. You can write a result matcher that checks whether the response date is in a range that you specify.
### Example of a custom result matcher for date range validation:

```java
public static ResultMatcher isDateTimeInRange(String path, String startDateStr, String endDateStr, String pattern) {
    DateTimeFormatter formatter = DateTimeFormatter.ofPattern(pattern);
    LocalDateTime startDate = LocalDateTime.parse(startDateStr, formatter);
    LocalDateTime endDate = LocalDateTime.parse(endDateStr, formatter);
    return mvcResult -> {
        String contentAsString = mvcResult.getResponse().getContentAsString();
        LocalDateTime checkoutDate = LocalDateTime.parse(JsonPath.read(contentAsString, path), formatter);
        if (!(checkoutDate.isAfter(startDate) && checkoutDate.isBefore(endDate))) {
            throw new AssertionError(String.format("Date: %s is not between %s and %s", checkoutDate, startDate, endDate));
        }
    };
}
```

This code lets us pass the start and end of the range, plus the date format we're working with. Inside the matcher, we write our own logic to check that the date is in range!

### How to use your custom result matcher in your test

```java
mvc.perform(get("/product/{id}", productId).contentType(MediaType.APPLICATION_JSON_UTF8))
   .andExpect(status().isOk())
   .andExpect(isDateTimeInRange("$.date", "2019-03-07T13:18:57", "2019-03-07T21:18:57", "yyyy-MM-dd'T'HH:mm:ss"));
```

This will check whether the response date is in the requested range!

By the way, this is only one example of how to create a custom result matcher. Explore the possibilities and create ones that suit your needs. Happy coding!

Follow me on social media, and here on dev.to!

[Willian Moya (@WillianFMoya) / X (twitter.com)](https://twitter.com/WillianFMoya)

[Willian Ferreira Moya | LinkedIn](https://www.linkedin.com/in/willianmoya/)
tiuwill
1,728,782
Choosing the Right Coverall Color: More Than Aesthetics, It's About Safety
Introduction: When it comes to personal protective equipment (PPE), the color of coveralls may seem...
0
2024-01-16T05:05:19
https://dev.to/sim_chanda/choosing-the-right-coverall-color-more-than-aesthetics-its-about-safety-ej9
coverallsmarket, marketsteategy, markettrends, marketinsights
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/5gmyjjp59fpzns5l1u1a.jpg)

Introduction: When it comes to personal protective equipment (PPE), the color of coveralls may seem like a minor consideration compared to factors such as material and fit. However, the choice of coverall color is far from arbitrary. It plays a crucial role in ensuring safety in the workplace. In this blog, we'll explore why selecting the right coverall color is more than just a matter of aesthetics: it's a strategic decision with implications for visibility, identification, and overall safety.

**Request for sample pdf:** https://www.nextmsc.com/coverall-market/request-sample

The Significance of Coverall Colors in Safety:

1. Visibility and Identification: One of the primary reasons for selecting specific coverall colors is to enhance visibility and allow easy identification of workers in various environments. In industries where employees work in close proximity or in low-light conditions, the right color can significantly improve visibility, reducing the risk of accidents and ensuring quick identification in emergencies.

2. Industry Standards and Regulations: Many industries have specific standards and regulations regarding the colors of PPE, including coveralls. These standards are established to ensure uniformity and compliance with safety protocols. Adhering to industry-specific color guidelines helps create a standardized approach to safety, making it easier for workers and management to identify the roles and responsibilities of individuals based on their coverall colors.

3. Enhanced Safety Communication: Coverall colors serve as a form of non-verbal communication on worksites. For example, different colors may signify distinct roles or responsibilities. This color-coded system enhances safety communication, allowing workers to quickly understand the roles of their colleagues and respond appropriately in various situations.

4.
Environmental Considerations: The environment in which workers operate is a crucial factor in determining the most suitable coverall color. In outdoor settings, high-visibility colors may be preferred to ensure that workers are easily seen by others, especially in construction zones, roadwork, or areas with moving vehicles. In contrast, indoor environments may have different requirements based on lighting conditions and the nature of the work. Understanding Coverall Color Options: 1. High-Visibility Colors: High-visibility colors such as fluorescent yellow, orange, and lime green are widely used in industries where workers need to be easily seen, especially in low-light conditions. These colors enhance visibility during daylight hours and can be particularly effective when combined with reflective materials for nighttime work. 2. Blue Coveralls: Blue coveralls are commonly associated with various industries, including manufacturing and maintenance. While blue may not be as high-visibility as fluorescent colors, it is often chosen for its professional appearance and practicality in environments where dirt and stains are prevalent. 3. Red Coveralls: Red coveralls are often used in roles where workers need to stand out against certain backgrounds. They are also employed in emergency response situations where quick identification is crucial. Firefighters, for example, may wear red coveralls for increased visibility and recognition. 4. White Coveralls: White coveralls are commonly used in cleanroom environments, laboratories, and medical settings. The color white signifies cleanliness and is preferred in industries where maintaining a sterile environment is essential. However, white may not be suitable for industries with a high likelihood of dirt or stains. 5. Green and Brown Coveralls: Green and brown coveralls are often chosen for workers in outdoor settings, such as forestry, agriculture, or landscaping. 
These colors help workers blend in with natural surroundings while providing a level of visibility for safety. 6. Gray and Black Coveralls: Gray and black coveralls are sometimes selected for industries where maintaining a professional appearance is important. However, these colors may not be suitable for environments where high visibility is a priority, and additional safety measures may be required. Industry-Specific Coverall Color Codes: 1. Construction and Roadwork: - High-Visibility Orange or Yellow: These colors enhance visibility in construction zones and roadwork, reducing the risk of accidents involving moving vehicles. 2. Emergency Services: - Red or High-Visibility Colors: Firefighters often wear red coveralls for quick identification in emergency situations. Other emergency responders may use high-visibility colors. 3. Healthcare and Cleanrooms: - White or Light Blue: White is commonly used in cleanroom environments, while light blue is often seen in healthcare settings for a clean and professional appearance. 4. Manufacturing and Maintenance: - Blue, Gray, or High-Visibility Colors: Blue and gray are practical choices for manufacturing environments, balancing professionalism and functionality. High-visibility colors may be necessary in areas with moving machinery. 5. Agriculture and Forestry: - Green, Brown, or High-Visibility Colors: Workers in agriculture and forestry may opt for colors that blend with natural surroundings or high-visibility colors for safety. 6. Oil and Gas Industry: - High-Visibility Colors: Given the potential hazards in the oil and gas industry, high-visibility colors are often recommended for worker safety. Best Practices for Choosing Coverall Colors: 1. Assess Work Environment: Consider the specific conditions of the work environment, including lighting, potential hazards, and the need for visibility. High-visibility colors are generally recommended for outdoor work and areas with moving equipment. 2. 
Understand Industry Standards: Familiarize yourself with industry-specific standards and regulations regarding coverall colors. Compliance with these standards is essential for ensuring the safety of workers and avoiding potential regulatory issues. 3. Consider Worker Roles: Implement a color-coded system based on worker roles and responsibilities. This approach enhances safety communication, making it easier for individuals to identify the functions of their colleagues at a glance. 4. Balance Aesthetics with Functionality: While aesthetics are important, prioritize functionality and safety when choosing coverall colors. Balance the desire for a professional appearance with the practicality of high-visibility colors in environments where safety is a top priority. 5. Consult with Workers: Involve workers in the decision-making process regarding coverall colors. Their input can provide valuable insights into the practical considerations and preferences that will impact their comfort and safety. 6. Use Reflective Materials: In addition to choosing the right color, incorporate reflective materials into coveralls, especially for roles that involve working near roadways or in low-light conditions. Reflective elements enhance visibility and safety. 7. Consider Stain Resistance: In environments where dirt, stains, or contaminants are prevalent, choose colors that are resistant to staining or opt for darker hues that can mask stains more effectively. 8. Regularly Assess and Update: Periodically assess the effectiveness of chosen coverall colors in meeting safety and visibility requirements. If there are changes in work conditions or regulations, be prepared to update coverall colors accordingly. **Inquire before buying:** https://www.nextmsc.com/coverall-market/inquire-before-buying ** Conclusion:** Choosing the right coverall color is a decision that extends far beyond mere aesthetics. 
It is a strategic choice with implications for safety, visibility, and overall functionality in the workplace. By understanding the specific needs of the work environment, industry standards, and the roles of workers, organizations can make informed decisions that prioritize the well-being and safety of their workforce. Whether it's the high-visibility colors for construction sites, the professional appearance of blue or gray coveralls in manufacturing, or the clean and sterile look of white in healthcare, each color serves a purpose. Coveralls, when selected thoughtfully, contribute to a safer and more organized work environment, where workers can carry out their responsibilities with confidence and visibility. In the field of safety, the color of coveralls is a critical element that should not be overlooked, as it plays a key role in creating a secure and efficient workplace.
sim_chanda
1,728,792
2024 AWS Community Builders Application Form is now live
Hi All, I am happy to share you all, I hope it will be good news for the aws cloud community...
0
2024-01-16T05:38:07
https://dev.to/santhakumar_munuswamy/2024-aws-community-builders-application-form-is-now-live-53d7
aws, awscommunitybuilders, ai, machinelearning
Hi all, I am happy to share some news that I hope will be welcome to the AWS cloud community: it is time to apply for the AWS Community Builders program.

What is AWS Community Builders? The AWS Community Builders program offers technical resources, education, and networking opportunities to AWS technical enthusiasts and emerging thought leaders who are passionate about sharing knowledge and connecting with the technical community. You can find more details here: https://lnkd.in/gMt96NuQ

Are you interested in joining this program? If you are passionate about AWS cloud products and services (sharing technical articles, forum answers, tips & tricks, etc.), you can apply to the AWS Community Builders program: https://lnkd.in/gdEAGa8F

The application is open until January 28th, 2024.
santhakumar_munuswamy
1,728,835
Vue Basis: Navigating Through Your App with Vue Router
Check this post in my web notes Unlocking the full potential of your Vue.js projects involves...
0
2024-01-16T06:56:53
https://webcraft-notes.com/blog/vue-basis-navigating-through-your-app-with
vue, webdev, beginners, tutorial
![Navigating Through Your App with Vue Router](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hyjvmvtchmkw7e7up12u.png)

> Check [this post](https://webcraft-notes.com/blog/vue-basis-navigating-through-your-app-with) in [my web notes](https://webcraft-notes.com/)

Unlocking the full potential of your Vue.js projects involves seamlessly navigating between various pages and managing diverse sets of information. The daunting task of consolidating all data onto a single page is not only incredibly challenging but often downright impossible. This challenge is familiar to both designers and developers working on virtually every project. Enter Vue Router – the indispensable tool that transforms this challenge into a manageable solution. Vue Router, an integral part of Vue.js, empowers developers and designers to navigate through different pages, presenting, updating, storing, and interacting with data in a structured and efficient manner. Whether you're showcasing content, implementing dynamic updates, or ensuring data persistence, Vue Router provides the essential functionality to enhance user experiences and streamline the development process.

I will create a new project for testing purposes by using the instructions from my [previous post](https://webcraft-notes.com/blog/vuejs-state-management-guide-pinia-in-practice). If you do not have a project, you can do the same. To the question "Add Vue Router for Single Page Application development?" we will answer "Yes", so that our project will already include Vue Router. After we create the project and remove unneeded components, we can dive into coding.

Once Vue Router is installed, we need to check (and add, if necessary) a few pieces of configuration. First of all, we will create a router folder with an index.js file inside that holds our router configuration. Inside the file, we will import the "createRouter" function (which directly creates our router with the settings we need to add: routes, history, hooks...) and the "createWebHistory" function (which configures history mode). To use "webHistory" mode in Vue Router, you need to configure your server to always return the same HTML page for different URLs. This is known as a "single-page application" (SPA) server configuration. When the server receives a request for a specific URL, it should always return the same HTML page, and then let Vue Router handle the routing on the client side.

The major setting for the router is a routes array that accepts an object describing every route; inside that description we need to set the path, the name, and the component that will be rendered at that path. Here is a simple router configuration example:

```
import { createRouter, createWebHistory } from 'vue-router'
import HomeView from '../views/HomeView.vue'

const router = createRouter({
  history: createWebHistory('/'),
  routes: [
    {
      path: '/',
      name: 'home',
      component: HomeView
    },
    {
      path: '/about',
      name: 'about',
      component: () => import('../views/AboutView.vue')
    }
  ]
})

export default router
```

Next, we will import our router file into the main.js file and add it to our app before mounting, just like this:

```
import { createApp } from 'vue'
import App from './App.vue'
import router from './router'

const app = createApp(App)

app.use(router)

app.mount('#app')
```

Now in our "App.vue" file we import "RouterView" and add it into the template; it is our container or, in other words, the place where we will render and update pages. Note that the imported component also has to be registered in the `components` option:

```
<template>
  <RouterView />
</template>

<script>
import { RouterView } from 'vue-router';

export default {
  name: 'App',
  components: { RouterView }
}
</script>
```

We will create the two pages mentioned in the router/index.js file, "Home" and "About", to check that the router works correctly. After all these manipulations we can use the "npm run dev" command and check our pages. Our app will be launched locally at a "localhost:port" address and we will see our "Home" page.
If we add "/about" to the URL, the page changes and we will see our "About" page. Awesome! We have added and configured Vue Router in our app; now let's dive deeper into Vue Router's capabilities together! Experiment with advanced features like route guards, lazy loading, and navigation hooks to expand your expertise. In our upcoming posts, we'll delve into these functionalities, exploring them in detail. Additionally, leverage Vue Router's official documentation, community forums, and tutorials to broaden your understanding. This hands-on approach will prepare you for tackling complex scenarios in your Vue.js projects. Stay tuned for more insights and happy coding!

> [Continue studying...](https://webcraft-notes.com/)
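To make the routes array less abstract, here is a minimal, self-contained sketch of the idea behind it (plain JavaScript, not vue-router internals; the `matchRoute` helper is a hypothetical stand-in): each incoming path is looked up in the array and resolved to a named route.

```javascript
// Hypothetical helper, for illustration only: resolve a URL path
// against a routes array shaped like the one in router/index.js.
const routes = [
  { path: '/', name: 'home' },
  { path: '/about', name: 'about' }
]

// Returns the name of the first matching route, or null when no
// route matches (a real router would render a 404 view instead).
function matchRoute(routes, path) {
  const match = routes.find(route => route.path === path)
  return match ? match.name : null
}

console.log(matchRoute(routes, '/'))        // 'home'
console.log(matchRoute(routes, '/about'))   // 'about'
console.log(matchRoute(routes, '/missing')) // null
```

vue-router itself does much more (dynamic segments, history integration, navigation guards), but this one-to-one mapping from path to named component is the core contract the routes array expresses.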
webcraft-notes
1,728,915
assertTrue() in Java: A Complete Tutorial
Assertions are an important part of the automation testing process, ensuring the software functions...
0
2024-01-16T07:53:47
https://www.lambdatest.com/blog/asserttrue-in-java/
asserttrue, javascript, automationtesting, softwaretesting
Assertions are an important part of the automation testing process, ensuring the software functions as anticipated. If it is not working as desired, the tests have to be marked as failed and halted so that the failure can be investigated. An assertion statement verifies an assumption made about the behavior under test. When performing [test automation](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage), assertions help us automatically verify the tests’ output. In Selenium automation testing, we come across multiple scenarios where it is necessary to decide on the subsequent execution of the tests. This is important in cases where the result of the previous [test execution](https://www.lambdatest.com/learning-hub/test-execution?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub) is a failure. If the tests depend on each other, it is recommended to halt the execution at the exact failure step by using *Hard Assertions*, so the other dependent tests are skipped to save time.

Consider an example of an [end-to-end testing](https://www.lambdatest.com/learning-hub/end-to-end-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub) journey of an eCommerce application where the *product checkout from cart* tests depend on the *add product to cart* tests. If the *add product to cart* tests fail, subsequent tests should not be executed, as they will fail too. The test execution halts when the condition (part of the assertion) is unmet. Hence, when [Selenium automation testing with TestNG](https://www.lambdatest.com/selenium-automation-testing-with-testng?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) is performed, assertions can help us understand the expected behavior of the application and allow us to check the quality of the software under test.
JUnit 5 is one of the popular testing frameworks used by many test automation projects to execute and run automation tests and perform assertions. It is a very good library with different functions to perform [automation testing](https://www.lambdatest.com/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) seamlessly. In this [JUnit tutorial](https://www.lambdatest.com/learning-hub/junit-tutorial?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub), we will demonstrate performing assertions and checking specific conditions using the *assertTrue()* method in Java.

***Get your CSS validated by our*** [***CSS Validator***](https://www.lambdatest.com/free-online-tools/css-validator?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***and eliminate syntax errors and other issues that could affect your website’s performance. Ensure your CSS code is error-free today.***

### What are Assertions in test automation?

Assertions are the core concept of [functional testing](https://www.lambdatest.com/learning-hub/functional-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub). In automated testing, they form an integral part of the test and are used to derive the outcome of the test execution. The test passes if the test result confirms the assertion; otherwise, it will fail. Assertions bring many benefits to test automation, like providing accurate test results, speeding up the test process by performing the required checks automatically, and confirming the expected behavior of the software under test. They also help you catch bugs and errors in the software easily, thus aiding you in getting faster feedback on builds. Assertions are Boolean expressions that confirm whether a specified condition holds, i.e., whether the application behaves as expected.
If the outcome condition is true, the test will pass; if the outcome is false, the test will fail. Consider an example of a Login web page, where the test is to check the login functions properly. Here, we make an assertion condition that if the **Logout** button is displayed successfully after Login, the test is marked as passed. If the **Logout** button is not displayed after Login, the assertion will fail, marking the test as a failure.

There are two types of [assertions in automated testing](https://www.lambdatest.com/blog/junit-assertions-example-for-selenium-testing/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=blog):

* Hard Assertion
* Soft Assertion

### Hard Assertion

Hard assertions ensure that the test execution is stopped when the asserting condition is not met. The next steps or tests, if any, will only proceed if the asserting condition is evaluated to be true. This helps in the automated pipeline, as it turns red in case of test failures, and test execution is halted until the necessary fixes are made to the build.

***Don’t let CSV errors slow you down. Validate and lint your CSV data with ease using our free online*** [***CSV Validator***](https://www.lambdatest.com/free-online-tools/csv-validator?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***tool. Get accurate and error-free results in seconds.***

### Soft Assertion

With Soft assertions, the test execution is not stopped if the asserting condition is not met. Even after the assertion fails, the test execution will continue until it reaches the test’s end. After the tests are executed, all the respective failures will be displayed in the logs. Soft assertions should be used when the tests are not dependent on each other and passing one test condition does not impact the upcoming tests.
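The contrast between the two styles can be sketched with a small, hand-rolled example (illustrative only: the `hardAssertTrue` helper and `SoftAsserter` class below are hypothetical stand-ins, not the actual TestNG or JUnit APIs):

```java
import java.util.ArrayList;
import java.util.List;

class AssertionStylesDemo {

    // Hard assertion: throws immediately, halting the test at this step.
    static void hardAssertTrue(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    // Soft assertion: records the failure and lets the test continue;
    // all failures are reported together at the end.
    static class SoftAsserter {
        private final List<String> failures = new ArrayList<>();

        void assertTrue(boolean condition, String message) {
            if (!condition) {
                failures.add(message);
            }
        }

        List<String> failures() {
            return failures;
        }
    }

    public static void main(String[] args) {
        SoftAsserter soft = new SoftAsserter();
        soft.assertTrue(1 + 1 == 2, "addition is broken");        // passes, nothing recorded
        soft.assertTrue("cart".equals("checkout"), "wrong page"); // fails, recorded, run continues
        System.out.println("Soft failures: " + soft.failures());

        try {
            hardAssertTrue(false, "stop here"); // throws at once
        } catch (AssertionError e) {
            System.out.println("Hard assertion halted with: " + e.getMessage());
        }
    }
}
```

In real test code, TestNG's SoftAssert (finished with assertAll()) and JUnit 5's assertAll() play the role of the collector; the sketch only shows why halting versus collecting matters for dependent tests.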
Having learned about the two types of assertions, let us quickly move towards learning the *assertTrue()* method in Java for performing assertions in automated tests.

### What is the assertTrue() in Java?

The *assertTrue()* method is available in the *org.junit.jupiter.api.Assertions* class in JUnit 5. The Assertions class in JUnit 5 is a collection of utility methods supporting asserting test conditions.

{% youtube mkIp_xbbs-w %}

It verifies the supplied condition and checks if it is true. Following are the two overloaded *assertTrue()* methods provided by JUnit 5:

### assertTrue(boolean condition)

In this method, the Boolean condition is supplied as a parameter. If the condition is true, the test passes; otherwise, it is marked as a failure.

**Syntax**

![image](https://cdn-images-1.medium.com/max/1200/0*h5TGPWKvtp1VL7BB.png)

### assertTrue(boolean condition, String message)

In this method, the Boolean condition is supplied as the first parameter, and the second parameter is the message text displayed in case the condition fails.

**Syntax**

![image](https://cdn-images-1.medium.com/max/1200/0*jlDU_8UvcVHbKaer.png)

In the next section, let’s learn how to use these methods in a test.

***Minify your JS code without changing its functionality with our easy-to-use*** [***JavaScript Minifier***](https://www.lambdatest.com/free-online-tools/js-minify?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***that reduces the size of your scripts and improves website speed.***

### How to use the assertTrue() method in the test?

There are two ways to use the *assertTrue()* method in a test. The first is using the *Assertions* class and then calling the *assertTrue()* method:

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1ycbcdnpgk9ku8i4l3fw.png)

The second way is to directly import the static *assertTrue()* method in the test.
![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/72dmxgako9rhs38heldy.png)

### Setting up the project

Let’s now delve into the test scenarios and check the actual working of the *assertTrue()* in Java. We will use the following tech stack to demonstrate and run the tests on the LambdaTest cloud grid. LambdaTest is an AI-powered test orchestration and execution platform that lets you perform [Selenium Java testing](https://www.lambdatest.com/selenium-java-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) at scale on an [online browser farm](https://www.lambdatest.com/online-browser-farm?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) of 3000+ real web browsers and operating systems. You can even run your automated test suites with [Selenium](https://www.lambdatest.com/selenium?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) in parallel and achieve faster software release cycles.

{% youtube SqQ8SugRDos %}

Subscribe to the [LambdaTest YouTube Channel](https://www.youtube.com/c/LambdaTest?sub_confirmation=1) and stay updated with the latest tutorials around [automated testing](https://www.lambdatest.com/learning-hub/automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub), [Selenium testing](https://www.lambdatest.com/selenium-automation?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage), [Java automation testing](https://www.lambdatest.com/java-automation-testing?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage), and more.
![image](https://cdn-images-1.medium.com/max/1200/1*EuTtyKyqSjXryd84zfOAAQ.png)

First, let’s create a Maven project and add the dependencies for [Selenium WebDriver](https://www.lambdatest.com/learning-hub/webdriver?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub) and JUnit 5 in the *pom.xml* file to set up the project.

```xml
<project xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xmlns="http://maven.apache.org/POM/4.0.0"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>
    <groupId>io.github.mfaisalkhatri</groupId>
    <artifactId>junit-examples</artifactId>
    <version>1.0-SNAPSHOT</version>
    <packaging>jar</packaging>
    <name>junit-examples</name>
    <url>http://maven.apache.org</url>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
        <maven.compiler.source>17</maven.compiler.source>
        <maven.compiler.target>17</maven.compiler.target>
        <junit.version>5.10.0</junit.version>
        <selenium.version>4.12.1</selenium.version>
        <junit.platform.launcher.version>1.10.0</junit.platform.launcher.version>
        <maven-surefire-plugin.version>3.1.2</maven-surefire-plugin.version>
    </properties>

    <dependencies>
        <!-- https://mvnrepository.com/artifact/org.junit.jupiter/junit-jupiter -->
        <dependency>
            <groupId>org.junit.jupiter</groupId>
            <artifactId>junit-jupiter</artifactId>
            <version>${junit.version}</version>
            <scope>test</scope>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.seleniumhq.selenium/selenium-java -->
        <dependency>
            <groupId>org.seleniumhq.selenium</groupId>
            <artifactId>selenium-java</artifactId>
            <version>${selenium.version}</version>
        </dependency>
        <!-- https://mvnrepository.com/artifact/org.junit.platform/junit-platform-launcher -->
        <dependency>
            <groupId>org.junit.platform</groupId>
            <artifactId>junit-platform-launcher</artifactId>
            <version>${junit.platform.launcher.version}</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

    <build>
        <plugins>
            <plugin>
                <groupId>org.apache.maven.plugins</groupId>
                <artifactId>maven-surefire-plugin</artifactId>
                <version>${maven-surefire-plugin.version}</version>
                <configuration>
                    <includes>
                        <include>**/*Test*.java</include>
                    </includes>
                </configuration>
            </plugin>
        </plugins>
    </build>
</project>
```

The project setup is complete, with the dependencies updated in the *pom.xml*, and we can now proceed to write the automated tests.

### Test Scenarios

We will cover the following test scenarios as a part of the automation tests to demo the working of the *assertTrue()* in Java.

**Test Scenario 1**

1. Open the [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website.
2. Navigate to the [Checkbox Demo](https://www.lambdatest.com/selenium-playground/checkbox-demo?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) screen.
3. In the *Disabled Checkbox Demo* section, tick *Option 1*.
4. Using *assertTrue()*, verify that the *Option 1* checkbox is ticked successfully.
5. Next, using *assertTrue()*, verify that the *Option 3* checkbox is disabled.

***LambdaTest’s Selenium Playground***

![image](https://cdn-images-1.medium.com/max/1200/0*eE9ll_Y1tBe9yBlk.png)

***Checkbox Demo Screen***

![image](https://cdn-images-1.medium.com/max/1200/0*WHWOXaBrawWTD828.png)

**Test Scenario 2**

1. Open the [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website.
2. Navigate to the [Redirection](https://www.lambdatest.com/selenium-playground/redirection?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) page.
3. Using *assertTrue()*, verify that the page header is displayed on successfully loading the Redirection page.
***LambdaTest’s Selenium Playground***

![image](https://cdn-images-1.medium.com/max/1200/0*p1luALFRAvmRfuE7.png)

***Redirection Page***

![image](https://cdn-images-1.medium.com/max/1200/0*9RL7lvDTD6Iv3rxX.png)

**Test Scenario 3**

1. Open the [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website.
2. Navigate to the [Data List Filter](https://www.lambdatest.com/selenium-playground/redirection?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) page.
3. Search for a record by entering the *Attendee* name.
4. Using *assertTrue()*, verify that the data retrieved contains the *Attendee* name entered in the search box.

***LambdaTest’s Selenium Playground***

![image](https://cdn-images-1.medium.com/max/1200/0*RAWo-C-bxAy6lRUD.png)

***Data List Filter Screen***

![image](https://cdn-images-1.medium.com/max/1200/0*fYb33CS-zD5VwcEX.png)

We will run the tests on the Chrome browser on the Windows 10 platform using the LambdaTest Cloud grid.

***Transform your messy CSS code into beautiful and organized code with our user-friendly and efficient online free*** [***CSS Prettifier***](https://www.lambdatest.com/free-online-tools/css-prettify?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***tool with just a few clicks.***

### Implementation \[Test Scenario 1\]

In test scenario 1, we must open the [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website and navigate to the [Checkbox Demo](https://www.lambdatest.com/selenium-playground/checkbox-demo?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) screen.
We need to tick the *Option 1* checkbox in the *Disabled Checkbox Demo* section and use the *assertTrue()* method in Java to verify that it is ticked successfully. We will also check that the *Option 3* checkbox is disabled using the *assertTrue()* method. The following automated test method named *checkboxDemoTest()* is written in the *SeleniumPlaygroundTests* class and will help us automate the test scenario:

**Filename:** [SeleniumPlaygroundTests.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

```java
@Test
public void checkboxDemoTest() {
    createDriver(Browsers.REMOTE_CHROME);
    final String website = "https://www.lambdatest.com/selenium-playground/";
    getDriver().get(website);

    final HomePage homePage = new HomePage();
    homePage.navigateToLink("Checkbox Demo");

    final var checkboxDemoPage = new CheckboxDemoPage();
    assertTrue(checkboxDemoPage.checkIfCheckboxOneIsTicked(), "Check box one is not ticked");
    assertTrue(checkboxDemoPage.checkIfCheckboxThreeIsDisabled());
}
```

The first line of the method will create an instance of the Chrome browser in the LambdaTest Cloud grid. The *createDriver()* method is a static method in the *DriverManager* class that will help us instantiate a new instance of WebDriver.

**Filename:** [DriverManager.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/qhv0hwe54nys1dfasb65.png)

The browser whose name is passed in the *createDriver()* method parameter will get started. The *REMOTE\_CHROME* browser name will call the *setupChromeInRemote()* method.
![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fod14mb0rumpe5ch08lm.png)

This method will set all the desired capabilities and the configuration required for running the tests on the LambdaTest Cloud grid. These capabilities can be directly copied using the [LambdaTest Capabilities Generator](https://www.lambdatest.com/capabilities-generator/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage).

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/cwn3iu5llr0hcider9sg.png)

Once the driver session is created and the Chrome browser is started, the next line in the test will navigate the user to the [LambdaTest Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website.

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/un5olb17ylvlx3dn03zh.png)

On the website’s Homepage, the [*Checkbox Demo*](https://www.lambdatest.com/selenium-playground/checkbox-demo?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) link will be clicked to navigate the user to the *Checkbox Demo* page.

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/oiogknkcjkd6okawi3bd.png)

The [Page Object Model](https://www.lambdatest.com/blog/selenium-java-testing-page-object-model/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=blog) is used in this project, as it helps test readability, reduces code duplication, and acts as an interface for the web page under test. The *HomePage* class houses the page objects for the *Home Page* of LambdaTest’s Selenium Playground website. As the Homepage contains links to different pages of the website, a generic method has been created to find a link by its *LinkText* and click on it to navigate to the required page.
**Filename:** [HomePage.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

```java
public class HomePage {
    public void navigateToLink(final String linkText) {
        getDriver().findElement(By.linkText(linkText)).click();
    }
}
```

The next step is to tick the *Option 1* checkbox under the *Disabled Checkbox Demo* section and verify that it is ticked successfully using the *assertTrue()* method in Java.

![image](https://cdn-images-1.medium.com/max/1200/0*TDSG07PhHUc_42Ag.png)

We will also provide a message in the *assertTrue()* method so that if the test fails, this text will be printed in the console for easy interpretation of the failure. The page objects for the *Checkbox* page are updated in the *CheckboxDemoPage* class. The *checkboxOne()* method will return a WebElement for the *Option 1* checkbox.

[CSS Selectors](https://www.lambdatest.com/learning-hub/css-selectors?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=learning_hub) are faster and simpler than [XPath](https://www.lambdatest.com/blog/complete-guide-for-using-xpath-in-selenium-with-examples/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=blog), offering a clearer way to locate web elements. We can use the CSS Selector `div:nth-child(2) > div:nth-child(1) > input[type="checkbox"]` to locate the *Option 1* checkbox.

![image](https://cdn-images-1.medium.com/max/1200/0*dtGd7zVvkqFQzblG.png)

The *checkIfCheckboxOneIsTicked()* method will tick the *Option 1* checkbox and return a boolean value stating whether the checkbox is selected.
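The flow described above boils down to asserting on a boolean returned by a page object. Here is a minimal, self-contained Java sketch of the *assertTrue()* semantics — an illustration only, not the actual TestNG source, and the checkbox state is modeled as a plain boolean rather than a real WebElement:

```java
// Minimal sketch of assertTrue() semantics (illustration only, not the TestNG source).
public class AssertTrueSketch {

    // Overload with a failure message, as used in checkboxDemoTest():
    // throws an AssertionError carrying the message when the condition is false.
    public static void assertTrue(boolean condition, String message) {
        if (!condition) {
            throw new AssertionError(message);
        }
    }

    // Single-argument overload: fails with a generic message.
    public static void assertTrue(boolean condition) {
        assertTrue(condition, "Expected condition to be true");
    }

    // Models the checkIfCheckboxThreeIsDisabled() idea: negating isEnabled()
    // so that a disabled checkbox (isEnabled == false) yields true.
    public static boolean isDisabled(boolean isEnabled) {
        return !isEnabled;
    }

    public static void main(String[] args) {
        assertTrue(true, "Check box one is not ticked"); // passes silently
        assertTrue(isDisabled(false));                   // disabled checkbox passes the assertion
        System.out.println("all assertions passed");
    }
}
```

Running the sketch prints the final message only when both assertions pass; flipping either boolean reproduces the AssertionError a failing TestNG assertion would raise.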
![image](https://cdn-images-1.medium.com/max/1200/0*fzJgVjn1gnwRq54v.png)

**Filename:** [CheckboxDemoPage.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

```java
public class CheckboxDemoPage {
    public WebElement checkboxOne() {
        return getDriver().findElement(By.cssSelector("div:nth-child(2) > div:nth-child(1) > input[type=\"checkbox\"]"));
    }

    public WebElement checkboxThree() {
        return getDriver().findElement(By.cssSelector("div:nth-child(2) > div:nth-child(3) > input[type=\"checkbox\"]"));
    }

    public void tickCheckBoxOne() {
        checkboxOne().click();
    }

    public boolean checkIfCheckboxOneIsTicked() {
        tickCheckBoxOne();
        return this.checkboxOne().isSelected();
    }

    public boolean checkIfCheckboxThreeIsDisabled() {
        return !this.checkboxThree().isEnabled();
    }
}
```

The next assertion performed using the *assertTrue()* method in Java is to check that the *Option 3* checkbox is disabled.

![image](https://cdn-images-1.medium.com/max/1200/0*wrgxk3-0oy2HMAJm.png )

The *checkIfCheckboxThreeIsDisabled()* method from the *CheckboxDemoPage* class is used to check the disabled state of the checkbox. The point to note here is the *“!”* (negation) operator applied to the result of *isEnabled()* in the *checkIfCheckboxThreeIsDisabled()* method. Since *isEnabled()* returns *false* for a disabled checkbox, the negation makes the method return *true* when the checkbox is disabled and *false* otherwise.

***Get faster loading times and better user experience with our efficient*** [***JSON minify***](https://www.lambdatest.com/free-online-tools/json-minify?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***tool. Quickly compress your JSON data with ease and optimize your website now.***

**Test Execution**

The LambdaTest Username and Access Key must be provided for authentication to run tests successfully on the LambdaTest Cloud grid.
These values can be updated in the *Run Configuration* window, which can be opened using the *Modify Run Configuration* option by clicking the green Play button beside the test method name.

![image](https://cdn-images-1.medium.com/max/1200/0*9bHtsXf0hvI9fRpO.png)

The LambdaTest Username and Access Key values can be copied from the *Profile &gt;&gt; Account Settings &gt;&gt; Password and Security* window.

![image](https://cdn-images-1.medium.com/max/1200/0*OZ6EqQbpJYGHHVT8.png)

In the Run Configuration window, update the values for the Username and Access Key as follows:

`-DLT_USERNAME=<LambdaTest Username> -DLT_ACCESS_KEY=<LambdaTest Access Key>`

![image](https://cdn-images-1.medium.com/max/1200/0*PYxcQ8j7GGeHFwGh.png)

Once the credentials are successfully updated in the configuration window, the tests can be executed by selecting the Run Configuration from the dropdown in the top menu bar and clicking on the green button.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1q27urwopvwvpa0auz4e.png)

***Screenshot of the test execution***

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/noby5mpaj3z8r5wxmalh.png)

![image](https://cdn-images-1.medium.com/max/1200/0*OuFAW1cgKAqN_RPf.png)

All the test details can be viewed on the LambdaTest dashboard, including platform name, browser name, resolution, test execution logs, time taken to execute the tests, etc.

***Don’t waste time debugging your YAML files.
Use our free*** [***YAML validator***](https://www.lambdatest.com/free-online-tools/yaml-validator?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=free_online_tools) ***tool to validate your YAML code quickly, identify syntax errors, and fix them.***

### Implementation \[Test Scenario 2\]

In test scenario 2, we will navigate to the [Redirection](https://www.lambdatest.com/selenium-playground/redirection?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) page on [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) website and verify that the page title is displayed using the *assertTrue()* method in Java. The steps for test scenario 2 will be automated using the *redirectionPageTest()* method created inside the [*SeleniumPlaygroundTests.java*](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage) class.

**Filename:** [SeleniumPlaygroundTests.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

```java
@Test
public void redirectionPageTest() {
    createDriver(Browsers.REMOTE_CHROME);
    final String website = "https://www.lambdatest.com/selenium-playground/";
    getDriver().get(website);
    final HomePage homePage = new HomePage();
    homePage.navigateToLink("Redirection");
    final var redirectionPage = new RedirectionPage();
    assertTrue(redirectionPage.isPageTitleDisplayed());
}
```

The initial three lines of the test are the same as in the test scenario 1 implementation, so we will not repeat the configuration and setup for creating a WebDriver session and launching the Chrome browser.
Once the website is loaded in the browser, the user is navigated to the *Redirection* page, where we check that the title is displayed.

```java
public class RedirectionPage {
    public boolean isPageTitleDisplayed() {
        return getDriver().findElement(By.tagName("h1")).isDisplayed();
    }
}
```

The *RedirectionPage* class is created to hold all the page objects related to the *Redirection* page. The *isPageTitleDisplayed()* method returns a boolean value using Selenium WebDriver’s *isDisplayed()* method. First, the page title will be located, and then, using the *isDisplayed()* method, the verification will be done to check that it is displayed on the page. The *assertTrue()* method is finally used in the test to check the result of the boolean condition returned by the *isPageTitleDisplayed()* method.

**Test Execution**

The same steps followed while executing test scenario 1 for setting the LambdaTest *Username* and *Access Key* need to be followed to execute this test on the LambdaTest Cloud grid. As we have already set the *LambdaTest Username* and *Access Key* in the Configuration window, we can re-use the same window and just change the test method name to run this test. Click the Run/Debug Configuration dropdown and select the *Edit Configuration* option.

![image](https://cdn-images-1.medium.com/max/1200/0*Pjjwl-Ek3ahemGqo.png)

Select the *redirectionPageTest()* method name in the configuration window by clicking on the *three dots* next to the *Method name* field. Leave the other settings/configurations as they are and click “Apply” and then the “OK” button to close the configuration window.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/9lcuoe0c817a456smme5.png)

The tests can now be executed by clicking on the green button next to the Run/Debug configuration field.
![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wigei2f0sc0krlcy3znc.png)

***Screenshot of the test execution***

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/asywvftp8vdi5d8076yt.png)

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/ho5whd260ccvqlrym11t.png)

The test execution can be checked in the LambdaTest Build details window after logging in to the LambdaTest website. This window provides granular details of the test execution that help in understanding the test analytics and execution status.

### Implementation \[Test Scenario 3\]

In test scenario 3, we will navigate to the [Data List Filter](https://www.lambdatest.com/selenium-playground/data-list-filter-demo?utm_source=devto&utm_medium=organic&utm_campaign=jan_12&utm_term=ap&utm_content=webpage) page on [LambdaTest’s Selenium Playground](https://www.lambdatest.com/selenium-playground/) website to search for an attendee record by entering a name. Once the records are retrieved based on the name provided, the record will be verified using the *assertTrue()* method by checking if the attendee name in the record contains the name searched for. The steps for test scenario 3 will be automated using the *dataFilterPageTest()* method that is updated inside the [*SeleniumPlaygroundTests.java*](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage) class.
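At the heart of this scenario is a plain *String.contains()* check. The following self-contained Java sketch illustrates it in isolation — the attendee data used here is hypothetical, not taken from the real page:

```java
// Illustrative sketch of the String.contains() check behind the data-filter
// assertion. The attendee data below is hypothetical.
public class ContainsAssertionSketch {

    // Models the assertTrue(getAttendeeName().contains(attendeeName)) condition.
    public static boolean attendeeMatches(String attendeeName, String searchTerm) {
        return attendeeName.contains(searchTerm); // case-sensitive containment check
    }

    public static void main(String[] args) {
        String retrievedName = "Dwayne Johnson"; // hypothetical filtered result
        if (!attendeeMatches(retrievedName, "Dwayne")) {
            throw new AssertionError("Attendee name does not contain the searched text");
        }
        System.out.println("attendee record matched");
    }
}
```

Note that *contains()* is case-sensitive, so searching for "dwayne" would fail such an assertion even though the record is present.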
**Filename:** [SeleniumPlaygroundTests.java](https://www.lambdatest.com/automation-testing?utm_source=devto&amp;utm_medium=organic&amp;utm_campaign=jan_12&amp;utm_term=ap&amp;utm_content=webpage)

```java
@Test
public void dataFilterPageTest() {
    createDriver(Browsers.REMOTE_CHROME);
    final String website = "https://www.lambdatest.com/selenium-playground/";
    getDriver().get(website);
    final HomePage homePage = new HomePage();
    homePage.navigateToLink("Data List Filter");
    final var dataListFilterPage = new DataListFilterPage();
    final String attendeeName = "Dwayne";
    dataListFilterPage.searchAttendees(attendeeName);
    assertTrue(dataListFilterPage.getAttendeeName().contains(attendeeName));
}
```

In this test, we will first open the Chrome browser on the LambdaTest Cloud grid and navigate to the *Data List Filter* page on LambdaTest’s Selenium Playground website. Next, the test will search for the data with the attendee name “*Dwayne*”. Once the data is loaded on the page, the *assertTrue()* method will check the condition that the attendee name in the results contains the text “*Dwayne*”. The test will be marked as passed if the text is found in the attendee’s name in the data results.

**Test Execution**

The test for scenario 3 can be easily run by just changing the method name in the Run Configuration window, as shown in the screenshot below.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/io1m388246loq9z5ns6j.png)

Once the method is selected, click the Apply button and then the OK button to close the config window.
To run the test, click the green Play button on the toolbar at the top.

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/f3mayx5gtoau9tvpc7uu.png)

***Screenshot of the test execution***

![Image ](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h0r34ahnv4ljzf4r9he8.png)

![Image](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/4t7dwee5yxtzfyuxgtrh.png)

### Conclusion

An automated test is incomplete without assertions. Assertions help us derive the results of automated tests, which in turn provides faster feedback on software builds. *assertTrue()* in Java is used to perform assertions in automated tests by verifying a boolean condition. In web automation, this method can ideally be used to check if a particular web element, like a checkbox or a radio button, is enabled. It can also help in checking if a particular text is displayed, or if the text on a web page contains a particular string.

I hope this helps you in writing better assertions for the automated tests. Happy Testing!
faisalkhatri123
1,729,429
Buy Google 5 Star Reviews
https://dmhelpshop.com/product/buy-google-5-star-reviews/ Buy Google 5 Star Reviews Reviews...
0
2024-01-16T09:20:10
https://dev.to/ronrobinsonofficeal/buy-google-5-star-reviews-9
programming, react, python, tutorial
https://dmhelpshop.com/product/buy-google-5-star-reviews/
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/1s8ax873gtnv6f8oot04.png)

Buy Google 5 Star Reviews
Reviews represent the opinions of experienced customers who have utilized services or purchased products from various online or offline markets. These reviews convey customer demands and opinions, and ratings are assigned based on the quality of the products or services and the overall user experience. Google serves as an excellent platform for customers to leave reviews since the majority of users engage with it organically. When you purchase Buy Google 5 Star Reviews, you have the potential to influence a large number of people either positively or negatively. Positive reviews can attract customers to purchase your products, while negative reviews can deter potential customers.

If you choose to Buy Google 5 Star Reviews, people will be more inclined to consider your products. However, it is important to recognize that reviews can have both positive and negative impacts on your business. Therefore, take the time to determine which type of reviews you wish to acquire. Our experience indicates that purchasing Buy Google 5 Star Reviews can engage and connect you with a wide audience. By purchasing positive reviews, you can enhance your business profile and attract online traffic. Additionally, it is advisable to seek reviews from reputable platforms, including social media, to maintain a positive flow. We are an experienced and reliable service provider, highly knowledgeable about the impacts of reviews. Hence, we recommend purchasing verified Google reviews and ensuring their stability and non-gropability.

Let us now briefly examine the direct and indirect benefits of reviews:
Reviews have the power to enhance your business profile, influencing users at an affordable cost.
To attract customers, consider purchasing only positive reviews, while negative reviews can be acquired to undermine your competitors. Collect negative reports on your opponents and present them as evidence.
If you receive negative reviews, view them as an opportunity to understand user reactions, make improvements to your products and services, and keep up with current trends.
By earning the trust and loyalty of customers, you can control the market value of your products. Therefore, it is essential to buy online reviews, including Buy Google 5 Star Reviews.
Reviews serve as the captivating fragrance that entices previous customers to return repeatedly.
Positive customer opinions expressed through reviews can help you expand your business globally and achieve profitability and credibility.
When you purchase positive Buy Google 5 Star Reviews, they effectively communicate the history of your company or the quality of your individual products.
Reviews act as a collective voice representing potential customers, boosting your business to amazing heights.

Now, let’s delve into a comprehensive understanding of reviews and how they function:
Google, with its significant organic user base, stands out as the premier platform for customers to leave reviews. When you purchase Buy Google 5 Star Reviews, you have the power to positively influence a vast number of individuals. Reviews are essentially written submissions by users that provide detailed insights into a company, its products, services, and other relevant aspects based on their personal experiences. In today’s business landscape, it is crucial for every business owner to consider buying verified Buy Google 5 Star Reviews, both positive and negative, in order to reap various benefits.

Since both positive and negative reviews have an impact on online businesses and trading activities, it is important to determine which type of reviews align with your objectives. If your aim is to influence potential customers online and attract organic traffic, then investing in positive Buy Google 5 Star Reviews is recommended. However, it is crucial to prioritize security and only purchase verified Google reviews. On the other hand, if you wish to acquire negative Google reviews, it is advisable to first gather relevant feedback and reviews.

Why are Google reviews considered the best tool to attract customers?
Google, being the leading search engine and the largest source of potential and organic customers, is highly valued by business owners. Many business owners choose to purchase Google reviews to enhance their business profiles and also sell them to third parties. Without reviews, it is challenging to reach a large customer base globally or locally. Therefore, it is crucial to consider buying positive Buy Google 5 Star Reviews from reliable sources. When you invest in Buy Google 5 Star Reviews for your business, you can expect a significant influx of potential customers, as these reviews act as a pheromone, attracting audiences towards your products and services. Every business owner aims to maximize sales and attract a substantial customer base, and purchasing Buy Google 5 Star Reviews is a strategic move.

According to online business analysts and economists, trust and affection are the essential factors that determine whether people will work with you or do business with you. However, there are additional crucial factors to consider, such as establishing effective communication systems, providing 24/7 customer support, and maintaining product quality to engage online audiences. If any of these rules are broken, it can lead to a negative impact on your business. Therefore, obtaining positive reviews is vital for the success of an online business. To attract a large customer base, it is necessary to purchase Buy Google 5 Star Reviews for both local and international markets. Additionally, buying reviews from other platforms can further boost your business profile.

What are the benefits of purchasing reviews online?
In today’s fast-paced world, the impact of new technologies and IT sectors is remarkable. Compared to the past, conducting business has become significantly easier, but it is also highly competitive. To reach a global customer base, businesses must increase their presence on social media platforms as they provide the easiest way to generate organic traffic. Numerous surveys have shown that the majority of online buyers carefully read customer opinions and reviews before making purchase decisions. In fact, the percentage of customers who rely on these reviews is close to 97%. Considering these statistics, it becomes evident why we recommend buying reviews online. In an increasingly rule-based world, it is essential to take effective steps to ensure a smooth online business journey.

Buy Google 5 Star Reviews
Many people purchase reviews online from various sources and witness unique progress. Reviews serve as powerful tools to instill customer trust, influence their decision-making, and bring positive vibes to your business. Making a single mistake in this regard can lead to a significant collapse of your business. Therefore, it is crucial to focus on improving product quality, quantity, communication networks, facilities, and providing the utmost support to your customers.

Reviews reflect customer demands, opinions, and ratings based on their experiences with your products or services. If you purchase Buy Google 5-star reviews, it will undoubtedly attract more people to consider your offerings. Google is the ideal platform for customers to leave reviews due to its extensive organic user involvement. Therefore, investing in Buy Google 5 Star Reviews can significantly influence a large number of people in a positive way.

How to generate google reviews on my business profile?
Focus on delivering high-quality customer service in every interaction with your customers. By creating positive experiences for them, you increase the likelihood of receiving reviews. These reviews will not only help to build loyalty among your customers but also encourage them to spread the word about your exceptional service. It is crucial to strive to meet customer needs and exceed their expectations in order to elicit positive feedback. If you are interested in purchasing affordable Google reviews, we offer that service.

Once you have established a strong rapport with your customers through the provision of quality service, kindly request them to share their experiences on Google voluntarily. You can provide them with a direct link or clear instructions on how to leave a review. If possible, offering them a written script can simplify the process for them. Additionally, we offer the option to buy online reviews from us at a reasonable price, with a 100% replacement and cash back guarantee.

It is essential to reply or respond to the customer opinions left as reviews promptly. Make it easy for customers to leave reviews by prominently displaying review options on your website and social media profiles. Furthermore, consider offering incentives to customers who assist you by leaving reviews, such as providing them with better service at a discounted price.

Alternatively, if you are interested in generating verified Buy Google 5 Star Reviews for your website, you can quickly reach out to dmhelpshop.com. Our team of experts is readily available to help you purchase verified Google reviews at cost-effective prices.

Now, let’s discuss how Google reviews work and the value they add.
According to research conducted by various platforms in the field of online marketing, users tend to engage with reviews that they perceive as authentic. Once a review is submitted, it undergoes a moderation process to ensure compliance with Google’s content guidelines. Another study reveals that many individuals rely on reviews to inform their purchasing decisions. By purchasing online reviews from a trustworthy source, you can significantly enhance your business’s reputation in a short period of time.

Contact Us / 24 Hours Reply
Telegram:dmhelpshop
WhatsApp: +1 ‪(980) 277-2786
Skype:dmhelpshop
Email:dmhelpshop@gmail.com
ronrobinsonofficeal
1,731,106
Technical Writing Guidelines to Create AI-Friendly Content
The widespread adoption of artificial intelligence is fundamentally changing how people engage with...
0
2024-01-16T10:22:32
https://dev.to/ragavi_document360/technical-writing-guidelines-to-create-ai-friendly-content-k8k
The widespread adoption of artificial intelligence is fundamentally changing how people engage with information. Technological advancements, including Generative AI tools like ChatGPT and Bard, are reshaping the behaviors of content consumers. This shift is characterized by the following patterns:

- Increased efficiency in task completion
- On-demand access to documentation across various devices and formats
- A strong demand for accurate and reliable answers

As a result, technical communicators are facing new responsibilities. They are now tasked with ensuring the accuracy and trustworthiness of information delivered through Generative AI tools. Integrating these tools into existing knowledge bases presents an opportunity for organizations to enhance customer experiences. However, it also requires a thorough review and adaptation of content to align with the characteristics of AI-based agents, such as chatbots, assistive search, and Q&A bots. The responsibility of maintaining digital trust through these tools cannot be overstated.

## Characteristics of GenAI-based agents

If your customers are utilizing ChatGPT-like assistive search and your existing content is not tailored to accommodate the characteristics of GenAI-based agents, it is high time to conduct a content audit. The underlying content must be GenAI-friendly to ensure that it serves your customers with trustworthy responses. GenAI-based agents are text-hungry; thus, the underlying content must be as explanatory as possible. More importantly, the underlying content must be written in a conversational style and in a more generic persona.

Guidelines for writing content for GenAI-based agents are evolving. The technical writing community is also suggesting various tips for improving existing content. Let’s look at some of the emerging guidelines for producing GenAI-friendly content.
Also Checkout: [How Technical Writers Can Utilize ChatGPT?](https://document360.com/blog/chatgpt-for-technical-writing/)

## Top 8 Guidelines to Create GenAI-friendly Content

Writing content that is easily understood by GenAI-based agents involves incorporating clear language, structured formatting, and adherence to some specific guidelines. These guidelines are listed below:

### Guideline #1: Write elaborate content

Rather than choosing brevity for your content, make the content as explanatory as possible. Elaborate content with more information helps GenAI-based agents get a holistic perspective of the topic covered in your article. Use simple English words rather than bombastic ones; this helps the agents assimilate the content and answer your users’ queries. For example, this getting-started article from Airtable is about a 16-minute read.

### Guideline #2: Create FAQs for each article

Create at least 5–10 FAQs for each article. The questions for these FAQs can be sourced from the customer support team, customer success team, sales team, product team, and so on. These FAQs help GenAI-based agents quickly retrieve information and produce accurate responses to user queries in a short period. Here is an example of writing FAQs.

### Guideline #3: Use consistent business terms

Use consistent business terms across your knowledge base. Common definitions of business vocabulary help large language models understand the context better. E.g., if you are using terms such as clients, customers, users, and stakeholders synonymously but they have different business definitions, GenAI might get confused, as the “sentence similarity” between those terms is very close. GenAI-based agents might produce inconsistent responses if those terms are used in the customers’ questions. Here is an example of a business glossary built with a list of all terms that are consistently used across all knowledge base articles.
To read the full blog post: [Creating AI-Compatible Content: Guidelines for Technical Writing (document360.com)](https://document360.com/blog/chatgpt-for-technical-writing/)
ragavi_document360
1,731,117
React Forms: Controlled and Uncontrolled Components.
React provides developers with two main approaches to handling form data: controlled and uncontrolled...
0
2024-01-16T10:36:41
https://dev.to/delaquash/react-forms-controlled-and-uncontrolled-components-3kcm
webdev, react, javascript, frontend
React provides developers with two main approaches to handling form data: controlled and uncontrolled forms. In most cases, the method recommended by the React team is the controlled approach; the uncontrolled approach is still valid, but it offers less flexibility than controlled forms.

## Controlled Forms

A controlled form in React is a form where the form elements (like input fields) are controlled by the state of the React component. In other words, the values of the form elements are stored in the component's state, and any changes to the form elements are handled by updating the state. React encourages the use of controlled components for forms, where the component state manages the values of form elements. The provided code snippet illustrates the concept of a controlled form using React hooks.

![React Controlled Form](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xhj9ito4okzdmjkwj995.png)

In this example, the useState hook is employed to initialize a piece of state named value with an initial value of an empty string. The setValue function is later used to update this state.

### Input Element Binding

The input element is bound to the value state, ensuring that its value is always controlled by the React component's state. The onChange event is connected to the handleChange function, which updates the state with the latest value as the user types.

### Benefits of Controlled Forms

1. Predictable State: The state of the form elements is predictable and can be easily managed by React.
2. Single Source of Truth: The component state serves as a single source of truth for the form data, making it easier to track and manage.
3. Dynamic Updates: Reacting to user input is seamless, allowing for dynamic updates based on the component state.
### Uncontrolled Forms

![React Uncontrolled Form](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/wrxvl63xjav5i9u83b4u.png)

In this example, an uncontrolled form is created using a combination of useRef and traditional HTML form elements. The inputRef is utilized to reference the DOM input element without managing its state through React.

### Uncontrolled Form Characteristics

1. No React State Management: Unlike controlled forms where form elements are connected to React state, uncontrolled forms do not rely on state for managing form data.
2. Direct DOM Interaction: The ref is used to directly access the DOM element, allowing developers to interact with the form elements without involving React state.
3. No Event Listeners for Value Changes: Uncontrolled forms do not attach onChange event listeners to track input changes. Instead, the value is directly accessed when needed, as shown in the handleSubmit function.

### Benefits of Uncontrolled Forms

1. Simplicity: Uncontrolled forms are simpler and might be preferred for scenarios where React's state management is not necessary.
2. Ref-Based Interaction: Directly interacting with the DOM through ref provides a straightforward way to access form data without the need for state updates.
3. Compatibility with Non-React Code: Uncontrolled forms can be useful when integrating React into projects with existing non-React code, as they align more closely with traditional HTML forms.

### Considerations

1. Limited React Features: Uncontrolled forms sacrifice some of React's powerful features, such as easy state management and dynamic updates.
2. Validation and Testing: Implementing form validation and testing might be more challenging in uncontrolled forms compared to controlled ones.

## Conclusion

Controlled forms provide a structured way to handle form data in React applications.
By managing the form elements through component state, developers gain more control over the form's behavior and can easily integrate it with other React features.
delaquash
1,731,188
DIY Customization: Transform Your Plain Sweatshirt into a Fashion Statement
Are you tired of wearing plain and boring sweatshirts? Do you want to stand out from the crowd and...
0
2024-01-16T12:09:11
https://dev.to/isrealwelch/diy-customization-transform-your-plain-sweatshirt-into-a-fashion-statement-3i4p
Are you tired of wearing plain and boring sweatshirts? Do you want to stand out from the crowd and make a fashion statement? Look no further! In this blog post, we will guide you through the process of transforming your plain **[sweatshirt](https://www.meant2bekids.com/)** into a unique and stylish piece that will turn heads wherever you go. Get ready to unleash your creativity and show off your personal style!

## Why Customize?

### Express Your Individuality

Your style is a reflection of your personality. By customizing your sweatshirt, you can express your individuality and showcase your unique taste. No more blending in with the crowd - it's time to let your true colors shine!

### Save Money

Why spend a fortune on designer sweatshirts when you can create your own one-of-a-kind piece at a fraction of the cost? DIY customization allows you to unleash your creativity without breaking the bank. Plus, you'll have the satisfaction of wearing something that is truly unique and made with love.

### Reduce Waste

In a world where fast fashion is the norm, DIY customization is a sustainable alternative. By repurposing and redesigning your old sweatshirts, you can reduce waste and contribute to a more eco-friendly fashion industry. It's a win-win situation for both your style and the planet!

## Step-by-Step Guide

### Step 1: Preparing the Sweatshirt

Start by washing and drying your sweatshirt to ensure a clean canvas for your customization. Iron out any wrinkles to make the surface smooth and ready for design.

### Step 2: Sketching Your Design

Using a fabric marker or paint, sketch your design directly onto the sweatshirt. You can draw freehand or use stencils or stickers for more precise shapes. Let your imagination run wild and create a design that speaks to you.

### Step 3: Adding Embellishments

If you want to take your customization to the next level, consider adding embellishments like iron-on patches, appliques, or beads. 
These small details can make a big impact and elevate your sweatshirt from plain to extraordinary.

### Step 4: Sewing Details (optional)

If you're handy with a needle and thread, you can take your customization even further by sewing additional details onto your sweatshirt. This could be anything from embroidery to patches to unique stitching patterns. Get creative and make your sweatshirt truly one-of-a-kind!

### Step 5: Show it Off!

Once your customization is complete, proudly wear your transformed sweatshirt and let it be a reflection of your style and creativity. Stand tall, strut your stuff, and watch heads turn as you rock your personalized fashion statement!

## Conclusion

Customizing your plain sweatshirt is a fun and creative way to express your individuality, save money, and reduce waste. With a few simple materials and a lot of imagination, you can transform a plain piece of clothing into a fashion statement that is uniquely yours. So go ahead, unleash your creativity, and let your sweatshirt tell your story!
isrealwelch
1,731,270
Legacy Electrical: Electrician Nottingham and EICR Nottingham
When it comes to electrical requirements, Legacy Electrical stands out as the definitive choice,...
0
2024-01-16T12:57:21
https://dev.to/business123/legacy-electrical-electrician-nottingham-and-eicr-nottingham-466a
eicr, electrical
When it comes to electrical requirements, Legacy Electrical stands out as the definitive choice, excelling in Electrician services in Nottingham and EICR services. Our highly skilled professionals are dedicated to providing top electrical solutions, emphasizing safety, compliance, and efficiency.

## Why Choose [Legacy Electrical in Nottingham](https://legacyelectrical.co.uk)?

- Expertise in Electrician Services and [EICR in Nottingham](https://legacyelectrical.co.uk/): Our certified electricians bring a wealth of experience and specialised knowledge to every project, focusing on Electrician services and thorough EICR assessments, ensuring compliance with Nottingham's standards.
- Reliable Service Delivery: From installations to repairs and EICR Nottingham, we prioritise reliability and punctuality, ensuring seamless functionality for your property.
- Safety is Important: Committed to upholding the highest safety standards, our Electrician services and EICR checks guarantee peace of mind for Nottingham residents.

## Our Distinctive Offerings for Nottingham Residents

- Comprehensive Electrician Services: Nottingham's lighting installations, complete rewiring projects, or Electrician-specific tasks are efficiently handled by our adept team.
- Designed Solutions for Nottingham: Recognising the unique needs of Nottingham customers, we offer customised solutions, including detailed EICR assessments specific to the region's requirements.
- Embracing Cutting-edge Technology: Keeping pace with Nottingham's technological advancements, we provide innovative solutions using the newest technology for Electrician services and comprehensive EICR evaluations.
- Transparent Pricing: No hidden costs or surprises. Our commitment is to transparent pricing for Electrician services and comprehensive EICR reports in Nottingham, ensuring the best value for your investment.

Customer satisfaction is our priority at Legacy Electrical, and Nottingham residents trust us for all their Electrician and EICR needs. 
Whether it's a minor electrical repair or a major overhaul, Legacy Electrical is your trusted partner in Nottingham. Contact us today for Nottingham's premier Electrician service, including detailed EICR assessments, and let us illuminate your space with excellence.

## About Us

At Legacy Electrical, our expertise spans all facets of electrical work in Nottingham. From installing a new socket in your kitchen to designing, installing, and testing full-scale home rewiring projects, our friendly and knowledgeable team is here to advise and address any questions or concerns. We're pleased to provide a free, no-obligation quote, designed specifically for Electrician services and EICR assessments in Nottingham, Arnold, Beeston, Hucknall, West Bridgford and Woodthorpe.
business123
1,731,356
7 Alternatives to REST APIs
Seven REST alternatives you need to know, and the most common use cases that support their use in the field.
0
2024-01-16T14:22:18
https://dev.to/pubnub-pl/7-alternatywy-dla-interfejsow-api-rest-b28
Representational State Transfer (REST) is an architectural style and protocol for building web services. It is a popular approach to designing web application programming interfaces (APIs) because it emphasizes scalability, simplicity, and modifiability. Unlike the strict frameworks that govern API protocols such as the [Simple Object Access Protocol](https://www.pubnub.com/learn/glossary/what-is-soap-simple-object-access-protocol/) (SOAP) or Extensible Markup Language remote procedure call (XML-RPC), REST has historically been used to simplify API development, since REST APIs can be built in virtually any programming language and support a variety of data formats. However, the emergence of several REST alternatives is creating a new flashpoint for API development over the next decade. This trend includes other protocols, patterns, and technologies, such as [event-driven](https://www.pubnub.com/guides/event-driven-architecture/) APIs, GraphQL, and gRPC. As these protocols reach maturity and broader acceptance, API developers will need to understand how best to implement REST alternatives across different platforms. Let's examine what makes these REST alternatives so popular, and the most common use cases that support their use in the field.

**Why are REST alternatives gaining popularity?**
------------------------------------------------------

[RESTful APIs](https://www.pubnub.com/guides/restful-apis/) are popular because they are flexible, easy to understand, and compatible with any programming language or platform that supports [HTTP](https://www.pubnub.com/guides/http/) (Hypertext Transfer Protocol). They are also well suited to building scalable and distributed systems, since they can take advantage of HTTP's caching and [load balancing](https://www.pubnub.com/guides/load-balancing/) capabilities. 
REST prioritizes easy modification through resources and uniform resource identifiers (URIs) that represent data. This means developers can change the structure of an API without breaking existing client applications. So why would developers use anything else? There are several important reasons:

1. **Growing complexity.** REST APIs were designed to overcome the complexity of earlier API protocols such as SOAP, but they can become hard to maintain as the number of endpoints and resources grows. This can make it difficult for developers to understand and modify the API over time.
2. **Performance.** In some cases, REST APIs are scalable and can handle many requests. However, for real-time or low-latency applications there may be better options, since REST relies on multiple round-trip requests to fetch data.
3. **Changing data requirements.** REST APIs may require significant changes to support new use cases or data structures, leading to versioning and compatibility problems and increasing complexity and development time.
4. **Use-case-specific requirements.** There are specific use cases, such as real-time data streaming or low-power [Internet of Things (IoT) devices](https://www.pubnub.com/solutions/iot-device-control/) (as mentioned above), where other protocols may be a better fit than REST.
5. **Developer preferences.** Developers may prefer alternative protocols because they are more familiar with them, or because those protocols offer specific features or benefits that REST does not provide. 
**Alternatives to REST APIs**
----------------------------------------

Here are seven REST alternatives you should know:

### **GraphQL**

GraphQL is a runtime and query language for APIs that lets clients request and receive only the data they need, making it more efficient than REST. With GraphQL, clients can specify the exact data they need and get it in a single request, instead of sending multiple requests to different endpoints as with REST. It is a great choice for data-driven applications with complex and changing data requirements. It may not be the best choice for applications that require strict data validation, applications that must support a wide range of clients, or user-facing applications such as social media. It is, however, an excellent alternative to REST in situations that call for flexible and efficient data fetching and manipulation. This is especially true for applications with complex data models, or mobile applications with limited connectivity.

### **gRPC**

gRPC is an open-source framework developed by Google for building RPC APIs. It lets developers define service interfaces and generate client and server code in multiple programming languages. gRPC uses protocol buffers, a language-agnostic data serialization format, for efficient data transfer, making it well suited for high-performance applications. gRPC may not be the best choice for heavy data manipulation, or for applications that must support a wide range of clients. Nevertheless, gRPC is known for high performance and low overhead, making it a good choice for applications that require fast and efficient communication between services. 
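To make GraphQL's single-request idea concrete, here is a small sketch of what a client payload looks like. The schema, field names, and endpoint are invented for illustration; only the `{ query, variables }` envelope is the standard GraphQL-over-HTTP request shape:

```typescript
// The client names exactly the fields it needs; one POST replaces several
// REST round trips (/users/:id, /users/:id/posts, ...).
const query = `
  query UserCard($id: ID!) {
    user(id: $id) {
      name
      avatarUrl
      posts(limit: 3) { title }
    }
  }
`;

// Standard GraphQL-over-HTTP request body: the query plus its variables.
const body = JSON.stringify({ query, variables: { id: "42" } });

// A real client would now POST this body, e.g.:
// fetch("https://example.com/graphql", {
//   method: "POST",
//   headers: { "Content-Type": "application/json" },
//   body,
// });

const parsed = JSON.parse(body);
console.log(parsed.variables); // the variables travel alongside the query
```

The server resolves only the named fields and returns them in one response, which is what makes GraphQL attractive for clients on limited connections.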
### **WebSockets**

The [WebSocket protocol](https://www.pubnub.com/guides/what-are-websockets-and-when-should-you-use-them/) enables real-time, two-way communication between clients and servers. Unlike REST, which is request/response based, WebSockets let servers push data to clients as soon as it becomes available, making them ideal for applications that require real-time updates, such as chat applications and online games. WebSockets may not be the [best choice](https://www.pubnub.com/blog/websockets-alternatives/) for applications that require complex data manipulation, or for applications where scalability is a concern. They excel, however, wherever real-time, low-latency communication is required, thanks to a persistent, full-duplex connection between client and server. REST uses a less efficient request/response model.

### **MQTT**

[MQTT](https://www.pubnub.com/guides/mqtt/) is a lightweight, open-source messaging protocol designed for IoT devices. It is a [pub/sub protocol](https://www.pubnub.com/guides/everything-you-need-to-know-about-pub-sub/) with a small packet size and low bandwidth requirements, making it ideal for constrained networks and devices with limited computing power. MQTT can also handle intermittent network connectivity, and it supports quality of service (QoS) levels to ensure reliable message delivery. It is not the best choice for complex interactions or data-manipulation applications. However, for lower-bandwidth use cases and for preserving battery life (MQTT lets devices "sleep" between messages, extending the battery life of IoT devices), it offers excellent capabilities. 
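The topic-based pub/sub model that MQTT relies on can be illustrated with a tiny in-process sketch. There is no broker, QoS handling, or network here, and the topic names are invented, so treat it as a mental model of topic-keyed delivery rather than MQTT itself:

```typescript
// Topic-keyed publish/subscribe: publishers and subscribers never reference
// each other directly; only the topic string connects them.
type Handler = (payload: string) => void;

const subscriptions = new Map<string, Handler[]>();

function subscribe(topic: string, handler: Handler): void {
  const handlers = subscriptions.get(topic) ?? [];
  handlers.push(handler);
  subscriptions.set(topic, handlers);
}

function publish(topic: string, payload: string): void {
  // A real MQTT broker would additionally apply QoS levels, retained
  // messages, and wildcard topic matching at this point.
  for (const handler of subscriptions.get(topic) ?? []) {
    handler(payload);
  }
}

const readings: string[] = [];
subscribe("sensors/livingroom/temperature", (p) => readings.push(p));
publish("sensors/livingroom/temperature", "21.5");
console.log(readings); // delivered without the subscriber polling anything
```

The decoupling is the point: a battery-powered sensor only needs to publish to a topic and can sleep in between, while any number of consumers subscribe independently.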
### **Event-driven architecture (EDA)**

In EDA, events trigger and communicate changes between different components or services in a system. This enables real-time, reactive data processing and can reduce the need to repeatedly poll resources, which can be resource- and time-intensive in REST-based systems. [EDA](https://www.pubnub.com/docs/general/basics/add-serverless-business-logic) is a good REST alternative for applications that require real-time data processing, scalability, and loose coupling between the different components or services in a system. It is also well suited to microservice architectures, allowing each microservice to operate independently and communicate with other services via events. This enables better scalability, flexibility, and resilience for the system as a whole.

### **FALCOR**

Companies can innovate in development, too. FALCOR is a [JavaScript](https://www.pubnub.com/guides/javascript/) library developed by Netflix for building efficient and flexible APIs. It takes a "path-based" approach to data fetching, representing data as a graph of interconnected paths rather than as individual resources accessed via HTTP requests. It offers benefits such as:

- WebSocket support, which lets real-time data updates be pushed to clients without repeated polling.
- Declarative data fetching, where the client specifies the data it needs and the API responds with the requested data. This simplifies [client-side](https://www.pubnub.com/blog/chat-application-architecture-explained/) code and reduces the amount of data sent over the network.

The name "FALCOR" doesn't stand for anything. It was chosen by Netflix's developers to represent the library's approach to data fetching. 
It is inspired by the character of the same name from the 1980s film "The NeverEnding Story," a dragon-like creature that can travel across different dimensions and worlds, much like FALCOR's ability to traverse complex data graphs.

### **Functions**

PubNub's [Functions](https://www.pubnub.com/products/functions/) are JavaScript event handlers that can be executed on PubNub messages in transit, or in the request/response style of a RESTful API over HTTPS. For those familiar with Node.js and Express, developing on-request event handlers will feel very similar. Functions can implement new or additional real-time features, such as chat moderation and language translation. Run your code on our network, or use our existing integrations to transform, reroute, augment, filter, and aggregate messages to detect and block spammers, compute averages, and more. All code runs at the network edge, which keeps latency low, and it is robust enough that you don't need to build your own infrastructure.

**Enabling innovation, one REST alternative at a time**
---------------------------------------------------------------

REST alternatives are gaining popularity because of their ability to address the challenges facing REST APIs. Each alternative has its own unique strengths and use cases. When choosing a REST alternative, consider your application's specific needs and requirements, whether that is the scale of [live events](https://www.pubnub.com/solutions/live-events/) or real-time capabilities for [Web3](https://www.pubnub.com/solutions/web3/), as well as the strengths and limitations of each REST alternative. Ultimately, by exploring these alternatives, developers can build more efficient and effective APIs that meet the needs of modern applications. Bring your application to life. 
[Contact us](https://www.pubnub.com/company/contact-sales/?&utm_source=clearvoice&utm_medium=blog&utm_campaign=contact_sales_cta) to discuss your real-time project or, better yet, [see it in action for yourself](https://admin.pubnub.com/#/register).

How can PubNub help you?
=========================

This article was originally published on [PubNub.com](https://www.pubnub.com/blog/7-alternatives-to-rest-apis/)

Our platform helps developers build, deliver, and manage real-time interactivity for web apps, mobile apps, and IoT devices. The foundation of our platform is the industry's largest and most scalable real-time messaging network. With over 15 points of presence worldwide serving 800 million monthly active users and 99.999% reliability, you'll never have to worry about outages, concurrency limits, or any latency caused by traffic spikes.

Explore PubNub
-------------

Check out the [Live Tour](https://www.pubnub.com/tour/introduction/) to understand the essential concepts behind every PubNub-powered app in less than 5 minutes.

Get set up
-----------------------

Sign up for a [PubNub account](https://admin.pubnub.com/signup/) for immediate, free access to PubNub keys.

Get started
----------

The PubNub [docs](https://www.pubnub.com/docs) will get you up and running, regardless of your use case or [SDK](https://www.pubnub.com/docs).
pubnubdevrel
1,731,386
Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates
If your Windows Server 2012 servers that are Arc-enabled to receive their Extended Security Updates...
0
2024-01-16T15:08:41
https://www.techielass.com/troubleshoot-windows-server-2012-arc-enabled-servers-not-receiving-updates/
azure, windows
![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://www.techielass.com/content/images/2024/01/Troubleshoot-Windows-Server-2012-Arc-enabled-servers-not-receiving-updates.png)

If your Windows Server 2012 servers that are [Arc-enabled](https://www.techielass.com/tags/azure-arc) to receive their Extended Security Updates (ESU) aren't receiving the updates, there are some steps you can take to troubleshoot the issue. This blog post explores those steps.

## Step 1 - Check the Azure Portal

The first step is to check the Azure portal to see whether the server has been assigned an ESU licence.

Head to [<u>https://portal.azure.com</u>](https://portal.azure.com/?ref=techielass.com) and launch the Azure Arc blade. Click on **Machines** down the left-hand side.

![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/xAOBNw9NF59XdP9Z_qrLg87llRWwv5u_l6zfVchnIs2Bq6-yem0coX0-iXNeR7r_s40t5edl4UQQ66JlH5e7Cf2xXceRCv3V21xno82R1ysBJpS8NqN3x9JkkaJ0niMt_T3xjGqmLb8btIw9FA6zXZ8)

_Azure Arc blade in the Azure portal_

Find the server that isn't receiving updates and click on the server name. When the server information loads, check under capabilities to ensure ESU is **Enabled**.

![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/M_946o1BhQrKhocwreyu7iOBLVy_7Uyaa7tmIlL91yNA4b9DrSa9_qaXRyH-avUMnCAI6V-bUnqttmOqm8hEmNgvaVsYERg5Y9OHyDuduYzAQTbLj1ds6lS_tWO5_HsRzWq-NnuVDA2doR2SvVR7GYY)

_Azure Arc machine status_

If the server states that the ESU capability is **Not Enabled**, assign the correct ESU license to the server to enable it to receive updates. 
![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/UB200Yzt6VwvsG6x2r-ecDxzgnZsWhyNYv5OwaXsfppkhxN-yVAS84--9LrnhAfMQ5K-VXka60HKYgaIYxzy8QrCxFGl-mQjS4SQaWL3DTagRnxLMdz9tOej1NAKUtqlIjiR6o58uOL2rAk5ObVCJFM)

_Azure Arc machine status_

If the server states that the ESU capability is **Enabled**, move on to the next troubleshooting step.

## Step 2 - Check the Azure Arc version installed

The next troubleshooting step is to check which version of the Azure Arc agent is installed. The ESU capability was enabled in [<u>version 1.34</u>](https://learn.microsoft.com/azure/azure-arc/servers/agent-release-notes?ref=techielass.com#version-134---september-2023) of the Azure Arc agent. That version or above needs to be installed on the server.

To check the version of the Arc agent, log onto the affected server. Launch a PowerShell command terminal and type in the following command:

```
azcmagent version
```

![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/pIqHy0jVszF5WNMINtk6cIqEXquHaYnXRO-4dlyssX5B_h765BrhjvaN3lvBdYRxedBhGsmfvzarQ9ODk7c_AlA5uK7R14YdyB7ePeOZ61A4FuFLZ8AJgPmCmXVWAXfPR-dGbOTPIMYbb3Izxvs8f94)

_Check Azure Arc version_

If the agent is below version 1.34, [<u>follow the upgrade process</u>](https://www.techielass.com/updating-the-azure-arc-agent-connected-machine-agent/) to bring the agent to a higher level. If the agent is at version 1.34 or above, move on to step 3.

## Step 3 - Check the status of the Azure Arc agent

The next step is to ensure the Azure Arc agent is connected and working as expected. Launch a PowerShell command terminal and type in the following command:

```
azcmagent show
```

You are looking for two key pieces of information. The first is the **Agent Status** and **Agent Last Heartbeat**: they should state Connected and list a time and date close to your current time and date. 
The second piece of information you are looking for is the **Extended Security Updates Status**. That should read as Active.

![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/jI6okcAuZGfIEE0y-1vc7KgCKESIXMbEhepfFW1semcJXVUDpItrAoz-641R39S_3S3YNHQ-ACAOkekZJIjx972XVEfGFoCKNrFrrHDyqiXNDocGQXFxOmZWyy15gsQPiTRYH3hZEBUxJIcSBnNbqYI)

_Azure Arc agent status_

If these areas report as connected and active, move on to troubleshooting step 4. If they report something else, go through the Azure Arc agent prerequisite requirements to ensure the correct networking and firewall rules etc. are in place as required. Also, check that the appropriate ESU license is assigned and active.

## Step 4 - Check the registry

On the server, we want to confirm the registry setting is configured as it should be. To do this, click on the **Windows icon**, then search for **regedit**.

Check the registry key: **[HKEY\_LOCAL\_MACHINE\SOFTWARE\Microsoft\Azure Connected Machine Agent\ArcESU] "Enabled"**

![Troubleshoot Windows Server 2012 Arc-enabled servers not receiving updates](https://lh7-us.googleusercontent.com/jVvsgKStJ-vGa5tmKATty9w-ywhXiGnzPyPw9z_LGmpBUCvm9LYe0NMKkNghGF7ASWPGWRZSDC70L3dTtVG2zbms6MdT6IOQG1_xDrpftldkUdJMF4YBNOhvb-6ObzZ1ujKI7dvBdZGdX0xMCz4C8tA)

_Azure Arc registry settings_

A value of 1 means that the machine can receive the latest Extended Security Update patches. A value of 0 means the server is not enabled for Arc-based ESUs and won't receive ESUs via that route.

## Step 5 - Check patches

The server needs to be up to date with several patches installed. These are patches from previous updates, so if you've been keeping your servers up to date they should already be there. However, it's worth checking to ensure they have been installed. 
- For _Windows Server 2012 R2_, you must have the servicing stack update (SSU) ([<u>KB5029368</u>](https://support.microsoft.com/help/5029368?ref=techielass.com)) that is dated August 8, 2023, or a later SSU, installed. Also ensure the Extended Security Updates (ESU) Licensing Preparation Package dated August 10, 2022 ([<u>KB5017220</u>](https://support.microsoft.com/help/5017220?ref=techielass.com)) is installed.
- For _Windows Server 2012_, you must have the servicing stack update (SSU) ([<u>KB5029369</u>](https://support.microsoft.com/help/5029369?ref=techielass.com)) that is dated August 8, 2023, or a later SSU, installed. Also ensure the Extended Security Updates (ESU) Licensing Preparation Package dated August 10, 2022 ([<u>KB5017221</u>](https://support.microsoft.com/help/5017221?ref=techielass.com)) is installed.

## The issue is still there

If you've checked through all of these steps and you still have an issue, my next suggestions would be:

- Investigate your method of applying updates; for example, is the connection between this server and the WSUS server working correctly?
- Uninstalling and unregistering the server within Azure Arc and starting again from scratch might be a good idea, in case some configuration is blocking the updates from being applied.
- Log a support ticket with Azure support to get advice.

## Conclusion

By following these comprehensive troubleshooting steps, you can proactively address any challenges in the ESU update process, thereby enhancing the security posture of your Windows Server 2012 environments. Regularly monitoring and maintaining the ESU updates will contribute to a robust and resilient infrastructure, safeguarding your systems against potential vulnerabilities.
techielass
1,731,420
Decoding the Myth of 'Junior' in DevOps and SRE: Navigating Challenges and Cultivating Expertise
In my view, assigning roles such as 'Junior DevOps' and 'Junior SRE (Site Reliability Engineer)'...
0
2024-01-16T15:38:01
https://dev.to/edersonbrilhante/decoding-the-myth-of-junior-in-devops-and-sre-navigating-challenges-and-cultivating-expertise-4bmk
beginners, devops, discuss, career
In my view, assigning roles such as **'Junior DevOps'** and **'Junior SRE (Site Reliability Engineer)'** seems impractical, reminiscent of labeling someone an **'Entry-Level Software Architect.'**

## Navigating the intricate landscape

Navigating the intricate landscape of DevOps and SRE demands proficiency in **coding, networking, cloud technologies, security, and system administration.** Envisioning someone with limited experience adeptly maneuvering through this multifaceted skill set poses a significant challenge.

## The Software Architect analogy

Similarly, giving the title **"Software Architect"** to beginners doesn't align with the intricate demands of the role. Crafting sophisticated software solutions requires years of practical experience, involving intricate system design and understanding. Expecting a junior engineer to architect and implement a secure, scalable microservices architecture without in-depth knowledge and experience in the design principles of distributed systems is unrealistic.

## Quantity vs. Experience fallacy

Furthermore, the belief that numerous junior roles collectively can achieve the same level of effectiveness as a seasoned professional echoes the fallacy of favoring **quantity over experience.** While each junior role contributes to the team's growth, the efficiency and strategic thinking of an experienced architect often outpace the combined efforts of multiple entry-level professionals.

## Pressure on companies

In addition, the pressure on companies to leverage the benefits of DevOps and SRE roles within their organization often stems from the growing need for seamless integration between development and operations. Individuals in these positions are expected to possess a profound understanding of both coding and operations, creating a unique blend of skills. Unfortunately, finding professionals who embody this multidisciplinary expertise is a formidable challenge. 
Those who can seamlessly bridge the gap between traditional sysadmins and developers are not only rare but also come at a premium, given the scarcity of individuals with such comprehensive skills in the overall job market.

## Scarcity leading to desperation

This scarcity sometimes leads companies to consider entry-level candidates, hoping to quickly train them to fill the void. However, the complex nature of the disciplines touched upon by DevOps and SRE roles means that becoming proficient in each area takes **years of hands-on experience.** The high demand and limited supply of individuals with these multifaceted skills contribute to the desperation companies feel in recruiting for these roles.

## Acknowledging the shortage

Acknowledging this shortage is crucial, especially as it extends beyond DevOps and SRE roles to other senior positions. Over the past two decades, the industry has witnessed a trend of companies poaching professionals from one another rather than investing in training new talents. This cycle has created a snowball effect, further exacerbating the shortage of skilled individuals.

## Solution: Attracting seasoned developers

A potential solution lies in attracting seasoned developers with a penchant for infrastructure and operations to transition into roles in DevOps and SRE. These individuals often bring a wealth of experience, having naturally acquired knowledge in areas beyond coding, such as security, infrastructure, databases, and operations. Their diverse skill set aligns with the demands of contemporary senior developers who are expected to possess expertise beyond language-specific coding skills. By encouraging such transitions, companies can tap into a pool of experienced professionals and mitigate the challenges associated with the scarcity of multidisciplinary talent in the market. 
## Recommended pathway for aspiring professionals

For aspiring professionals entering the tech industry, a recommended pathway involves starting as a developer before venturing into the multifaceted realms of DevOps and SRE. Beginning as a developer allows individuals to hone their coding skills and gain a solid foundation in software engineering principles. As they accumulate experience and familiarity with the development lifecycle, they can then gradually navigate towards operations, infrastructure, and other related disciplines. This gradual journey not only provides a comprehensive understanding of the intricacies of both coding and operations but also allows individuals to develop a deeper appreciation for the challenges addressed by DevOps and SRE roles. This approach acknowledges the value of hands-on experience and ensures that individuals entering these dynamic fields are well-equipped to contribute meaningfully to the integration of development and operations within an organization.
edersonbrilhante
1,731,425
Demystifying the Amazon Practitioner Certification in 2024: A Personal Account and Study Tips
A bit about me: My name is Hudson, I'm a frontend and mobile developer at OPEN...
0
2024-01-19T03:41:00
https://dev.to/hudson3384/desvendando-a-certificacao-amazon-practitioner-2024-um-relato-e-dicas-de-estudo-b1m
aws, programming, certification, linux
## A bit about me: My name is Hudson, I'm a frontend and mobile developer at OPEN Datacenter, and I like to share knowledge about my field and the areas I'm interested in moving into. I study topics such as backend, DevOps, Linux, and automation. ## What is the Amazon Practitioner: The Amazon Practitioner is an entry-level certification that tests your knowledge of the AWS cloud. It focuses on concepts and is recommended even for people who work with the cloud indirectly, such as marketing and sales staff. ## My experience: The exam is taken through the Pearson Vue platform. Before taking the exam, you have access to a demo executable identical to the one used on exam day. This executable checks your computer for software that could prevent you from completing the exam (Mac or Windows only), and also validates the microphone, camera, and speakers required to start the exam. When your exam time arrives, you must have downloaded the executable sent to you after the demo test and follow the instructions. The first instruction is to submit a selfie, followed by a photo of your ID document (you must have it on hand for the exam to be released). You then have to take photos of all four sides of your computer desk. These photos can be submitted from the computer or through a web page accessible via QR code on your phone. ### After this step, the examiner comes in: When scheduling the exam, you choose the examiner's language; currently, only English and Spanish are available. Using your webcam, the examiner will ask you to look in all the directions covered in the previous step and, if they spot anything irregular, will ask you to show the item and then put it away. At this point, your phone should no longer be visible to the examiner, to keep the process moving. ### Finally, the exam: There are 65 multiple-choice questions in 90 minutes, with undisclosed weights and difficulty levels, focused on four domains. 
## My results: On Pearson Vue, you can review every question at the end of the exam. It also flags which questions were left unanswered and lets you mark questions to revisit at the end as well. ## Wrapping up: After the exam, they ask 10 questions about the platform and your satisfaction level. Right after that, you already have your result. Five days later, an email arrives confirming it. ## Main topics: Although I took the exam in 2023, I studied with content from previous years, and honestly, not much changes. The key topics are: 1. **IAM (Identity and Access Management):** - Secure management of access to AWS services and resources. 2. **EC2 (Elastic Compute Cloud):** - Running scalable virtual machines in the AWS cloud. 3. **S3 (Simple Storage Service):** - Scalable, durable data storage and retrieval. 4. **RDS (Relational Database Service):** - Deploying and managing relational databases on AWS. 5. **VPC (Virtual Private Cloud):** - Creating isolated, customized networks in the cloud. 6. **Lambda:** - Running code without provisioning or managing servers. 7. **Route 53:** - Registering and managing domain names and DNS services. 8. **CloudWatch:** - Monitoring AWS resources and applications. 9. **Elastic Load Balancing:** - Distributing traffic across EC2 instances to ensure availability. 10. **DynamoDB:** - NoSQL data storage with high performance and scalability. 11. **Pricing Models:** - Understanding AWS pricing models and cost optimization. 12. **AWS Well-Architected Framework:** - Implementing secure, efficient, high-performance architectures. ## Study materials: - The AWS Skill Builder course, Cloud Foundations, was useful, even though it is more general. 
[Link to the Skill Builder course](https://explore.skillbuilder.aws/learn/course/8287/play/93778/elementos-essenciais-do-aws-cloud-practitioner-portugues-aws-cloud-practitioner-essentials-portuguese-na) - The Alura course covered interesting topics that complemented the Skill Builder one. [Link to the Alura course](https://www.alura.com.br/formacao-aws-certified-cloud-practitioner) - The AWS Academy course has denser content and was useful in preparing for the exam. [Link to the AWS Academy courses](https://aws.amazon.com/pt/training/awsacademy/) - I used the Anki app to memorize the topics. Certificate and links: - [AWS Cloud Practitioner Certificate](https://www.credly.com/badges/c4b778eb-951e-4959-b17f-3f0d39d98a28/public_url) - [Anki study deck](https://ankiweb.net/shared/info/376560885?cb=1693682891945) For more information or questions, I'm available in the comments. If you enjoyed this, please leave a reaction to motivate me to create more content like this. ## Social media: - GitHub: [https://github.com/Hudson3384](https://github.com/Hudson3384) - LinkedIn: [https://www.linkedin.com/in/hudson-arruda-ribeiro/](https://www.linkedin.com/in/hudson-arruda-ribeiro/) - LeetCode: [https://leetcode.com/Hudson3384/](https://leetcode.com/Hudson3384/)
hudson3384
1,731,452
INTRODUCTION TO DEVREL
The practice of developing a mutually advantageous connection between software engineers and...
0
2024-01-16T16:11:54
https://dev.to/mgorretti/introduction-to-devrel-53gg
Developer relations, or DevRel, is the practice of building a mutually beneficial relationship between software engineers and organizations. Put another way, it's a set of techniques and approaches that help businesses collaborate better with software engineers. What developer relations teams do depends on the needs of their respective organizations. Most businesses fund DevRel with one or more of these goals: - Adoption: the organization wants more developers using the product. - Product development: the company depends on a developer community to build its (perhaps open source) technologies. - Product-market fit: the product's success depends on understanding the demands and preferences of developers. - Developer enablement: giving developers the knowledge, resources, and infrastructure they need to use a product once their organization has approved it. - Developer perception: the company believes that the way developers currently see its product could hinder its success. - Hiring: the company wants to strengthen its employer brand to attract more of the developers it needs to hire. There are four complementary DevRel domains: - Developer marketing: identifying a product's target developer base and ensuring they have the knowledge and resources necessary to make an informed choice. - Developer enablement: equipping developers with all the tools they need to succeed with the product. - Developer advocacy: acting as an intermediary between developers and the organization and as a champion for developers. - Developer community: building and maintaining an ongoing space where developers can work toward a shared goal around your product or company.
mgorretti
1,731,661
Showcase on LinkedIn
Would you like to showcase your favorite piece of work on LinkedIn? Have a look at the below video...
0
2024-01-16T19:49:51
https://dev.to/tanmaybanerjee/showcase-on-linkedin-21k2
linkedin, showcase, skills
Would you like to showcase your favorite piece of work on LinkedIn? Have a look at the video below and show your support for more content like this! [Watch the video](https://www.youtube.com/watch?v=MQiypu7kAPE)
tanmaybanerjee
1,731,688
Your Best Guide to Becoming a Developer Advocate
Developer Advocate, Developer Success, Developer Experience, Developer Evangelist, Developer...
0
2024-01-16T22:04:24
https://dev.to/arshadayvid/your-best-guide-to-becoming-a-developer-advocate-1ij3
devrel, discuss, beginners, tutorial
Developer Advocate, Developer Success, Developer Experience, Developer Evangelist, Developer Community Manager, Developer Programs Engineer, etc., are relatively new job titles that have been flying around within the tech ecosystem for some time now. What exactly do these job titles mean? Do they all mean the same thing? Well, you'll find out in this tutorial. I'll walk you through what Developer Relations (DevRel) means, what DevRel professionals do, and a few resources that can help you become a Developer Relations professional. ## What is DevRel? DevRel, an acronym for Developer Relations, involves building and maintaining relationships between a company and its users (developers). DevRel is mostly used by companies whose primary users or customers are developers, and DevRel professionals ensure that the developers (users) have a smooth experience using the company's product. *** ## Responsibilities of a DevRel professional DevRel involves a bunch of tasks that can change depending on the company's products and size. In large companies, the DevRel team might have different parts: one for making content and documentation, one for events, and another for the developer community. In small companies or startups, a tiny DevRel team does it all—goes to events, makes content and guides, manages the developer community, and more. Let's check out some of the things a DevRel team does. ### Community Management One of the responsibilities of a DevRel professional is to build and manage the developer community. A DevRel professional is responsible for interacting with developers, gathering feedback, answering questions, and organizing events or workshops to explain new features of the company's product and how to use it efficiently. A DevRel professional may also serve as the community moderator, sharing major updates or releases with the developer community and promoting open communication and feedback about the product. This helps the company make informed decisions. 
### Organizing events and interactive sessions The primary role of a DevRel professional is to ensure a seamless experience for developers using the product. This involves creating quick start guides, app demos, and real-world use-cases that demonstrate how to effectively utilize the company's product, thereby enhancing the overall developer experience. DevRels also play a key role in organizing events, such as meetups, webinars, and hackathons. Through these events, they engage with developers, answer questions about their products, observe the problems people are solving, and put faces to the names within their community channels. ### Content Creation DevRel professionals often take on the role of Content Creators for the company. As they directly engage with the developer community and collect feedback, they become valuable sources of insights. They can share this feedback with the team, collaborating to determine the most effective content strategy. DevRel professionals may create content in video or blog post formats to address frequently asked questions within the community or demonstrate how to perform various tasks related to the company's products. ### Attending and giving talks at tech conferences While DevRels are responsible for organizing events and interacting with community members, their first task is to attract users (developers) to use their products. One effective way to achieve this is by attending tech conferences within their niche. At these conferences, they evangelize their products, highlighting the problems they solve and providing insights on how developers can use them. This significantly contributes to increasing product visibility and acquiring more users. During these conferences, DevRels have the opportunity to engage with founders and experienced developers. They can exchange ideas on optimizing product usage and gather valuable feedback that benefits the company. 
### Writing technical and API documentation One of the key roles of DevRel professionals is to simplify technical concepts. They work closely with the Engineering team, gaining a thorough understanding of how the products work and the best practices for their usage. Then, they are tasked with creating clear and engaging documentation about APIs and various software tools based on the company's needs. DevRel professionals also create product demos and how-to guides, enabling existing and potential users to visualize the product's functionality. *** ## Resources that can help you become a DevRel professional There are numerous resources available to help you become a DevRel professional. However, I'll share a few with you: - [DevRel Resources](https://devrelresourc.es/) It contains numerous resources that can help you start and build your skills as a DevRel professional. - [Awesome DevRel](https://github.com/devrelcollective/awesome-devrel) It contains a list of tools and tips used by DevRel professionals to carry out their daily activities. - [DevRel Careers](https://devrelcareers.com/) It is a job board that features daily DevRel opportunities, connecting employers with professionals. - [DX Mentorship DevRel Resources](https://github.com/kenny-io/DevRel-Resources) This is a GitHub repository containing articles, tools, videos, and events that can help you kickstart your career in Developer Relations and grow as a professional. - [DX Mentorship Program (Free)](https://dxmentorship.com/) This is a 3-month free mentorship program designed to help you kickstart your Developer Advocacy career. It involves interactive sessions with experts, hands-on practical tests, and networking opportunities with fellow learners. > _🚨 PS: This article is my second task in the mentorship program. 
I'm looking forward to learning a lot within the next few months._ *** ## Wrap Up Developer Relations encompasses a variety of activities that vary depending on the company's products and size, which is why we have different names for it. If you're a Developer Advocate, feel free to share the activities you perform at your company and any helpful resources for beginners in the comment section. Thank you for reading!
arshadayvid
1,731,706
AdQure Copywriting Services (ACS)
AdQure offers marketing solutions that empower service providers and small- to medium-sized business...
0
2024-01-16T21:03:07
https://dev.to/adqure/adqure-copywriting-services-acs-d2b
webdev, marketing, advertising, seo
AdQure offers marketing solutions that empower service providers and small- to medium-sized business owners to establish themselves as leaders in their respective industries and to grow their internet presence.
adqure
1,731,828
9 ways to improve how you ship software
10x developers are real, and there’s more than one way to be a 10x developer. Perhaps the most...
0
2024-01-16T23:58:12
https://www.flightcontrol.dev/blog/9-ways-to-improve-how-you-ship-software
webdev, devops, programming, engineering
10x developers are real, and there’s more than one way to be a 10x developer. Perhaps the most approachable is to make 10 other people 10% more productive. In this article, I cover things to improve your individual and team productivity. This is valuable to your company, so it will likely help level up your career. First, we’ll cover 7 principles and then 9 tactics. ## 7 principles for shipping software ### 1. Speed Let’s face it. As developers, we are speed junkies. If we find a tool that’s 10 ms faster, we’re inclined to immediately rewrite our entire application. But when it comes to how often we deploy our code, we tend to be much more cautious. We’d rather deploy tomorrow or next week, so we don’t have to stay late today to fix whatever problems might happen. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/h1ni0flrjnj0h11veq29.png) Like Charity says, it is a deep-seated biological impulse to slow down when we feel cautious. For humans, slow is safety. But software physics is different. It's more like riding a bicycle. The slower you go, the more dangerously you wobble. #### Speed is safety. Small is speed. When it comes to deploys, **speed is safety, and small is speed**. The speed of shipping has massive compounding effects. If your deploy takes hours, you’re going to want to make damn sure there are no bugs, because if you find a critical bug right after a deploy goes live, then you have to wait another couple hours for your fix to go live. We must aim for speed in every part of the software lifecycle. - Fast feedback loops - Fast builds - Fast tests - Fast deploys - Fast fixes - Fast rollbacks ### 2. Reliability The second principle is reliability. Our systems must be dependable. Flaky CI/CD issues have to be one of the worst possible things in the universe. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/anu918gftlcr5rqnp7e6.png) We must avoid flakiness at all costs. ### 3. Reproducibility In order to build and maintain complex systems, we need to know that if we do X, we get Y every time. If one time you do X, you get Y, but another time you get Z, you will lose confidence in your systems. Basically, we want to use scripts and code to control everything we do instead of manual clicking, manual commands, etc. ### 4. Calm on-call rotations Somewhat self-explanatory: no one wants to dread on-call rotations. If they are hectic, it will degrade morale and cause all sorts of issues. So it’s something we should explicitly prioritize. It can serve as a backstop of sorts to make sure what we’re doing is truly working well. ### 5. Easy to debug Everyone causes bugs. Even seniors. Even when you have tests. Even when you have 100% test coverage. You will ship bugs to production. So you must be prepared to fix production bugs. This is an area where speed is important. And as part of that, it must be easy to debug. If it’s hard to debug and slow to deploy fixes, it’s going to slow you down, because you are going to add lengthy QA processes, and before you know it, you’ll be deploying code only once every few weeks. But even with lots of QA, you’ll still have production bugs. Production is the only tried-and-true way to find all the bugs. Our systems must be easy to debug. ### 6. Self-serve infrastructure and deployments In the old way of doing infrastructure and code deployments, you had two separate teams: devs and ops. If devs needed a new database, they filed a ticket with ops. If they needed a new service, they filed a ticket with ops. Need to deploy your code? File a ticket with ops. Naturally, this has several problems: - Devs are often blocked, waiting on ops to get to some ticket. - Devs aren’t monitoring infrastructure, and likely won’t get feedback on whether their code is fast or slow, efficient or inefficient. - The whole organization is slowed down. We, as an industry, figured out it’s much better for devs to deploy and run their own code. 
This is more efficient, with fewer blockages and less communication overhead, and devs get more feedback from the real world on how their code is performing. ### 7. Ship on Fridays ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/hc0493nwokwt1b7rwae7.jpg) Healthy engineering organizations deploy multiple times per day, including on Friday! If you are afraid to deploy on Friday, it’s likely from one or more of the following: - You don’t have reliable deploys. - You have slow deploys. - You don’t have good monitoring and alerting. - Your app is hard to debug. - You don’t have tests. Those are all things you must fix. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/zipetdewfweobj9ur22i.png) ## 9 tactics to improve how you ship software With the above principles in place, let’s look at 9 specific tactics to implement them in practice. ### 1. Decouple deploys from releases > Deploying software is a terrible, horrible, no good, very bad way to go about the process of changing user-facing code. \ > —[Charity Majors](https://www.honeycomb.io/blog/deploys-wrong-way-change-user-experience) Fundamentally, there are two possible actions for changing code in production: deploys and releases. *Releasing* refers to the process of changing the user experience in a meaningful way. *Deploying* refers to the process of building, testing, and rolling out changes to your production software. The traditional process is that you change the code on a branch, and when it’s ready, you merge and deploy it. Once it’s deployed, users will see the new code. But today’s modern engineering organizations use feature flags. What is a feature flag? ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/dtogzpmcses9gnefpdhk.png) It allows you to write new code alongside the old code, and then, based on a runtime feature flag, your app decides whether to run the old or new code. This is huge! 
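As a minimal sketch of the idea (the flag store, flag names, and helper functions here are hypothetical, not any specific vendor's API), a runtime feature-flag check with a percentage-based rollout can look like this:

```javascript
// Hypothetical flag store: in practice flags live somewhere your app can
// read at runtime (a config service, database, vendor SDK, etc.).
const flags = {
  newCheckout: { enabled: true, rolloutPercent: 20 }, // progressive rollout: 20% of users
  darkMode: { enabled: false },                       // deployed but not released
};

// Deterministically map a user ID to a bucket in [0, 100), so the same
// user keeps getting the same answer as the rollout percentage grows.
function bucketFor(userId) {
  let hash = 0;
  for (const ch of String(userId)) {
    hash = (hash * 31 + ch.charCodeAt(0)) >>> 0;
  }
  return hash % 100;
}

function isEnabled(flagName, userId) {
  const flag = flags[flagName];
  if (!flag || !flag.enabled) return false;
  if (flag.rolloutPercent === undefined) return true; // fully released
  return bucketFor(userId) < flag.rolloutPercent;
}

// Old and new code paths live side by side; the flag picks one at runtime.
function checkout(userId) {
  return isEnabled('newCheckout', userId) ? 'new checkout flow' : 'old checkout flow';
}
```

Releasing is then just flipping `enabled` or raising `rolloutPercent` in the flag store, with no deploy needed, and rolling back is equally instant.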
Remember how speed is one of the key principles? Separating deploys from releases speeds up both. It speeds up deploys because engineers can deploy their code as soon as it’s ready without waiting for product management to be ready to release it. And it speeds up releases because now you can release a feature instantly. And roll back a feature instantly. Without needing a deploy. Product managers can do releases on their own schedule. Furthermore, it unlocks advanced methods like progressive rollout, where you enable the feature flag for a percentage of your users. You could start out with 5% of users. If everything is working, then increase to 10%, 20%, etc. This is the best way to roll out changes at scale because if there are problems, only a subset of your users will experience them instead of all of them. And feature flags are superior to long-lived feature branches for development flow. You have fewer merge conflicts, less stale code, more team momentum, better team morale, and more flexibility for testing things with real users. I encourage you to introduce this idea at work. And next time someone has a long-lived feature branch instead of a feature flag, use this: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/7ido3r0pc63pt62oxape.jpg) ### 2. Continuous deployment (CD) Deployments are the heartbeat of your engineering organization. And like a human heart, you want a steady rhythm of simple heartbeats. And with this steady rhythm of deployments, you get momentum. And momentum is life for organizations. Along with momentum, you get efficiency, being able to deliver code to production as quickly as possible after it’s written. You want to deploy as often as possible, like multiple times per day. Every commit should be deployed. Small, understandable changes mean you have debuggable and understandable deploys. The engineer who writes the code should merge their own PR and then monitor production for any unexpected issues. 
Feature flags are basically a prerequisite for this because they allow your team to ship small changes continuously instead of having long-lived feature branches. ### 3. Set up alerts Make sure you set up alert notifications for the following anomalies: - Failed builds - Failed deploys - Service downtime - Unhealthy servers - Unexpected errors - Unusual amount of traffic - Statuses of third-party services (many have a status page that you can subscribe to in Slack) The benefit is relatively obvious, but it does take at least a bit of effort to set them up, so this is something many teams can do as a quick win. And if you already have alerts, you should audit them: - Are they useful? - Are you getting too many false alerts so that you ignore them? - Are there things you should add new alerts for? ### 4. Optimize deploy speed — 15 minutes or bust Your target for a happy team is 15-minute deploys or less. Of course, we want them to be as fast as possible, but the neighborhood of 15 minutes is the baseline target. Significantly longer than this is significantly painful. But significantly faster than this could take more work than it’s worth, depending on your circumstances. There are a number of ways to improve your speed of deploys. #### a. Track and measure your current deploy times You want to understand your starting point and which parts of the process are taking a long time. That’s where you want to focus initially. #### b. Speed up dependency installation Dependency installation is often time-consuming, so it can be a good first step. **Switch to a faster package manager.** If using JavaScript, switch to pnpm. It’s a drop-in replacement for npm and yarn that has significant performance and caching improvements. **Cache the entire install step.** Use your build system to cache this entire step, so this step doesn’t even run if your dependencies haven’t changed. 
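As a rough sketch of that layering (the base image and build script are illustrative assumptions, not from the article), the trick is to copy only the dependency manifests before running the install, so the install layer is reused from cache whenever the manifests haven't changed:

```dockerfile
FROM node:20-slim
WORKDIR /app

# Copy only the manifests first: this layer, and the install below,
# only rebuild when the lockfile or package.json actually change.
COPY package.json pnpm-lock.yaml ./
RUN corepack enable && pnpm install --frozen-lockfile

# Application code changes on every commit, so copy it last.
COPY . .
RUN pnpm run build
```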
Example with Docker: ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/fansmfpbo7wmc85sp6u3.png) Example with [Nixpacks](https://nixpacks.com): ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/33kzuvz53it4uw7x1rnx.png) #### c. Improve your build speed The main way to improve your build speed is to switch to a faster bundler. For example, if you are using create-react-app, which uses webpack, you can **switch to Vite**, which is way faster. **If using Next.js**, upgrade to the latest and make sure you don’t have a `.babelrc` file; this will use SWC instead of babel, which is a lot faster. If you like living on the edge, you can try Turbopack by passing the `--turbo` flag to the `next` commands, but it’s still experimental and doesn’t work for all setups. #### d. Set up build caching If you are using Docker, there are several ways to improve caching: - Optimize your build layers. - Use multi-stage builds. - Set up remote caching. When building locally, Docker caches by default and is fast, but in CI you usually don't have a permanent machine. You have to manually set up remote caching. Refer to [the Docker documentation](https://docs.docker.com/build/cache/) for more information. #### e. Globally cache everything If you want to really live with your hair on fire, in a good or maybe bad way, there is another style of caching that can give you big wins, as long as you have a JavaScript project. This caching style works by having a single shared cache between your entire team **and** also CI. This means if you build a commit locally and then push that commit to CI, the CI build will download the previous cached build from your local machine and be done in mere seconds. Or if someone else on your team already built a commit, and then you run the build locally, it will again download the cache and be done in mere seconds instead of building from scratch again. 
You might be thinking, “ok Brandon that’s cool and all, but usually we’re changing code so it won’t be cached”. But the trick is that this caching works on a **package** level, so assuming you have a monorepo, you are likely only working in one package at a time. So this enables you to iterate faster and not have to build things you aren’t working on. There are two tools that do this: [NX](https://nx.dev/) and [Turborepo](https://turbo.build/repo). There is a gotcha. You must be careful with how you handle environment variables. I’ve seen numerous people have production issues because a dev ran the production build locally but with dev or staging environment variables. And then CI uses the cached build with incorrect environment variables. Make sure you set up different commands for dev, staging, and production. ### 5. Replace staging with preview environments Preview environments are temporary environments, tied to the lifecycle of a pull request. When you open a pull request, infrastructure can be automatically provisioned for that PR. This enables stakeholders to see the changes in a production-like environment before it’s merged. And then, when the pull request is merged or closed, its environment will be automatically cleaned up. Preview environments are a better replacement for long-lived staging environments. Because staging is running all the time, whether it needs to or not. And you have to merge a PR before anyone can verify the change. Preview environments are a companion to feature flags. Larger changes should use feature flags and will have many PRs for that feature. But small changes are easier to review in a preview environment instead of going through the hassle of managing a feature flag for them. ### 6. Infrastructure-as-code Hopefully, you have automated deployments, but you might not have automated infrastructure. Without infrastructure-as-code (IaC), you typically define your infrastructure config by clicking through a dashboard. 
Bringing automation to your infrastructure config in the form of infrastructure-as-code has a huge number of benefits, including: - Repeatability or reproducibility - Consistency and standardization - Predictability - Version control - Collaboration and review IaC can take many forms. If using AWS, it can be low-level like CloudFormation or Terraform. Or it can be at a higher level of abstraction, designed for product developers. Examples are Flightcontrol’s `flightcontrol.json` or Render’s `render.yaml`. ### 7. Platform Engineering Platform engineering might be a new term for you, so let me give you the back story that got us to this point. ![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/3svukz14kj3rgzlkuibg.png) Back in the old days, you had to rack your own servers in a data center. The abstraction level of infrastructure was extremely low. Then came AWS, which introduced the cloud and raised the abstraction level. But it was still too low for the average developer. This led to the creation of Heroku, to the glee of developers all around the world. But that excitement was not to last because ops were not happy. It turns out abstractions like Heroku do not scale for large companies. So ops took over and created a new abstraction over AWS: infrastructure-as-code and Terraform. This works well, but it’s still too tedious for developers. Additionally, company leadership wants to increase operational efficiency, and one way to do that is to provide devs with self-serve infrastructure, so we, as an industry, started creating internal platforms that provide a better developer experience. Through this, we found a way for both ops and devs to be mostly happy. This has come to be known as platform engineering. Now many larger companies are building an internal platform, sometimes known as an internal developer platform. This platform can look more like an internal Heroku or more like plain Terraform. 
Either way, the three key concepts are: 1. Deploys to your own AWS/GCP account 2. Self-serve infrastructure with good developer experience 3. Ops can dictate and customize the underlying primitives. There are [many tools](https://platformengineering.org/platform-tooling) in the space, one of which is [Flightcontrol](https://www.flightcontrol.dev/). ### 8. Work as a team Back in the beginning of Flightcontrol, it was only my cofounder and me. We had a clear separation of duties aligned with our skillsets. He did all the backend/infra stuff, and I did all the web app and frontend things. It worked great! But then we started hiring more engineers. We initially kept the same work model where each person was off working on their own project or feature. It started to feel like something wasn’t right. We weren’t benefiting from collaboration. Since each person was working mostly alone, we weren’t benefiting from others’ expertise on everything from architecture design to code quality. We would review each other’s code, but since no one else had the full context of the project, they were only able to give superficial reviews. LGTM! Then, somewhere during this time, I ran across a [few](https://swizec.com/blog/reader-question-so-about-that-perfect-burndown-chart/) [blog](https://swizec.com/blog/reader-question-what-do-collaborative-teams-look-like/) [posts](https://swizec.com/blog/scaling-teams-is-a-technical-challenge/) by Swizec Teller, talking about how most teams don’t actually work as a team, and if you actually work as a team, you can be way more effective. > When we first started working as a team instead of a group of soloists, every story felt like a million gerbils running through my brain. > We moved faster. We got more done. We kept everyone in the loop. Shared more ideas. Found more edge cases. Created fewer bugs. And heard more opinions. > > —Swizec I read this, and my heart was saying, YES YES, but my mind was saying, I don’t understand! 
As a team, we decided to fundamentally shift how we worked. We'd no longer have multiple projects in progress at the same time. Instead, we'd have one project, and the entire team would work on it together from start to finish.

Initially, it was scary, awkward, and felt like it wouldn't work smoothly. And it was rough for a couple months until we got things ironed out. But now it's many months later, and we can't imagine working any other way. We can emphatically say that teams should work as a team. Everything has improved. We ship better features, have fewer bugs, and cover more edge cases. Job satisfaction and team morale increased, because now everyone gets true collaboration, not a "hi" as you pass them in the Slack channel.

How does this work?

- Small team (2-6 people; 4 is ideal)
- The entire team works on one project at a time, together
- Kickoff meeting (deep discussions on how to build)
- Break the project into small tasks (usually in kickoff)
- Work on subtasks in parallel
- Small PRs, at least one per day
- Quick reviews, i.e. don't be a blocker
- Unblock yourself; merge your own safe PR if no one is available to review and you are blocked

### 9. Trunk-based development

The next level of team productivity is to get rid of pull requests and commit straight to the main branch. I'm serious. It's called trunk-based development.

If this is a new concept to you, I know you're thinking this is impossible; there is no way this can work. Even if you've heard of this concept before, you are probably thinking, "I have no idea how this could work."

![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/xkaz5z7nqfyv0xl9s9z4.png)

Think of it like a cafeteria. Your tray is the main branch. When the burger is done, it goes on the tray. When the fries are ready, they go on the tray. When the milkshake is poured, it goes on the tray. Once the tray is full, you're done. That's how trunk-based development works.
Every feature goes straight to main when it's ready. Whether subtasks or an entire project, it doesn't matter, because they're all working, independent, and deployable.

If committing straight to main is too radical for you, then do the next best thing: short-lived branches. And by short-lived branches, I mean **short**-lived branches. You want to stay as close to pure trunk-based development as possible. PRs shouldn't last more than about a day. For a project, you should have many short PRs that are quick for others to review and easy to merge. That's how you get momentum.

## Conclusion

We've covered seven principles and ten tactics for improving how you ship software. What do you think of these? Have things to add or take away?
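As a sketch of what one short-lived branch looks like in practice (the branch and file names here are made up for illustration), the entire lifecycle fits in a handful of git commands:

```shell
# Set up a throwaway repo for the demo.
git init -q demo-repo
cd demo-repo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit --allow-empty -q -m "initial commit"

# Branch, make one small deployable change, merge, delete: all within a day.
git checkout -q -b add-healthcheck
echo "ok" > healthcheck.txt
git add healthcheck.txt
git commit -q -m "add healthcheck endpoint stub"
git checkout -q -
git merge -q --no-ff -m "merge add-healthcheck" add-healthcheck
git branch -d add-healthcheck
```

The key property is that the branch never lives long enough to drift from main, so the merge is trivial and the review stays small.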
flybayer
1,731,868
REACT COMPONENTS
INTRODUCTION In React, components are like building blocks of a webpage. They are small,...
0
2024-01-17T01:50:00
https://dev.to/pearlodi/react-components-3oa2
beginners, react, webdev, programming
## INTRODUCTION

In React, components are like building blocks of a webpage. They are small, reusable pieces of code that handle specific tasks. Each component is responsible for a particular part of the user interface, making it easier to manage and organize the overall structure of a web application.

For example, you might create a "Button" component that represents the code for a button. This component defines how the button should look and what should happen when someone clicks on it. Once you've made this "Button" component, you can use it in different parts of your app, maybe for submitting forms, navigating to another page, or any other action that involves a button.

**TYPES OF COMPONENTS**

When it comes to creating components, React gives you two primary types: functional components and class components.

**Class Components**

Class components in React are a bit like the older, more traditional way of creating components. They follow the syntax of JavaScript classes, which might look familiar if you've worked with object-oriented programming before. They have a more extensive feature set, including the ability to manage state and use lifecycle methods.

Here is how you would create a simple class component:

```
// src/components/ClassComponent.js
import React, { Component } from 'react';

class Button extends Component {
  render() {
    return (
      <div>
        <button>Click me</button>
      </div>
    );
  }
}

export default Button;
```

In class components, the `render` method is where you define what the component should render (display). It's a required method in class components and is responsible for returning the JSX that represents the component's UI. This method gets called automatically whenever the component needs to re-render, either because its props or state have changed, or because its parent component has re-rendered.

**Functional Components**

Functional components are like specialized, efficient JavaScript functions in React.
They receive information called "props" (instructions), process them, and determine what should be displayed on the screen. They are known for their simplicity and readability. With the introduction of React hooks, functional components gained more capabilities, allowing them to manage internal data and respond to events traditionally handled by class components.

Here is how you would create a simple functional component:

```
import React from 'react';

// Functional component example
function Button() {
  return <button>Click me</button>;
}

export default Button;
```

Here, the return statement inside the function body specifies what should be rendered. So, whenever `Button` is used, it will render the `<button>Click me</button>` element. Functional components in React are essentially "renderless" functions in the sense that they directly return JSX, unlike class components where you define a separate `render` method to specify the JSX to render.

In both examples, we've created a component called `Button` that returns a button element with the call to action "Click me". Now let's look into how components are used in other parts of your application.

**Using a Component:**

Once you've created a component, you can import and then use it in other parts of your application. Here's an example of how you might use the `Button` component in another file:

```
import React from 'react';
import Button from './Button'; // Assuming Button component is in the same directory

function App() {
  return (
    <div>
      <h2>Welcome to My App</h2>
      <Button /> {/* Using the Button component */}
    </div>
  );
}

export default App;
```

In the `App` component, we've imported the `Button` component and used it inside the `<div>` section. It's important to note that when we use `<Button />`, we're actually using a custom tag that represents the `Button` component we've created.
This custom tag looks just like any other HTML tag, but instead of being a built-in element like `<div>` or `<p>`, it's a component that we've defined ourselves.

Now imagine if you had to create a button from scratch every time you needed one in your app: it would quickly become tedious and clutter up your code. Instead, by creating a reusable `Button` component, you can simplify your code and make it more readable.

Think of it like this: imagine you have a `Card` component that contains lots of information, maybe an image, a title, and some text. Instead of writing out all that code every time you need a card, you can create a `Card` component once and then reuse it wherever you need it in your app.

Whether it's a functional or class component, they both share a similar structure:

- They import React from the 'react' package.
- They define a component using either a function or a class.
- They return JSX (JavaScript XML) that describes what should be rendered on the screen.
- Finally, they export the component so that it can be used in other parts of the application.

**Key concepts of components**

**State:** State serves as the internal data storage for a component, allowing it to manage and update its information over time. Changes to state trigger re-renders, ensuring that the UI stays in sync with the component's data. State can be managed using the `useState` hook in functional components or the `this.state` mechanism in class components.

**Props:** Props, on the other hand, act as inputs to a component, providing a way for the parent component to pass information down to its children. Think of it like this: what if, in the `Button` component, you wanted to use a different text than "Click me" each time the component is used? That is where props come in: they allow us to create dynamic and reusable components that adapt to different contexts within our application.
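To make that concrete, here is a minimal sketch of the `Button` component rewritten to accept its text via props (the prop name `label` is my own choice for illustration, not a React requirement):

```
import React from 'react';

// Button now receives its text through props instead of hard-coding it.
function Button(props) {
  return <button>{props.label}</button>;
}

// The same component renders different buttons depending on the prop passed in.
function App() {
  return (
    <div>
      <Button label="Submit form" />
      <Button label="Go to next page" />
    </div>
  );
}

export default App;
```

One component definition, many different buttons: that reuse is exactly what props enable.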
**Lifecycle Methods:** Lifecycle methods offer developers opportunities to hook into different stages of a component's lifecycle. From initialization to unmounting, these methods, such as `componentDidMount` and `componentWillUnmount`, allow for actions like fetching data, subscribing to events, or cleaning up resources. They empower developers to interact with the React ecosystem effectively and manage component behavior with precision.

**State**, **props**, and **lifecycle methods** are all fundamental concepts that drive the behavior of React components. While we've introduced them briefly here, we'll delve deeper into each topic in our upcoming articles.

*****************

**WHY USE COMPONENTS**

A. **Modularity and Reusability:**

- **Breakdown of UI:** Components allow breaking down the user interface into smaller, manageable pieces. Each component represents a specific part of the UI, making it easier to understand and maintain.
- **Reusability:** Once created, components can be reused throughout the application or even in different projects. This reusability reduces redundancy in code and promotes a more efficient development process.

B. **Maintenance and Scalability:**

- **Easy to Maintain:** Components promote code organization, making it easier to locate and fix issues. Maintenance becomes more straightforward as changes can be isolated to specific components without affecting the entire application.
- **Scalability:** As your application grows, the modular nature of components allows you to scale the development process. New features or sections of the application can be added by creating and integrating new components.

C. **Readability and Understandability:**

- **Code Readability:** Components contribute to clean and readable code. By encapsulating specific functionality, components make the codebase more comprehensible.
- **Ease of Onboarding:** For new developers joining a project, understanding and contributing to the codebase is more straightforward when components are well-structured and organized.

**CONCLUSION**

In this article, we've laid the groundwork for understanding React components, the fundamental building blocks of React applications. We've covered the basics, from defining what components are to differentiating between functional and class components. Understanding these concepts is crucial as they form the foundation of React development.

While we've touched on key concepts like props, state, and lifecycle methods, it's important to note that we've only scratched the surface. In upcoming articles, we'll delve deeper into these topics, exploring how props and state drive component behavior and how lifecycle methods allow components to interact with the React ecosystem. So, stay tuned for more in-depth discussions on these topics.

By building upon the knowledge gained here and exploring these concepts further, you'll be well-equipped to create dynamic and interactive user interfaces with React. Happy coding, and see you in the next article! ❤️
pearlodi
1,731,982
Learn PHP
Although php language seems to be dying, it is still used by many companies and is widely popular...
0
2024-01-17T04:30:17
https://dev.to/sagarkattel/learn-php-2ace
Although the PHP language seems to be dying, it is still used by many companies and is widely popular in the legacy codebases of many large companies. It was the language that gave rise to the current web development field. Learning PHP can also help you a lot in your career. I guess this motivation is enough for you to start learning PHP. So let's get started, from the fundamentals.

If you have seen my previous articles, I follow a certain pattern when learning new languages or frameworks. This pattern has helped me successfully learn many languages:

1. **Give output and take input**
2. **Strings**
3. **Arrays**
4. **Classes and objects**
5. **Methods**
6. **Loops**

So without further ado, let's get started.

**1) Give Output and take Input**

Giving output in PHP is simple:

```
<?php
echo "Hello world";
// or
print_r("Hello World");
?>
```

And to take input, we use `readline`:

```
<?php
$name = readline("Enter the name\n");
echo "The name is $name";
?>
```

**2) String and Variable**

In PHP, creating a variable is as simple as writing a name starting with the $ symbol. A couple of useful built-in string functions:

```
<?php
echo strlen("Hello World");
echo str_word_count("Hello world!");
?>
```

Iterating over the characters of a string in a loop:

```
<?php
$name = "Sagarkattel";
for ($i = 0; $i < strlen($name); $i++) {
    echo "Index " . $i . " = " . $name[$i] . "\n";
}
?>
```

For splitting a string in PHP, we use the built-in `explode` function:

```
<?php
$name = "Sagar kattel";
$parts = explode(" ", $name);
echo $parts[0];
echo "\n";
echo $parts[1];
?>
```

**3) Arrays**

In PHP, working with arrays is as simple as in any other language. You just need to put the $ symbol ahead of every variable or array name:

`$names = ["Sagar", "Saurabh"];`

**4) Class and Object**

Creating classes in PHP is dead simple; it is much the same as in other languages. The only thing you need to take care of is that instead of `this.name` you write `$this->name`.
```
<?php
class Fruit {
  public $name;
  public $color;

  function __construct($name, $color) {
    $this->name = $name;
    $this->color = $color;
  }

  function get_name() {
    return $this->name;
  }

  function get_color() {
    return $this->color;
  }
}

$apple = new Fruit("Apple", "red");
echo $apple->get_name();
echo "<br>";
echo $apple->get_color();
?>
```

```
<?php
class Person {
  public $name;
  public $age;

  function __construct($name, $age) {
    $this->name = $name;
    $this->age = $age;
  }

  function get_name() {
    return $this->name;
  }

  function get_age() {
    return $this->age;
  }
}

$person = new Person("Sagar Kattel", 20);
echo $person->get_name();
?>
```

**5. Method or Function**

You can create a method or function in PHP by writing the keyword `function` ahead of the function name:

```
<?php
echo "Hello World";

function speak_name($name) {
    echo "The name of our hero is " . $name;
}

function multiply($number) {
    return $number * 2;
}

$name = "Sagar Kattel";
speak_name($name);

$result = multiply(3);
echo "The multiply result is " . $result;
?>
```

**6. Loops**

You can create loops in PHP as in any other programming language:

```
<?php
$i = 0;
while ($i < 10) {
    print_r($i);
    $i++;
}
?>
```

**These are the basics that you must know before diving deep into the PHP programming language. Hope you continue your learning journey. Sayonara <3**
sagarkattel
1,732,123
Creating a Custom Abstract User Model in Django
Django provides a default user model that is suitable for most projects. However, as the project...
0
2024-01-17T05:47:11
https://dev.to/arindam-sahoo/creating-a-custom-abstract-user-model-in-django-p24
django, python, user, model
Django provides a default user model that is suitable for most projects. However, as a project scales, a customized approach is often required. This guide will demonstrate how to create a custom abstract user model in Django, providing greater flexibility and customization options. This comprehensive tutorial will take you through each step, ensuring a solid understanding of the process.

## Step 1: Setting Up a New Django App

To begin with the customization process, we need to create a fresh Django application that will act as the framework for our custom user model. Once this is accomplished, we can proceed with the necessary modifications to tailor the user model to our specific needs.

```bash
python manage.py startapp myapp
```

Replace "myapp" with a name that aligns with your project.

## Step 2: Defining the Custom User Model

To create a custom user model, we modify the `models.py` file of the newly created app, extending Django's `AbstractBaseUser` and `PermissionsMixin` classes. By doing so, we can define the fields and attributes that are relevant to our application's requirements. To handle user creation, we'll also create a custom manager that will allow us to customize the process and ensure that it's consistent with our user model. With these modifications, we can create a custom user model that is tailored to our application's specific needs.
```python
# myapp/models.py
from django.contrib.auth.models import AbstractBaseUser, BaseUserManager, PermissionsMixin
from django.db import models

class CustomUserManager(BaseUserManager):
    def create_user(self, email, password=None, **extra_fields):
        if not email:
            raise ValueError('The Email field must be set')
        email = self.normalize_email(email)
        user = self.model(email=email, **extra_fields)
        user.set_password(password)
        user.save(using=self._db)
        return user

    def create_superuser(self, email, password=None, **extra_fields):
        extra_fields.setdefault('is_staff', True)
        extra_fields.setdefault('is_superuser', True)
        return self.create_user(email, password, **extra_fields)

class CustomUser(AbstractBaseUser, PermissionsMixin):
    email = models.EmailField(unique=True)
    first_name = models.CharField(max_length=30)
    last_name = models.CharField(max_length=30)
    is_active = models.BooleanField(default=True)
    is_staff = models.BooleanField(default=False)

    objects = CustomUserManager()

    USERNAME_FIELD = 'email'
    REQUIRED_FIELDS = ['first_name', 'last_name']

    def __str__(self):
        return self.email
```

In this particular instance, we have incorporated a personalized manager named `CustomUserManager` and formulated a distinctive model called `CustomUser`. This model comprises multiple fields, such as email, first name, last name, and others, that can be adjusted as per the specific needs of your project.

## Step 3: Updating Django Settings

After creating a custom user model, the next step is to instruct Django to use it. This involves going to your project's `settings.py` file and modifying the `AUTH_USER_MODEL` setting accordingly. Once updated, Django will recognize your custom user model as the default user model for your project.

```python
# settings.py
AUTH_USER_MODEL = 'myapp.CustomUser'
```

Remember to replace `'myapp'` with the actual name of your app.
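As a quick sanity check once migrations are applied, the custom manager can be exercised from the Django shell. This is a sketch, not part of the tutorial's required steps; the example emails and names are hypothetical, and it assumes the `CustomUser` model defined above:

```python
# Run inside `python manage.py shell` after migrating.
from django.contrib.auth import get_user_model

User = get_user_model()  # resolves to myapp.CustomUser via AUTH_USER_MODEL

user = User.objects.create_user(
    email="jane@example.com",
    password="a-strong-password",
    first_name="Jane",
    last_name="Doe",
)
admin = User.objects.create_superuser(
    email="admin@example.com",
    password="another-strong-password",
    first_name="Site",
    last_name="Admin",
)

print(user.is_staff)    # False: regular users are not staff by default
print(admin.is_staff)   # True: set by create_superuser
```

Using `get_user_model()` rather than importing `CustomUser` directly is the conventional pattern, since it keeps the rest of your code working even if `AUTH_USER_MODEL` changes later.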
## Step 4: Creating and Applying Migrations

To ensure your changes are reflected in the database, generate and apply migrations:

```bash
python manage.py makemigrations
python manage.py migrate
```

With the migrations applied, your personalized user model is fully integrated into the Django project and ready to be used for authentication and authorization.

## Flexibility in Authentication

Django is a high-level Python web framework that offers a lot of flexibility to developers. One of its most powerful features is the ability to create a custom abstract user model, enabling developers to tailor authentication to their application's unique needs.

By creating a custom abstract user model, you can define your own set of user fields, including custom fields, and set your own authentication rules. This ensures that your authentication system aligns precisely with your project's requirements, providing a more secure and personalized user experience.

Creating a custom abstract user model is a straightforward process that involves only a few steps. By following them, you can seamlessly integrate your personalized user model into your application, providing a foundation for a scalable and customizable user management system.

Overall, the ability to create a custom abstract user model in Django is a powerful tool that gives developers the flexibility to build secure, personalized, and scalable web applications.
arindam-sahoo
1,732,183
Boostaro™ Supplement | OFFICIAL WEBSITE - Only $49/Bottle Today
Boostaro has emerged as a robust and natural solution within the competitive landscape of sexual...
0
2024-01-17T07:05:52
https://dev.to/boostaro/boostaro-supplement-official-website-only-49bottle-today-kpf
healthydebate, sexual, support, supplement
**[Boostaro](https://www.boost-aro.com/)** has emerged as a robust and natural solution within the competitive landscape of sexual health supplements, addressing a range of male sexual health concerns. This powerful supplement takes pride in its ability to enhance sexual drive and improve the quality of erections through a carefully crafted formula of clinically studied ingredients. Central to its efficacy is the promotion of optimal blood flow, a critical aspect often compromised in individuals dealing with erectile dysfunction. Boostaro works by fostering the production of nitric oxide, a vital compound that relaxes and opens up blood vessels, ensuring a steady and improved flow for healthier erections. Beyond addressing physiological aspects, the thoughtfully curated blend of essential nutrients in Boostaro brings significant improvements in energy levels, revitalizing vigor and offering a fresh lease on life.

The effectiveness of the Boostaro formula extends to its production process, which takes place in an FDA-approved and GMP-certified facility, demonstrating a commitment to stringent quality standards. Additionally, being a non-GMO solution minimizes the risk of unwanted side effects and prevents the development of habit-forming scenarios. The convenience of its capsule form enhances user-friendliness, making it easy to incorporate into daily routines. With an affordable pricing strategy, **[Boostaro Supplement](https://www.boost-aro.com/)** establishes itself as a frontrunner in the natural enhancement of sexual function. Positive reviews and customer feedback attest to the supplement's transformative impact on sexual performance, offering a promise of renewed confidence and vitality in the journey to better sexual health.
boostaro
1,732,188
Information Overload and Chasing the trends Can Ruin Your Life:
Information Overload and Chasing the trends Can Ruin Your Life: Zayn and Dawood, were...
0
2024-01-17T07:11:47
https://dev.to/dev_abdulhaseeb/information-overload-and-chasing-the-trends-can-ruin-your-life-jjl
webdev, beginners, programming, productivity
![Image description](https://dev-to-uploads.s3.amazonaws.com/uploads/articles/lt44vad8qnni7icchktb.png)

## Information Overload and Chasing the Trends Can Ruin Your Life

Zayn and Dawood were good friends. They went to the same school, lived in the same neighbourhood in a metropolitan city, and when the time came, they went to the same college. Both of them were amazed by tech and always enjoyed coding. They had a dream to get a six-figure job and an enjoyable life.

It was early 2023. An AI chatbot called ChatGPT had launched a few months earlier, and everybody was talking about AI. It was booming! Dawood and Zayn were pursuing a Computer Science degree at a prestigious university.

One day Elon Musk, who himself started as a programmer (and sold his first company for $300 million, and is now pioneering other industries), visited their college. Luckily, Zayn and Dawood got the chance to meet him and asked for advice. Elon told them to expand their knowledge landscape, try to be useful to others, and try to make a positive impact.

Days and months passed, until one day...

Dawood encountered alarming online predictions by YouTubers and social media gurus that programmers won't exist in the next 5-10 years. Going through a lot of biased information based on lies left Dawood doubting his career choice. He had a lot of questions, and he was really confused and tense... It was fine until he started believing the rumors without any research... Dawood kept telling Zayn that they were on the wrong track and that coding was a dead end...

Zayn, unaffected by the noise, remained focused on skill-building, set clear goals, and kept working hard with a clear direction. Zayn applied what Elon told him, and he put his effort into learning, building skills, and sharing his journey online.
Dawood succumbed to the fear, quit college, and opted for shortcut courses with zero value made by random YouTubers and gurus. He started blindly following them and believing their claims that he could get a very high-paying AI job in just 3 months...

Fast forward 5 years...

Zayn emerged as a CS graduate with an impressive portfolio, an online presence, and a $240k offer from a FAANG company. Dawood, chasing shortcuts and lacking a strategic approach, struggled in his career and couldn't even find a job. Severe stress and depression led to health issues, and he started looking for ways to escape reality. Zayn, on the other hand, got a good and comfortable life doing what he loves to do...

## Note:

Dawood and Zayn are fictional characters I used to explain my point, so you don't fall for these pitfalls... Now I want to ask you: who do you want to be? Is it Dawood or Zayn? (comment)

## Lessons Learned:

Stick to one thing, and avoid information overload no matter what. Overcome hype, and believe in yourself. Zayn succeeded because he was consistent and had perseverance. Beware of shortcuts; they may promise quick gains but often lead to long-term struggles. The tech industry's demand for skilled programmers remains high, but success requires dedication, not shortcuts.

Always remember this: "Programmers ain't going extinct. The world craves your magic. Just put in the work."

Good luck and happy coding!
dev_abdulhaseeb