Source: http://anh.cs.luc.edu/handsonPythonTutorial/ifstatements.html#grade-exercise

3.1. If Statements (Hands-on Python Tutorial for Python 3 — 3. More On Flow of Control)

3.1.1. Simple Conditions

The statements introduced in this chapter will involve tests or conditions. More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell:

    2 < 5
    3 > 7
    x = 11
    x > 10
    2 * x < x
    type(True)

You see that conditions are either True or False. These are the only possible Boolean values (named after 19th century mathematician George Boole). In Python the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests.

Note: The Boolean values True and False have no quotes around them! Just as '123' is a string and 123 without the quotes is not, 'True' is a string, not of type bool.

3.1.2. Simple if Statements

Run this example program, suitcase.py. Try it at least twice, with inputs: 30 and then 55. As you can see, you get an extra result, depending on the input. The main code is:

    weight = float(input("How many pounds does your suitcase weigh? "))
    if weight > 50:
        print("There is a $25 charge for luggage that heavy.")
    print("Thank you for your business.")

The middle two lines are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don't do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if. In this case that is the statement printing "Thank you".
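The same logic can also be wrapped in a small function, which makes it easy to check both cases without retyping input each time. (The function name luggage_charge is just for this sketch, not one of the tutorial's example files.)

```python
def luggage_charge(weight):
    """Return the extra luggage charge in dollars for a suitcase
    weighing the given number of pounds."""
    if weight > 50:
        return 25   # flat fee for heavy luggage, as in suitcase.py
    return 0        # no extra charge otherwise

print(luggage_charge(55))   # 25
print(luggage_charge(30))   # 0
```

Note that a weight of exactly 50 incurs no charge, since the test is strictly greater than.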
The general Python syntax for a simple if statement is

    if condition:
        indentedStatementBlock

If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements. Another fragment as an example:

    if balance < 0:
        transfer = -balance
        # transfer enough from the backup account:
        backupAccount = backupAccount - transfer
        balance = balance + transfer

As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps.

In the examples above the choice is between doing something (if the condition is True) or nothing (if the condition is False). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition.

3.1.3. if-else Statements

Run the example program, clothes.py. Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is:

    temperature = float(input('What is the temperature? '))
    if temperature > 70:
        print('Wear shorts.')
    else:
        print('Wear long pants.')
    print('Get some exercise outside.')

The middle four lines are an if-else statement. Again it is close to English, though you might say "otherwise" instead of "else" (but else is shorter!). There are two indented blocks: One, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if-else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false. In an if-else statement exactly one of two possible indented blocks is executed. A line is also shown dedented next, removing indentation, about getting exercise.
Since it is dedented, it is not a part of the if-else statement: Since its amount of indentation matches the if heading, it is always executed in the normal forward flow of statements, after the if-else statement (whichever block is selected). The general Python if-else syntax is

    if condition:
        indentedStatementBlockForTrueCondition
    else:
        indentedStatementBlockForFalseCondition

These statement blocks can have any number of statements, and can include almost any kind of statement. See Graduate Exercise.

3.1.4. More Conditional Expressions

All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard.

    Meaning                  Math Symbol    Python Symbols
    Less than                <              <
    Greater than             >              >
    Less than or equal       ≤              <=
    Greater than or equal    ≥              >=
    Equals                   =              ==
    Not equal                ≠              !=

There should not be a space between the two-symbol Python substitutes. Notice that the obvious choice for equals, a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests.

Warning: It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment!

Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality (!=). They do not need to be numbers! Predict the results and try each line in the Shell:

    x = 5
    x
    x == 5
    x == 6
    x
    x != 6
    x = 6
    6 == x
    6 != x
    'hi' == 'h' + 'i'
    'HI' != 'hi'
    [1, 2] != [2, 1]

An equality check does not make an assignment. Strings are case sensitive. Order matters in a list. Try in the Shell:

    'a' > 5

When the comparison does not make sense, an Exception is caused.
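The difference between equality tests (which work between any types) and order comparisons (which can fail) can also be seen in a short script; the exact wording of the error message may vary between Python versions:

```python
# Equality and inequality tests work between any types and never raise:
print('hi' == 'h' + 'i')   # True: the strings have the same characters
print([1, 2] != [2, 1])    # True: order matters in a list
print(5 == '5')            # False: an int is never equal to a string

# Order comparisons between unrelated types raise a TypeError in Python 3:
try:
    'a' > 5
except TypeError:
    print("'a' > 5 does not make sense, so an Exception is caused")
```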
[1] Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision , confirm that Python does not consider .1 + .2 to be equal to .3: Write a simple condition into the Shell to test. Here is another example: Pay with Overtime. Given a person’s work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation. Read the setup for the function: def calcWeeklyWages ( totalHours , hourlyWage ): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. ''' The problem clearly indicates two cases: when no more than 40 hours are worked or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine. You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format ( String Formats for Float Precision ) to show two decimal places for the cents in the answer: def calcWeeklyWages ( totalHours , hourlyWage ): '''Return the total weekly wages for a worker working totalHours, with a given regular hourlyWage. Include overtime for hours over 40. ''' if totalHours <= 40 : totalWages = hourlyWage * totalHours else : overtime = totalHours - 40 totalWages = hourlyWage * 40 + ( 1.5 * hourlyWage ) * overtime return totalWages def main (): hours = float ( input ( 'Enter hours worked: ' )) wage = float ( input ( 'Enter dollars paid per hour: ' )) total = calcWeeklyWages ( hours , wage ) print ( 'Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.' . 
format(**locals()))

    main()

Here the input was intended to be numeric, but it could be decimal, so the conversion from string was via float, not int. Below is an equivalent alternative version of the body of calcWeeklyWages, used in wages1.py. It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem!

    if totalHours <= 40:
        regularHours = totalHours
        overtime = 0
    else:
        overtime = totalHours - 40
        regularHours = 40
    return hourlyWage * regularHours + (1.5 * hourlyWage) * overtime

The in Boolean operator: There are also Boolean operators that are applied to types other than numbers. A useful Boolean operator is in, checking membership in a sequence:

    >>> vals = ['this', 'is', 'it']
    >>> 'is' in vals
    True
    >>> 'was' in vals
    False

It can also be used with not, as not in, to mean the opposite:

    >>> vals = ['this', 'is', 'it']
    >>> 'is' not in vals
    False
    >>> 'was' not in vals
    True

In general the two versions are:

    item in sequence
    item not in sequence

Detecting the need for if statements: Like with planning programs needing for statements, you want to be able to translate English descriptions of problems that would naturally include if or if-else statements. What are some words or phrases or ideas that suggest the use of these statements? Think of your own and then compare to a few I gave: [2]

3.1.4.1. Graduate Exercise

Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)

3.1.4.2. Heads or Tails Exercise

Write a program headstails.py. It should include a function flip(), that simulates a single flip of a coin: It randomly prints either Heads or Tails.
Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2) , and use an if - else statement to print Heads when the result is 0, and Tails otherwise. In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails . 3.1.4.3. Strange Function Exercise ¶ Save the example program jumpFuncStub.py as jumpFunc.py , and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if - else statement (hint [3] ). In the main function definition use a for -each loop, the range function, and the jump function. The jump function is introduced for use in Strange Sequence Exercise , and others after that. 3.1.5. Multiple Tests and if - elif Statements ¶ Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False , so the only direct choice is between two options. As anyone who has played “20 Questions” knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since most any kind of statement can be placed in an indented statement block, one choice is a further if statement. For instance consider a function to convert a numerical grade to a letter grade, ‘A’, ‘B’, ‘C’, ‘D’ or ‘F’, where the cutoffs for ‘A’, ‘B’, ‘C’, and ‘D’ are 90, 80, 70, and 60 respectively. 
One way to write the function would be to test for one grade at a time, and resolve all the remaining possibilities inside the next else clause:

    def letterGrade(score):
        if score >= 90:
            letter = 'A'
        else:   # grade must be B, C, D or F
            if score >= 80:
                letter = 'B'
            else:   # grade must be C, D or F
                if score >= 70:
                    letter = 'C'
                else:   # grade must be D or F
                    if score >= 60:
                        letter = 'D'
                    else:
                        letter = 'F'
        return letter

This repeatedly increasing indentation with an if statement as the else block can be annoying and distracting. A preferred alternative in this situation, that avoids all this indentation, is to combine each else and if block into an elif block:

    def letterGrade(score):
        if score >= 90:
            letter = 'A'
        elif score >= 80:
            letter = 'B'
        elif score >= 70:
            letter = 'C'
        elif score >= 60:
            letter = 'D'
        else:
            letter = 'F'
        return letter

The most elaborate syntax for an if-elif-else statement is indicated in general below:

    if condition1:
        indentedStatementBlockForTrueCondition1
    elif condition2:
        indentedStatementBlockForFirstTrueCondition2
    elif condition3:
        indentedStatementBlockForFirstTrueCondition3
    elif condition4:
        indentedStatementBlockForFirstTrueCondition4
    else:
        indentedStatementBlockForEachConditionFalse

The if, each elif, and the final else lines are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed. It is the one corresponding to the first True condition, or, if all conditions are False, it is the block after the final else line. Be careful of the strange Python contraction. It is elif, not elseif. A program testing the letterGrade function is in example program grade1.py. See Grade Exercise. A final alternative for if statements: if-elif-... with no else. This would mean changing the syntax for if-elif-else above so the final else: and the block after it would be omitted.
It is similar to the basic if statement without an else, in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true. With an else included, exactly one of the indented blocks is executed. Without an else, at most one of the indented blocks is executed.

    if weight > 120:
        print('Sorry, we can not take a suitcase that heavy.')
    elif weight > 50:
        print('There is a $25 charge for luggage that heavy.')

This if-elif statement only prints a line if there is a problem with the weight of the suitcase.

3.1.5.1. Sign Exercise

Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive', 'negative', or 'zero'.

3.1.5.2. Grade Exercise

In Idle, load grade1.py and save it as grade2.py. Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [4] Be sure to run your new version and test with different inputs that test all the different paths through the program. Be careful to test around cut-off points. What does a grade of 79.6 imply? What about exactly 80?

3.1.5.3. Wages Exercise

* Modify the wages.py or the wages1.py example to create a program wages2.py that assumes people are paid double time for hours over 60. Hence they get paid for at most 20 hours overtime at 1.5 times the normal rate. For example, a person working 65 hours with a regular wage of $10 per hour would work at $10 per hour for 40 hours, at 1.5 * $10 for 20 hours of overtime, and 2 * $10 for 5 hours of double time, for a total of 10*40 + 1.5*10*20 + 2*10*5 = $800. You may find wages1.py easier to adapt than wages.py. Be sure to test all paths through the program! Your program is likely to be a modification of a program where some choices worked before, but once you change things, retest for all the cases! Changes can mess up things that worked before.

3.1.6.
Nesting Control-Flow Statements

The power of a language like Python comes largely from the variety of ways basic statements can be combined. In particular, for and if statements can be nested inside each other's indented blocks. For example, suppose you want to print only the positive numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now.

    def printAllPositive(numberList):
        '''Print only the positive numbers in numberList.'''

For example, suppose numberList is [3, -5, 2, -1, 0, 7]. You want to process a list, so that suggests a for-each loop,

    for num in numberList:

but a for-each loop runs the same code body for each element of the list, and we only want

    print(num)

for some of them. That seems like a major obstacle, but look more closely at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: Every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0. Try loading into Idle and running the example program onlyPositive.py, whose code is shown below. It ends with a line testing the function:

    def printAllPositive(numberList):
        '''Print only the positive numbers in numberList.'''
        for num in numberList:
            if num > 0:
                print(num)

    printAllPositive([3, -5, 2, -1, 0, 7])

This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too. The rest of this section deals with graphical examples. Run example program bounce1.py.
It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell bounceBall ( - 3 , 1 ) The parameters give the amount the shape moves in each animation step. You can try other values in the Shell , preferably with magnitudes less than 10. For the remainder of the description of this example, read the extracted text pieces. The animations before this were totally scripted, saying exactly how many moves in which direction, but in this case the direction of motion changes with every bounce. The program has a graphic object shape and the central animation step is shape . move ( dx , dy ) but in this case, dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign: dx = - dx but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time - suggesting an if statement. Still the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce. The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be half way off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be the length of the radius away, so actually xLow is the radius of the ball. Animation goes quickly in small steps, so I cheat. 
I allow the ball to take one (small, quick) step past where it really should go ( xLow ), and then we reverse it so it comes back to where it belongs. In particular if x < xLow : dx = - dx There are similar bounding variables xHigh , yLow and yHigh , all the radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is if x < xLow : dx = - dx if x > xHigh : dx = - dx if y < yLow : dy = - dy if y > yHigh : dy = - dy This approach would cause there to be some extra testing: If it is true that x < xLow , then it is impossible for it to be true that x > xHigh , so we do not need both tests together. We avoid unnecessary tests with an elif clause (for both x and y): if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy Note that the middle if is not changed to an elif , because it is possible for the ball to reach a corner , and need both dx and dy reversed. The program also uses several methods to read part of the state of graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, and it can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also each coordinate of a Point can be accessed with the getX() and getY() methods. This explains the new features in the central function defined for bouncing around in a box, bounceInBox . The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.) def bounceInBox ( shape , dx , dy , xLow , xHigh , yLow , yHigh ): ''' Animate a shape moving in jumps (dx, dy), bouncing when its center reaches the low and high x and y coordinates. ''' delay = . 005 for i in range ( 600 ): shape . move ( dx , dy ) center = shape . 
getCenter () x = center . getX () y = center . getY () if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy time . sleep ( delay ) The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint . The getRandomPoint function uses the randrange function from the module random . Note that in parameters for both the functions range and randrange , the end stated is past the last value actually desired: def getRandomPoint ( xLow , xHigh , yLow , yHigh ): '''Return a random Point with coordinates in the range specified.''' x = random . randrange ( xLow , xHigh + 1 ) y = random . randrange ( yLow , yHigh + 1 ) return Point ( x , y ) The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions! ''' Show a ball bouncing off the sides of the window. ''' from graphics import * import time , random def bounceInBox ( shape , dx , dy , xLow , xHigh , yLow , yHigh ): ''' Animate a shape moving in jumps (dx, dy), bouncing when its center reaches the low and high x and y coordinates. ''' delay = . 005 for i in range ( 600 ): shape . move ( dx , dy ) center = shape . getCenter () x = center . getX () y = center . getY () if x < xLow : dx = - dx elif x > xHigh : dx = - dx if y < yLow : dy = - dy elif y > yHigh : dy = - dy time . sleep ( delay ) def getRandomPoint ( xLow , xHigh , yLow , yHigh ): '''Return a random Point with coordinates in the range specified.''' x = random . randrange ( xLow , xHigh + 1 ) y = random . randrange ( yLow , yHigh + 1 ) return Point ( x , y ) def makeDisk ( center , radius , win ): '''return a red disk that is drawn in win with given center and radius.''' disk = Circle ( center , radius ) disk . 
setOutline("red") disk.setFill("red") disk.draw(win) return disk

    def bounceBall(dx, dy):
        '''Make a ball bounce around the screen,
        initially moving by (dx, dy) at each jump.'''
        win = GraphWin('Ball Bounce', 290, 290)
        win.yUp()

        radius = 10
        xLow = radius  # center is separated from the wall by the radius at a bounce
        xHigh = win.getWidth() - radius
        yLow = radius
        yHigh = win.getHeight() - radius

        center = getRandomPoint(xLow, xHigh, yLow, yHigh)
        ball = makeDisk(center, radius, win)

        bounceInBox(ball, dx, dy, xLow, xHigh, yLow, yHigh)
        win.close()

    bounceBall(3, 5)

3.1.6.1. Short String Exercise

Write a program short.py with a function printShort with heading:

    def printShort(strings):
        '''Given a list of strings, print the ones with at most three characters.
        >>> printShort(['a', 'long', 'one'])
        a
        one
        '''

In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function. The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This part begins with a line starting with >>>. Other exercises and examples will also document behavior in the Shell.

3.1.6.2. Even Print Exercise

Write a program even1.py with a function printEven with heading:

    def printEven(nums):
        '''Given a list of integers nums, print the even ones.
        >>> printEven([4, 1, 3, 2, 7])
        4
        2
        '''

In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0.

3.1.6.3. Even List Exercise

Write a program even2.py with a function chooseEven with heading:

    def chooseEven(nums):
        '''Given a list of integers, nums, return a list containing only the even ones.
>>> chooseEven([4, 1, 3, 2, 7]) [4, 2] ''' In your main program, test the function, calling it several times with different lists of integers and printing the results in the main program. (The documentation string illustrates the function call in the Python shell, where the return value is automatically printed. Remember, that in a program, you only print what you explicitly say to print.) Hint: In the function, create a new list, and append the appropriate numbers to it, before returning the result. 3.1.6.4. Unique List Exercise ¶ * The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates, forming a set from the list. There is a disadvantage in the conversion, though: Sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem: Copy madlib2.py to madlib2a.py , and add a function with this heading: def uniqueList ( aList ): ''' Return a new list that includes the first occurrence of each value in aList, and omits later repeats. The returned list should include the first occurrences of values in aList in their original order. >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug'] >>> uniqueList(vals) ['cat', 'dog', 'bug', 'ant'] ''' Hint: Process aList in order. Use the in syntax to only append elements to a new list that are not already in the new list. After perfecting the uniqueList function, replace the last line of getKeys , so it uses uniqueList to remove duplicates in keyList . Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string. 3.1.7. 
Compound Boolean Expressions

To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition:

    credits >= 120 and GPA >= 2.0

This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be:

    credits = float(input('How many units of credit do you have? '))
    GPA = float(input('What is your GPA? '))
    if credits >= 120 and GPA >= 2.0:
        print('You are eligible to graduate!')
    else:
        print('You are not eligible to graduate.')

The new Python syntax is for the operator and:

    condition1 and condition2

The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false. See Congress Exercise.

In the last example in the previous section, there was an if-elif statement where both tests had the same block to be done if the condition was true:

    if x < xLow:
        dx = -dx
    elif x > xHigh:
        dx = -dx

There is a simpler way to state this in a sentence: If x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python:

    if x < xLow or x > xHigh:
        dx = -dx

The word or makes another compound condition:

    condition1 or condition2

is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word "or" is used in English. Other times in English "or" is used to mean exactly one alternative is true.

Warning: When translating a problem stated in English using "or", be careful to determine whether the meaning matches Python's or.

It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting:

    def isInside(rect, point):
        '''Return True if the point is inside the Rectangle rect.'''
        pt1 = rect.getP1()
        pt2 = rect.getP2()

Recall that a Rectangle is specified in its constructor by two diagonally opposite Points.
This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2 . The program calls the points obtained this way pt1 and pt2 . The x and y coordinates of pt1 , pt2 , and point can be recovered with the methods of the Point type, getX() and getY() . Suppose that I introduce variables for the x coordinates of pt1 , point , and pt2 , calling these x-coordinates end1 , val , and end2 , respectively. On first try you might decide that the needed mathematical relationship to test is end1 <= val <= end2 Unfortunately, this is not enough: The only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200; end2 is 100, and val is 120. In this latter case val is between end1 and end2 , but substituting into the expression above 200 <= 120 <= 100 is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts: def isBetween ( val , end1 , end2 ): '''Return True if val is between the ends. The ends do not need to be in increasing order.''' Clearly this is true if the original expression, end1 <= val <= end2 , is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1 . How do we combine these two possibilities? The Boolean connectives to consider are and and or . Which applies? You only need one to be true, so or is the proper connective: A correct but redundant function body would be: if end1 <= val <= end2 or end2 <= val <= end1 : return True else : return False Check the meaning: if the compound expression is True , return True . 
If the condition is False, return False – in either case return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself!

    return end1 <= val <= end2 or end2 <= val <= end1

Note: In general you should not need an if-else statement to choose between true and false values! Operate directly on the Boolean expression.

A side comment on expressions like

    end1 <= val <= end2

Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is:

    end1 <= val and val <= end2

So much for the auxiliary function isBetween. Back to the isInside function. You can use the isBetween function to check the x coordinates,

    isBetween(point.getX(), pt1.getX(), pt2.getX())

and to check the y coordinates,

    isBetween(point.getY(), pt1.getY(), pt2.getY())

Again the question arises: how do you combine the two tests? In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and. Think how to finish the isInside method. Hint: [5]

Sometimes you want to test the opposite of a condition. As in English you can use the word not. For instance, to test if a Point was not inside Rectangle Rect, you could use the condition

    not isInside(rect, point)

In general, not condition is True when condition is False, and False when condition is True. The example program chooseButton1.py, shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out.
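Before looking at the full program, the isBetween helper can be exercised on its own, without any graphics, to confirm that the order of the ends does not matter (the values here are made up for illustration):

```python
def isBetween(val, end1, end2):
    '''Return True if val is between the ends or equal to either.
    The ends do not need to be in increasing order.'''
    return end1 <= val <= end2 or end2 <= val <= end1

print(isBetween(120, 100, 200))   # True
print(isBetween(120, 200, 100))   # True: ends reversed, same answer
print(isBetween(250, 100, 200))   # False
```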
It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview: The program includes the functions isBetween and isInside that have already been discussed. The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions. The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.

    '''Make a choice of colors via mouse clicks in Rectangles --
    A demonstration of Boolean operators and Boolean functions.'''

    from graphics import *

    def isBetween(x, end1, end2):
        '''Return True if x is between the ends or equal to either.
        The ends do not need to be in increasing order.'''
        return end1 <= x <= end2 or end2 <= x <= end1

    def isInside(point, rect):
        '''Return True if the point is inside the Rectangle rect.'''
        pt1 = rect.getP1()
        pt2 = rect.getP2()
        return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
               isBetween(point.getY(), pt1.getY(), pt2.getY())

    def makeColoredRect(corner, width, height, color, win):
        '''Return a Rectangle drawn in win with the upper left corner
        and color specified.'''
        corner2 = corner.clone()
        corner2.move(width, -height)
        rect = Rectangle(corner, corner2)
        rect.setFill(color)
        rect.draw(win)
        return rect

    def main():
        win = GraphWin('pick Colors', 400, 400)
        win.yUp()  # right side up coordinates

        redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win)
        yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win)
        blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win)

        house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win)
        door = makeColoredRect(Point(90, 150), 40, 100, 'white', win)
        roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300))
        roof.setFill('black')
        roof.draw(win)

        msg = Text(Point(win.getWidth()/2, 375), 'Click to choose a house color.')
        msg.draw(win)
        pt = win.getMouse()

        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        house.setFill(color)

        msg.setText('Click to choose a door color.')
        pt = win.getMouse()

        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        door.setFill(color)

        win.promptClose(msg)

    main()

The only further new feature used is in the long return statement in isInside:

    return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
           isBetween(point.getY(), pt1.getY(), pt2.getY())

Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormous long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash (\) to indicate the statement continues on the next line. This is not particularly neat, but it is a rather rare situation.
Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ';' and pay no attention to newlines.) Extra parentheses here would not hurt, so an alternative would be:

    return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and
            isBetween(point.getY(), pt1.getY(), pt2.getY()))

The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.

3.1.7.1. Congress Exercise

A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out whether the person is eligible to be a Senator or not.

A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:

    You are eligible for both the House and Senate.
    You are eligible only for the House.
    You are ineligible for Congress.

3.1.8. More String Methods

Here are a few more string methods useful in the next exercises, assuming the methods are applied to a string s:

s.startswith(pre) returns True if string s starts with string pre: both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1 - 2 - 3'.startswith('-') is False.

s.endswith(suffix) returns True if string s ends with string suffix: both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1 - 2 - 3'.endswith('-') is False.

s.replace(sub, replacement, count) returns a new string with up to the first count occurrences of string sub replaced by replacement.
The replacement can be the empty string to delete sub. For example:

    s = '-123'
    t = s.replace('-', '', 1)        # t equals '123'
    t = t.replace('-', '', 1)        # t is still equal to '123'
    u = '.2.3.4.'
    v = u.replace('.', '', 2)        # v equals '23.4.'
    w = u.replace('.', ' dot ', 5)   # w equals '2 dot 3 dot 4 dot '

3.1.8.1. Article Start Exercise

In library alphabetizing, if the initial word is an article ("The", "A", "An"), then it is ignored when ordering entries. Write a program completing this function, and then testing it:

    def startsWithArticle(title):
        '''Return True if the first word of title is "The", "A" or "An".'''

Be careful: if the title starts with "There", it does not start with an article. What should you be testing for?

3.1.8.2. Is Number String Exercise

** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise.

A legal whole number string consists entirely of digits. Luckily strings have an isdigit method, which is true when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number! In both parts be sure to test carefully. Not only confirm that all appropriate strings return True. Also be sure to test that you return False for all sorts of bad strings.

Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
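Before writing isIntStr, it helps to probe isdigit on edge cases in the Shell; the sketch below shows why isdigit alone is not enough once a minus sign is allowed (it stops short of solving the exercise):

```python
# isdigit is True only for a nonempty, all-digit string.
print('2397'.isdigit())   # True
print('23a'.isdigit())    # False: contains a letter
print(''.isdigit())       # False: empty string
print('-123'.isdigit())   # False: the minus sign is not a digit,
                          # so isdigit alone cannot recognize '-123'
                          # as a legal integer string

# startswith checks only the front of the string.
print('1 - 2 - 3'.startswith('-'))  # False
print('-123'.startswith('-'))       # True
```

Combining a startswith test for the optional sign with an isdigit test on the rest of the string is one route toward the exercise.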
Complete the function isIntStr.

Complete the function isDecimalStr, which introduces the possibility of a decimal point (though a decimal point is not required). The string methods mentioned in the previous part remain useful.

[1] This is an improvement that is new in Python 3.
[2] "In this case do ___; otherwise", "if ___, then", "when ___ is true, then", "___ depends on whether"
[3] If you divide an even number by 2, what is the remainder? Use this idea in your if condition.
[4] 4 tests to distinguish the 5 cases, as in the previous version.
[5] Once again, you are calculating and returning a Boolean result. You do not need an if-else statement.

© Copyright 2019, Dr. Andrew N. Harrington. Last updated on Jan 05, 2020.
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/DownloadDistValuesCacheBehavior.html#DownloadDistValuesForwardHeaders

Cache behavior settings - Amazon CloudFront
Amazon CloudFront Documentation, Developer Guide

Cache behavior settings

A cache behavior lets you configure a variety of CloudFront functionality for a given URL path pattern for files on your website. For example, one cache behavior might apply to all .jpg files in the images directory on a web server that you are using as an origin server for CloudFront. The functionality that you can configure for each cache behavior includes:

- The path pattern.
- If you have configured multiple origins for your CloudFront distribution, which origin you want CloudFront to forward your requests to.
- Whether to forward query strings to your origin.
- Whether accessing the specified files requires signed URLs.
- Whether to require viewers to use HTTPS to access those files.
- The minimum amount of time that those files stay in the CloudFront cache, regardless of the value of any Cache-Control headers that your origin adds to the files.

When you create a new distribution, you specify settings for the default cache behavior, which automatically forwards all requests to the origin that you specify when you create the distribution. After you create a distribution, you can create additional cache behaviors that define how CloudFront responds when it receives a request for objects that match a path pattern, for example, *.jpg. If you create additional cache behaviors, the default cache behavior is always the last one to be processed. Other cache behaviors are processed in the order in which they appear in the CloudFront console or, if you are using the CloudFront API, in the order in which they are listed in the DistributionConfig element for the distribution. For more information, see Path pattern.

When you create a cache behavior, you specify the origin from which you want CloudFront to get objects. As a result, if you want CloudFront to distribute objects from all of your origins, you must create at least as many cache behaviors (including the default cache behavior) as you have origins. For example, if you have two origins and only the default cache behavior, the default cache behavior causes CloudFront to get objects from one of the origins, but the other origin is never used.

For information about the current maximum number of cache behaviors that you can add to a distribution, or to request a higher quota (formerly known as a limit), see General quotas on distributions.
Topics

Path pattern
Origin or origin group
Viewer protocol policy
Allowed HTTP methods
Field-level encryption config
Cached HTTP methods
Allow gRPC requests over HTTP/2
Cache based on selected request headers
Allowlist headers
Object caching
Minimum TTL
Maximum TTL
Default TTL
Forward cookies
Allowlist cookies
Query string forwarding and caching
Query string allowlist
Smooth Streaming
Restrict viewer access (use signed URLs or signed cookies)
Trusted signers
AWS account numbers
Compress objects automatically
CloudFront event
Lambda function ARN
Include body

Path pattern

The path pattern (for example, images/*.jpg) specifies which requests you want this cache behavior to apply to. When CloudFront receives an end-user request, the requested path is compared with path patterns in the order in which cache behaviors are listed in the distribution. The first match determines which cache behavior is applied to that request. For example, suppose you have three cache behaviors with the following three path patterns, in this order:

    images/*.jpg
    images/*
    *.gif

Note: You can optionally include a slash (/) at the beginning of the path pattern, for example, /images/*.jpg. CloudFront behavior is the same with or without the leading /. If you don't specify the / at the beginning of the path, it is implied automatically; CloudFront treats the path the same with or without the leading /. For example, CloudFront treats /*product.jpg the same as *product.jpg.
A request for the file images/sample.gif doesn't satisfy the first path pattern, so the associated cache behavior isn't applied to the request. The file does satisfy the second path pattern, so the cache behavior associated with the second path pattern is applied even though the request also matches the third path pattern.

Note: When you create a new distribution, the value of Path pattern for the default cache behavior is set to * (all files) and can't be changed. This value causes CloudFront to forward all requests for your objects to the origin that you specified in the Origin domain field. If a request for an object doesn't match the path pattern for any other cache behavior, CloudFront applies the behavior that you specify in the default cache behavior.

Important: Define path patterns and their sequence carefully to avoid giving users access to content that you don't want them to have. Suppose a request matches the path pattern for two cache behaviors. The first cache behavior doesn't require signed URLs or signed cookies, and the second one does require signed URLs. Users can access the objects without using a signed URL because CloudFront processes the cache behavior associated with the first match.

If you're working with a MediaPackage channel, you must include specific path patterns for the cache behavior that you define for the endpoint type for your origin. For example, for a DASH endpoint, you would type *.mpd for Path pattern. For more information and specific instructions, see Serving live video formatted with AWS Elemental MediaPackage.
The path that you specify applies to requests for all files in the specified directory and in its subdirectories. CloudFront doesn't consider query strings or cookies when evaluating the path pattern. For example, if an images directory contains product1 and product2 subdirectories, the path pattern images/*.jpg applies to requests for any .jpg file in the images, images/product1, and images/product2 directories. If you want to apply a different cache behavior to the files in the images/product1 directory than to the files in the images and images/product2 directories, create a separate cache behavior for images/product1 and move that cache behavior to a position above (before) the cache behavior for the images directory.

You can use the following wildcard characters in your path pattern:

    * matches 0 or more characters.
    ? matches exactly 1 character.

The following examples show how the wildcard characters work:

Path pattern — Files that match the path pattern
    *.jpg — All .jpg files.
    images/*.jpg — All .jpg files in the images directory and in subdirectories under images.
    a*.jpg — All .jpg files whose file name begins with a, for example, apple.jpg and appalachian_trail_2012_05_21.jpg. Also all .jpg files whose file path begins with a, for example, abra/cadabra/magic.jpg.
    a??.jpg — All .jpg files whose file name begins with a followed by exactly two other characters, for example, ant.jpg and abe.jpg.
    *.doc* — All files whose file name extension begins with .doc, for example, .doc, .docx, and .docm files. You can't use the path pattern *.doc? in this case, because it wouldn't apply to requests for .doc files; the ? wildcard replaces exactly one character.

The maximum length of a path pattern is 255 characters.
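The wildcard rules and first-match ordering described above can be modeled in a few lines of Python. This is a simplified sketch, not AWS code: it ignores path normalization and the allowed-character restrictions, and the pattern_to_regex and match_behavior names are invented for illustration:

```python
import re

def pattern_to_regex(path_pattern):
    '''Translate a CloudFront-style path pattern into a regular
    expression: * matches 0 or more characters, ? exactly one.'''
    escaped = re.escape(path_pattern)
    regex = escaped.replace(r'\*', '.*').replace(r'\?', '.')
    return re.compile('^' + regex + '$')

def match_behavior(path, patterns):
    '''Return the first pattern, in listed order, that matches the
    requested path, falling back to the default behavior "*".'''
    for pattern in patterns:
        if pattern_to_regex(pattern).match(path):
            return pattern
    return '*'

behaviors = ['images/*.jpg', 'images/*', '*.gif']

# images/sample.gif skips the first pattern but matches the second,
# even though it would also match the third.
print(match_behavior('images/sample.gif', behaviors))      # images/*
print(match_behavior('images/product1/a.jpg', behaviors))  # images/*.jpg
print(match_behavior('banner.gif', behaviors))             # *.gif
print(match_behavior('index.html', behaviors))             # * (default)
```

Note that, as in the documented examples, * here matches across / boundaries, which is why images/*.jpg also matches files in subdirectories of images.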
The value can contain any of the following characters:

    A-Z, a-z (path patterns are case-sensitive, so the path pattern *.jpg doesn't apply to the file LOGO.JPG)
    0-9
    _ - . * $ / ~ " ' @ : +
    &, passed and returned as &amp;

Path normalization

CloudFront normalizes URI paths according to RFC 3986 and then matches the path against the correct cache behavior. After the cache behavior is matched, CloudFront sends the raw URI path to the origin. If there is no match, requests fall back to the default cache behavior. Some characters are normalized and removed from the path, such as multiple slashes (//) or periods (..). This can alter the URL that CloudFront uses to match the intended cache behavior.

Example: You specify the cache behavior paths /a/b* and /a*. A viewer that sends the path /a/b?c=1 matches the /a/b* cache behavior. A viewer that sends the path /a/b/..?c=1 matches the /a* cache behavior. To prevent paths from being normalized, you can update the request paths or the cache behavior's path pattern.

Origin or origin group

This setting applies only when you create or update a cache behavior for an existing distribution. Enter the value of an existing origin or origin group. This identifies the origin or origin group that you want CloudFront to route requests to when a request (such as https://example.com/logo.jpg) matches the path pattern for a cache behavior (such as *.jpg) or for the default cache behavior (*).
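The normalization example can be reproduced with Python's posixpath.normpath, which collapses // and dot-segments in much the same spirit as RFC 3986 path normalization (a rough illustration, not CloudFront's exact algorithm; the ?c=1 query string is not part of the path and is left out):

```python
import posixpath

# Collapsing dot-segments and repeated slashes, as in the example:
# a viewer sending /a/b/.. ends up matched as /a.
print(posixpath.normpath('/a/b/..'))   # /a
print(posixpath.normpath('/a//b'))     # /a/b
print(posixpath.normpath('/a/./b'))    # /a/b

# After normalization, /a/b/.. no longer starts with /a/b,
# so it would fall through to the broader /a* pattern.
print(posixpath.normpath('/a/b/..').startswith('/a/b'))  # False
```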
Viewer protocol policy

Choose the protocol policy that you want viewers to use to access your content in CloudFront edge locations:

    HTTP and HTTPS: Viewers can use both protocols.
    Redirect HTTP to HTTPS: Viewers can use both protocols, but HTTP requests are automatically redirected to HTTPS requests.
    HTTPS Only: Viewers can access your content only if they use HTTPS.

For more information, see Requiring HTTPS for communication between viewers and CloudFront.

Allowed HTTP methods

Specify the HTTP methods that you want CloudFront to process and forward to your origin:

    GET, HEAD: You can use CloudFront only to get objects from your origin or to get object headers.
    GET, HEAD, OPTIONS: You can use CloudFront only to get objects from your origin, get object headers, or retrieve a list of the options that your origin server supports.
    GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE: You can use CloudFront to get, add, update, and delete objects, and to get object headers. You can also perform other POST operations, such as submitting data from a web form.

Note: If you use gRPC in your workload, you must select GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE. gRPC workloads require the POST method. For more information, see Using gRPC with CloudFront distributions.

CloudFront caches responses to GET and HEAD requests and, optionally, OPTIONS requests. Responses to OPTIONS requests are cached separately from responses to GET and HEAD requests (the OPTIONS method is included in the cache key for OPTIONS requests).
CloudFront does not cache responses to requests that use the other methods.

Important: If you choose GET, HEAD, OPTIONS or GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE, you might need to restrict access to your Amazon S3 bucket or to your custom origin so that users can't perform operations that you don't want them to. The following examples explain how to restrict access:

If you're using Amazon S3 as an origin for your distribution: Create a CloudFront origin access control to restrict access to your Amazon S3 content, and grant permissions to the origin access control. For example, if you configure CloudFront to accept and forward these methods only because you want to use PUT, you must still configure your Amazon S3 bucket policies to handle DELETE requests appropriately. For more information, see Restricting access to an Amazon S3 origin.

If you're using a custom origin: Configure your origin server to handle all methods. For example, if you configure CloudFront to accept and forward these methods only because you want to use POST, you must configure your origin server to handle DELETE requests appropriately.

Field-level encryption config

If you want to enforce field-level encryption on specific data fields, choose a field-level encryption configuration in the drop-down list. For more information, see Using field-level encryption to help protect sensitive information.

Cached HTTP methods

Specify whether you want CloudFront to cache the response from your origin when a viewer submits an OPTIONS request. CloudFront always caches responses to GET and HEAD requests.

Allow gRPC requests over HTTP/2

Specify whether you want your distribution to allow gRPC requests.
To enable gRPC, select the following settings:

    For Allowed HTTP methods, select the GET, HEAD, OPTIONS, PUT, POST, PATCH, DELETE methods. gRPC requires the POST method.
    Select the gRPC check box that appears after you select the POST method.
    For Supported HTTP versions, select HTTP/2.

For more information, see Using gRPC with CloudFront distributions.

Cache based on selected request headers

Specify whether you want CloudFront to cache objects based on the values of specified headers:

    None (improves caching): CloudFront doesn't cache your objects based on header values.
    Allowlist: CloudFront caches your objects based only on the values of the specified headers. Use Allowlist headers to choose the headers that you want CloudFront to base caching on.
    All: CloudFront doesn't cache the objects that are associated with this cache behavior. Instead, CloudFront sends every request to the origin. (Not recommended for Amazon S3 origins.)

Regardless of the option that you choose, CloudFront forwards certain headers to your origin and takes specific actions based on the headers that you forward. For more information about how CloudFront handles header forwarding, see HTTP request headers and CloudFront behavior (custom and Amazon S3 origins). For more information about how to configure caching in CloudFront using request headers, see Caching content based on request headers.

Allowlist headers

This setting applies only when you choose Allowlist for Cache based on selected request headers.
Specify the headers that you want CloudFront to consider when caching your objects. Select headers from the list of available headers and choose Add. To forward a custom header, enter its name in the field and choose Add custom. For information about the current maximum number of headers that you can allowlist for each cache behavior, or to request a higher quota (formerly known as a limit), see Quotas on headers.

Object caching

If your origin server adds a Cache-Control header to your objects to control how long they stay in the CloudFront cache, and you don't want to change the Cache-Control value, choose Use Origin Cache Headers. To specify a minimum and maximum time that objects stay in the CloudFront cache regardless of Cache-Control headers, and a default time that an object stays in the CloudFront cache when it is missing the Cache-Control header, choose Customize. Then specify values in the Minimum TTL, Default TTL, and Maximum TTL fields. For more information, see Managing how long content stays in the cache (expiration).

Minimum TTL

Specify the minimum amount of time, in seconds, that you want objects to stay in the CloudFront cache before CloudFront sends another request to the origin to see whether the object has been updated.
Warning: If the minimum TTL is greater than 0, CloudFront caches the content for at least the time specified in the cache policy's minimum TTL, even if the Cache-Control: no-cache, no-store, or private directives are present in the origin headers. For more information, see Managing how long content stays in the cache (expiration).

Maximum TTL

Specify the maximum amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront queries your origin to determine whether the object has been updated. The value that you specify for Maximum TTL applies only when your custom origin adds HTTP headers such as Cache-Control max-age, Cache-Control s-maxage, or Expires to objects. For more information, see Managing how long content stays in the cache (expiration).

To specify a value for Maximum TTL, choose the Customize option for the Object caching setting. The default value for Maximum TTL is 31,536,000 seconds (one year). If you change the value of Minimum TTL or Default TTL to more than 31,536,000 seconds, the default value of Maximum TTL changes to the value of Default TTL.

Default TTL

Specify the default amount of time, in seconds, that you want objects to stay in CloudFront caches before CloudFront forwards another request to your origin to determine whether the object has been updated. The value that you specify for Default TTL applies only when your origin does not add HTTP headers such as Cache-Control max-age, Cache-Control s-maxage, or Expires to objects.
For more information, see Managing how long content stays in the cache (expiration). To specify a value for Default TTL, choose the Customize option for the Object caching setting. The default value for Default TTL is 86,400 seconds (one day). If you change the value of Minimum TTL to more than 86,400 seconds, the default value of Default TTL changes to the value of Minimum TTL.

Forward cookies

Note: For Amazon S3 origins, this option applies only to buckets that are configured as a website endpoint.

Specify whether you want CloudFront to forward cookies to your origin server and, if so, which ones. If you choose to forward only selected cookies (an allowlist of cookies), enter the cookie names in the Allowlist cookies field. If you choose All, CloudFront forwards all cookies regardless of how many your application uses.

Amazon S3 doesn't process cookies, and forwarding cookies to the origin reduces cacheability. For cache behaviors that forward requests to an Amazon S3 origin, choose None for Forward cookies. For more information about forwarding cookies to the origin, see Caching content based on cookies.

Allowlist cookies

Note: For Amazon S3 origins, this option applies only to buckets that are configured as a website endpoint.

If you chose Allowlist in the Forward cookies list, enter in the Allowlist cookies field the names of the cookies that you want CloudFront to forward to your origin server for this cache behavior. Enter each cookie name on its own line.
You can use the following wildcards when specifying cookie names:

* matches 0 or more characters in the cookie name.
? matches exactly one character in the cookie name.

For example, suppose viewer requests for an object include a cookie named userid_member-number, where the value of member-number is unique for each user, and you want CloudFront to cache a separate version of the object for each member. You could accomplish this by forwarding all cookies to the origin, but viewer requests include some cookies that you don't want CloudFront to cache on. Alternatively, you could specify the following value as a cookie name, which causes CloudFront to forward to the origin all cookies that begin with userid_: userid_*

For the current maximum number of cookie names that you can allowlist for each cache behavior, or to request a higher quota (formerly known as a limit), see Quotas on cookies (legacy cache settings).

Query String Forwarding and Caching

CloudFront can cache different versions of your content based on the values of query string parameters. Choose one of the following options:

None (Improves Caching): Choose this option if your origin returns the same version of an object regardless of the values of query string parameters. This increases the likelihood that CloudFront can serve a request from the cache, which improves performance and reduces the load on your origin.

Forward all, cache based on allowlist: Choose this option if your origin server returns different versions of your objects based on one or more query string parameters.
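The * and ? wildcard rules above follow shell-style glob semantics, so Python's fnmatch module can mimic the matching. `cookie_allowed` is a hypothetical helper for illustration, not part of any AWS SDK (note that fnmatch additionally supports [seq] character classes, which the rules above do not mention):

```python
from fnmatch import fnmatchcase

def cookie_allowed(cookie_name, allowlist_patterns):
    # True if the cookie name matches any allowlisted pattern:
    # '*' matches zero or more characters, '?' matches exactly one.
    return any(fnmatchcase(cookie_name, pattern) for pattern in allowlist_patterns)


print(cookie_allowed("userid_12345", ["userid_*"]))  # the userid_ example above
print(cookie_allowed("tracking_id", ["userid_*"]))   # not on the allowlist
```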
Then specify the parameters that you want CloudFront to use as a basis for caching in the Query string allowlist field.

Forward all, cache based on all: Choose this option if your origin server returns different versions of your objects for all query string parameters.

For more information about caching based on query string parameters, including ways to improve performance, see Caching content based on query string parameters.

Query String Allowlist

This setting applies only when you choose Forward all, cache based on allowlist for Query String Forwarding and Caching. You can specify the query string parameters that you want CloudFront to use as a basis for caching.

Smooth Streaming

Choose Yes if you want to distribute media files in the Microsoft Smooth Streaming format and you do not have an IIS server. Choose No if you have a Microsoft IIS server that you want to use as an origin to distribute media files in the Microsoft Smooth Streaming format, or if you are not distributing Smooth Streaming media files.

Note: If you specify Yes, you can still distribute other content using this cache behavior if that content matches the value of Path Pattern. For more information, see Configuring video on demand for Microsoft Smooth Streaming.

Restrict Viewer Access (Use Signed URLs or Signed Cookies)

If you want requests for objects that match the PathPattern for this cache behavior to use public URLs, choose No.
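The three query-string modes described above determine which parameters become part of the cache key. A minimal sketch, using hypothetical mode names in place of the console labels:

```python
from urllib.parse import parse_qsl, urlencode, urlsplit

def cache_key(url, mode, allowlist=()):
    # mode is one of 'none', 'allowlist', or 'all' -- hypothetical names for
    # None / Forward all + cache on allowlist / Forward all + cache all.
    parts = urlsplit(url)
    params = parse_qsl(parts.query)
    if mode == "none":
        kept = []                       # query strings never vary the cache
    elif mode == "allowlist":
        kept = sorted((k, v) for k, v in params if k in allowlist)
    else:  # 'all': every parameter differentiates the cached object
        kept = sorted(params)
    return parts.path + ("?" + urlencode(kept) if kept else "")


print(cache_key("/img/cat.jpg?session=abc&size=large", "none"))
print(cache_key("/img/cat.jpg?session=abc&size=large", "allowlist", ("size",)))
print(cache_key("/img/cat.jpg?session=abc&size=large", "all"))
```

Dropping a high-cardinality parameter such as a session ID from the cache key is exactly what makes the allowlist option improve the cache hit ratio.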
If you want requests for objects that match the PathPattern for this cache behavior to use signed URLs, choose Yes. Then specify the AWS accounts that you want to use to create signed URLs; these accounts are known as trusted signers. For more information about trusted signers, see Specifying the signers that can create signed URLs and signed cookies.

Trusted Signers

This setting applies only when you choose Yes for Restrict Viewer Access (Use Signed URLs or Signed Cookies). Choose the AWS accounts that you want to use as trusted signers for this cache behavior:

Self: Use the account that you are currently signed in to the AWS Management Console with as a trusted signer. If you are currently signed in as an IAM user, the associated AWS account is added as a trusted signer.

Specify Accounts: Enter the account numbers of the trusted signers in the AWS Account Numbers field.

To create signed URLs, an AWS account must have at least one active CloudFront key pair.

Important: If you're updating a distribution that you're already using to distribute content, add trusted signers only when you're ready to start generating signed URLs for your objects. After you add trusted signers to a distribution, users must use signed URLs to access the objects that match the PathPattern for this cache behavior.

AWS Account Numbers

This setting applies only when you choose Specify Accounts for Trusted Signers. If you want to create signed URLs using AWS accounts in addition to, or instead of, the current account, enter one AWS account number per line in this field.
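A signed URL carries a policy document that a trusted signer's CloudFront key pair signs. The canned policy is a small JSON document with a fixed shape; the sketch below builds only that document — the RSA signing and base64 encoding steps are omitted, and the domain in the example is a placeholder:

```python
import json

def canned_policy(resource_url, expires_epoch):
    # Canned policy for a CloudFront signed URL: it grants access to a single
    # resource until the given expiration time (Unix epoch seconds).
    policy = {
        "Statement": [
            {
                "Resource": resource_url,
                "Condition": {"DateLessThan": {"AWS:EpochTime": expires_epoch}},
            }
        ]
    }
    # The policy is serialized compactly, without whitespace, before signing.
    return json.dumps(policy, separators=(",", ":"))


print(canned_policy("https://d111111abcdef8.cloudfront.net/image.jpg", 1767225600))
```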
Note the following:

The accounts that you specify must have at least one active CloudFront key pair. For more information, see Creating key pairs for your signers.

You can't create CloudFront key pairs for IAM users, which means you can't use IAM users as trusted signers.

For information about how to get the AWS account number for an account, see Viewing AWS account identifiers in the AWS Account Management Reference Guide.

If you enter the account number for the current account, CloudFront automatically selects the Self check box and removes the account number from the AWS Account Numbers list.

Compress Objects Automatically

If you want CloudFront to automatically compress files of certain types when viewers support compressed content, choose Yes. When CloudFront compresses your content, downloads are faster because the files are smaller, and your web pages render faster for your users. For more information, see Serving compressed files.

CloudFront Event

This setting applies to Lambda Function Associations. You can choose to run a Lambda function when one or more of the following CloudFront events occur:

When CloudFront receives a request from a viewer (viewer request)
Before CloudFront forwards a request to the origin (origin request)
When CloudFront receives a response from the origin (origin response)
Before CloudFront returns the response to the viewer (viewer response)

For more information, see Choosing the event to trigger the function.

Lambda Function ARN

This setting applies to Lambda Function Associations. Specify the Amazon Resource Name (ARN) of the Lambda function that you want to add a trigger for.
For information about how to get the ARN of a function, see step 1 of the procedure Adding triggers by using the CloudFront console.

Include Body

This setting applies to Lambda Function Associations. For more information, see Include body. | 2026-01-13T09:30:35
https://support.microsoft.com/it-it/windows/gestire-i-cookie-in-microsoft-edge-visualizzare-consentire-bloccare-eliminare-e-usare-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: view, allow, block, delete, and use - Microsoft Support
Manage cookies in Microsoft Edge: view, allow, block, delete, and use

Applies to: Windows 10, Windows 11, Microsoft Edge

Cookies are small pieces of data stored on your device by the websites you visit. They serve various purposes, such as remembering your sign-in credentials and site preferences, and tracking user behavior. However, you might want to delete cookies for privacy reasons or to troubleshoot browsing problems. This article provides instructions on how to:

View all cookies
Allow all cookies
Allow cookies from a specific website
Block third-party cookies
Block all cookies
Block cookies from a specific site
Delete all cookies
Delete cookies from a specific site
Delete cookies every time you close the browser
Use cookies to preload pages for faster browsing

View all cookies

Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services.
Select Cookies, then click View all cookies and site data to see all stored cookies and related site information.

Allow all cookies

By allowing cookies, websites can save and retrieve data in your browser, improving your browsing experience by remembering your preferences and sign-in information. Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Allow sites to save and read cookie data (recommended) toggle to allow all cookies.

Allow cookies from a specific site

Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Allowed to save cookies. Select Add site to allow cookies on a per-site basis by entering the site's URL.

Block third-party cookies

If you don't want third-party sites to store cookies on your PC, you can block cookies. However, doing this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Block third-party cookies toggle.

Block all cookies

If you don't want sites to store cookies on your PC, you can block all cookies.
However, doing this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn off Allow sites to save and read cookie data (recommended) to block all cookies.

Block cookies from a specific site

Microsoft Edge lets you block cookies from a specific site; however, this might prevent some pages from displaying correctly, or a site might tell you that you need to allow cookies to view it. To block cookies from a specific site: Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Not allowed to save and read cookies. Select Add site to block cookies on a per-site basis by entering the site's URL.

Delete all cookies

Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Clear browsing data, then choose what to clear next to Clear browsing data now. Under Time range, choose a time range. Select Cookies and other site data, then Clear now.

Note: Alternatively, you can delete cookies by pressing Ctrl + Shift + Delete together and proceeding with steps 4 and 5. All cookies and other site data are deleted for the selected time range. This signs you out of most sites.
Delete cookies from a specific site

Open the Edge browser, select Settings and more > Settings > Privacy, search, and services. Select Cookies, then click View all cookies and site data and search for the site whose cookies you want to delete. Select the down arrow to the right of the site and select Delete. The cookies for the selected site are deleted. Repeat this step for any site whose cookies you want to delete.

Delete cookies every time you close the browser

Open the Edge browser, select Settings and more > Settings > Privacy, search, and services. Select Clear browsing data, then Choose what to clear every time you close the browser. Turn on the Cookies and other site data toggle. Once this feature is turned on, all cookies and other site data are deleted every time you close the Edge browser. This signs you out of most sites.

Use cookies to preload pages for faster browsing

Open the Edge browser, select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the Preload pages for faster browsing and searching toggle.
| 2026-01-13T09:30:35
https://penneo.com/da/use-cases/eidas-compliance/ | Build trust with eIDAS-compliant e-signatures

Be confident that your e-signatures are legally binding and valid across the entire EU. With Penneo, you can easily sign with an advanced electronic signature (AdES) using MitID, or with a qualified electronic signature (QES) via itsme® or passport.

Why choose Penneo? Sign documents online securely and efficiently.

Documents you can trust, even after signing: Advanced and qualified electronic signatures are built on PKI technology, which protects the document's integrity and prevents changes after signing.

The signature ends up with the right person: Signatures are personally linked to the signer.
You can identify who signed, so there is no doubt about who approved what.

Strong evidence if a signature is disputed: Advanced and qualified electronic signatures make it nearly impossible for someone to deny having signed. That gives you solid legal certainty if a disagreement arises.

3,000+ companies, including the four largest audit firms, use Penneo. 60% of all documents sent via Penneo are signed within 24 hours. 81% of all annual reports in Denmark are signed with Penneo.

Which types of electronic signatures does eIDAS define?

The eIDAS regulation defines three types of electronic signatures, each with a different degree of legal validity and security:

Simple electronic signature (SES): A simple electronic signature is legally valid but provides only a low level of assurance about who signed and whether the document was changed afterwards.

Advanced electronic signature (AdES): An advanced electronic signature provides a higher degree of security and legal value than a simple signature. It is typically linked to a digital certificate and makes it possible to identify the signer and to ensure that the document has not been changed after signing.

Qualified electronic signature (QES): A qualified electronic signature has the same legal effect as a handwritten signature throughout the EU. It offers the highest level of legal certainty and requires the signer to use a qualified signing solution issued by an approved provider (QTSP).

See why documents get signed in under 24 hours with Penneo: sign a test document yourself, see how easy it is, and why it helps your signers complete documents faster.
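The tamper-evidence that AdES and QES provide comes from cryptographically binding a signature to the document bytes. Real eIDAS signatures use asymmetric PKI certificates; the stdlib sketch below substitutes an HMAC purely to illustrate how any post-signing change is detected — the key and document here are made up for the example:

```python
import hashlib
import hmac

def seal(document: bytes, key: bytes) -> str:
    # Integrity tag over the document bytes. In a real AdES/QES the signer's
    # private key signs a hash of the document; HMAC stands in for that here.
    return hmac.new(key, document, hashlib.sha256).hexdigest()


key = b"signer-secret"            # hypothetical signing key
original = b"Annual report 2025"  # hypothetical document
tag = seal(original, key)

print(hmac.compare_digest(tag, seal(original, key)))                 # unchanged
print(hmac.compare_digest(tag, seal(original + b" (edited)", key)))  # tampered
```

Because the tag depends on every byte of the document, even a one-character edit after signing produces a mismatch, which is the property that makes denying or altering a signed document impractical.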
Built for even the most complex signing workflows

Whether you work in audit, accounting, real estate, finance, or HR, Penneo makes it easy and secure for your team to handle signatures digitally. The platform is built to automate even the most complex workflows, so you can focus on what really matters.

Audit and accounting: Send engagement letters, auditor's reports, and annual reports for signing in a few clicks.
Real estate: Make property transactions faster and easier by removing the need for physical meetings and paperwork.
Legal: Let your clients sign documents remotely with secure digital signatures that comply with the eIDAS regulation.
Finance and banking: Reduce paperwork and manual processes without compromising a smooth, professional customer experience.
HR and recruitment: Shorten the hiring process by sending employment contracts for digital signing in minutes.

Confidence with a qualified trust service provider

Penneo is approved as a qualified trust service provider (QTSP) and is officially listed on the EU's list of approved providers.
This status confirms that Penneo meets the strict requirements of the eIDAS regulation and is authorized to deliver qualified trust services, ensuring the highest degree of security and legal validity for digital transactions across the EU. Penneo lets you create qualified electronic signatures (QES) via itsme®, Norwegian BankID, .beID, or your passport, as well as advanced electronic signatures with MitID, MitID Erhverv, or Swedish BankID.

DEAS saves 385 hours per month with Penneo

"We have more than 3,000 contracts that require signatures every month. With Penneo's digital signature solution, we have saved 385 working hours per month. As a result, our employees can now focus on serving our customers and tenants better, instead of spending time on manual, administrative tasks." — Thomas B. Skræddergaard, Head of IT at DEAS

Work faster with integrations and an open API

Connect your systems in minutes. With our integrations and open API, you can automate workflows and get more done.

Frequently asked questions

Are e-signatures created via Penneo legally binding? Yes, e-signatures created via Penneo are legally binding. Penneo supports both advanced electronic signatures (AdES) and qualified electronic signatures (QES) in accordance with the eIDAS regulation (EU No. 910/2014).

Does Penneo offer qualified electronic signatures (QES)? Yes, Penneo offers qualified electronic signatures via passport, Norwegian BankID, itsme®, and .beID. These signatures have the same legal validity as a handwritten signature throughout the EU.

Does Penneo offer advanced electronic signatures (AdES)? Yes, Penneo lets you create advanced electronic signatures with MitID, MitID Erhverv, and Swedish BankID. These signatures are uniquely linked to the signer and protect the document against changes.

How do I make sure an e-signature is valid?
You can verify the validity of an electronic signature in several ways: open the document in a PDF reader and use the built-in validation tool; upload the document to Penneo Validator; or upload the document to the European Commission's validation platform.

What is the difference between a simple e-signature and a digital signature? A simple e-signature (SES) can be as simple as typing a name or clicking a button. It is easy, but offers only limited security and legal validity. A digital signature, either advanced or qualified, carries far higher security and legal weight under the eIDAS regulation.

What does Penneo cost? Penneo offers flexible pricing models that take your organization's needs into account. See our pricing and find the solution that fits you.

Learn more about electronic signatures: New to digital signatures? Consider these 9 points first. What does eIDAS 2.0 mean for digital transactions? The eIDAS regulation: electronic identification and trust services in the EU. | 2026-01-13T09:30:35
https://support.microsoft.com/en-au/microsoft-edge/microsoft-edge-browsing-data-and-privacy-bb8174ba-9d73-dcf2-9b4a-c582b4e640dd | Microsoft Edge, browsing data, and privacy - Microsoft Support

Microsoft Edge, browsing data, and privacy

Applies to: Privacy, Microsoft Edge, Windows 10, Windows 11

Microsoft Edge helps you browse, search, shop online, and more.
Like all modern browsers, Microsoft Edge lets you collect and store specific data on your device, like cookies, and lets you send information to us, like browsing history, to make the experience as rich, fast, and personal as possible. Whenever we collect data, we want to make sure it's the right choice for you. Some people worry about their web browsing history being collected. That's why we tell you what data is stored on your device or collected by us. We give you choices to control what data gets collected. For more information about privacy in Microsoft Edge, we recommend reviewing our Privacy Statement . What data is collected or stored, and why Microsoft uses diagnostic data to improve our products and services. We use this data to better understand how our products are performing and where improvements need to be made. Microsoft Edge collects a set of required diagnostic data to keep Microsoft Edge secure, up to date and performing as expected. Microsoft believes in and practices information collection minimization. We strive to gather only the info we need, and to store it only for as long as it's needed to provide a service or for analysis. In addition, you can control whether optional diagnostic data associated with your device is shared with Microsoft to solve product issues and help improve Microsoft products and services. As you use features and services in Microsoft Edge, diagnostic data about how you use those features is sent to Microsoft. Microsoft Edge saves your browsing history—information about websites you visit—on your device. Depending on your settings, this browsing history is sent to Microsoft, which helps us find and fix problems and improve our products and services for all users. You can manage the collection of optional diagnostic data in the browser by selecting Settings and more > Settings > Privacy, search, and services > Privacy and turning on or off Send optional diagnostic data to improve Microsoft products . 
This includes data from testing new experiences. To finish making changes to this setting, restart Microsoft Edge. Turning this setting on allows this optional diagnostic data to be shared with Microsoft from other applications using Microsoft Edge, such as a video streaming app that hosts the Microsoft Edge web platform to stream the video. The Microsoft Edge web platform will send info about how you use the web platform and sites you visit in the application to Microsoft. This data collection is determined by your optional diagnostic data setting in Privacy, search, and services settings in Microsoft Edge. On Windows 10, these settings are determined by your Windows diagnostic setting. To change your diagnostic data setting, select Start > Settings > Privacy > Diagnostics & feedback . As of March 6th 2024, Microsoft Edge diagnostic data is collected separately from Windows diagnostic data on Windows 10 (version 22H2 and newer) and Windows 11 (version 23H2 and newer) devices in the European Economic Area. For these Windows versions, and on all other platforms, you can change your settings in Microsoft Edge by selecting Settings and more > Settings > Privacy, search, and services . In some cases, your diagnostic data settings might be managed by your organization. When you're searching for something, Microsoft Edge can give suggestions about what you're searching for. To turn on this feature, select Settings and more > Settings > Privacy, search, and services > Search and connected experiences > Address bar and search > Search suggestions and filters , and turn on Show me search and site suggestions using my typed characters . As you start to type, the info you enter in the address bar is sent to your default search provider to give you immediate search and website suggestions. 
When you use InPrivate browsing or guest mode , Microsoft Edge collects some info about how you use the browser depending on your Windows diagnostic data setting or Microsoft Edge privacy settings, but automatic suggestions are turned off and info about websites you visit is not collected. Microsoft Edge will delete your browsing history, cookies, and site data, as well as passwords, addresses, and form data when you close all InPrivate windows. You can start a new InPrivate session by selecting Settings and more on a computer or Tabs on a mobile device. Microsoft Edge also has features to help you and your content stay safe online. Windows Defender SmartScreen automatically blocks websites and content downloads that are reported to be malicious. Windows Defender SmartScreen checks the address of the webpage you're visiting against a list of webpage addresses stored on your device that Microsoft believes to be legitimate. Addresses that aren't on your device's list and the addresses of files you're downloading will be sent to Microsoft and checked against a frequently updated list of webpages and downloads that have been reported to Microsoft as unsafe or suspicious. To speed up tedious tasks like filling out forms and entering passwords, Microsoft Edge can save info to help. If you choose to use those features, Microsoft Edge stores the info on your device. If you've turned on sync for form fill like addresses or passwords, this info will be sent to the Microsoft cloud and stored with your Microsoft account to be synced across all your signed-in versions of Microsoft Edge. You can manage this data from Settings and more > Settings > Profiles > Sync . To integrate your browsing experience with other activities you do on your device, Microsoft Edge shares your browsing history with Microsoft Windows through its Indexer. This information is stored locally on the device. 
It includes URLs, a category in which the URL might be relevant, such as "most visited", "recently visited", or "recently closed", and also a relative frequency or recency within each category. Websites you visit while in InPrivate mode will not be shared. This information is then available to other applications on the device, such as the start menu or taskbar. You can manage this feature by selecting Settings and more > Settings > Profiles , and turning on or off Share browsing data with other Windows features . If turned off, any previously shared data will be deleted. To protect some video and music content from being copied, some streaming websites store Digital Rights Management (DRM) data on your device, including a unique identifier (ID) and media licenses. When you go to one of these websites, it retrieves the DRM info to make sure you have permission to use the content. Microsoft Edge also stores cookies, small files that are put on your device as you browse the web. Many websites use cookies to store info about your preferences and settings, like saving the items in your shopping cart so you don't have to add them each time you visit. Some websites also use cookies to collect info about your online activity to show you interest-based advertising. Microsoft Edge gives you options to clear cookies and block websites from saving cookies in the future. Microsoft Edge will send Do Not Track requests to websites when the Send Do Not Track requests setting is turned on. This setting is available at Settings and more > Settings > Privacy, search, and services > Privacy > Send "Do Not Track" Requests. Websites may still track your activities even when a Do Not Track request is sent, however. How to clear data collected or stored by Microsoft Edge To clear browsing info stored on your device, like saved passwords or cookies: In Microsoft Edge, select Settings and more > Settings > Privacy, search, and services > Clear Browsing data . 
Select Choose what to clear next to Clear browsing data now. Under Time range, choose a time range. Select the check box next to each data type you'd like to clear, and then select Clear now. If you'd like, you can select Choose what to clear every time you close the browser and choose which data types should be cleared. Learn more about what gets deleted for each browser history item. To clear browsing history collected by Microsoft: To see your browsing history associated with your account, sign in to your account at account.microsoft.com. In addition, you also have the option of clearing your browsing data that Microsoft has collected using the Microsoft privacy dashboard. To delete your browsing history and other diagnostic data associated with your Windows 10 device, select Start > Settings > Privacy > Diagnostics & feedback, and then select Delete under Delete diagnostic data. To clear browsing history shared with other Microsoft features on the local device: In Microsoft Edge, select Settings and more > Settings > Profiles. Select Share browsing data with other Windows features. Toggle this setting to off. How to manage your privacy settings in Microsoft Edge: To review and customize your privacy settings, select Settings and more > Settings > Privacy, search, and services > Privacy. To learn more about privacy in Microsoft Edge, read the Microsoft Edge privacy whitepaper. | 2026-01-13T09:30:35
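The Edge article above notes that the browser "saves your browsing history—information about websites you visit—on your device." As an illustration of what that on-device storage looks like: Edge is Chromium-based, and Chromium browsers keep history in a SQLite database file named `History` (for Edge on Windows, typically under `%LOCALAPPDATA%\Microsoft\Edge\User Data\Default\`). The path, the `urls` table, and the 1601-epoch timestamp format are assumptions drawn from Edge's Chromium lineage, not details stated in the article. A minimal sketch reading a copy of such a database:

```python
import sqlite3
from datetime import datetime, timedelta, timezone

# Chromium timestamps count microseconds since 1601-01-01 UTC
# (assumed format, inherited from Chromium's history schema).
CHROMIUM_EPOCH = datetime(1601, 1, 1, tzinfo=timezone.utc)

def read_history(db_path):
    """Return (url, title, last_visit) rows from a Chromium-style History DB.

    Work on a copy of the file: the live database is locked while
    the browser is running.
    """
    con = sqlite3.connect(db_path)
    try:
        rows = con.execute(
            "SELECT url, title, last_visit_time FROM urls "
            "ORDER BY last_visit_time DESC"
        ).fetchall()
    finally:
        con.close()
    # Convert the raw microsecond counter into a timezone-aware datetime.
    return [
        (url, title, CHROMIUM_EPOCH + timedelta(microseconds=ts))
        for url, title, ts in rows
    ]
```

This also makes concrete why "Clear browsing data" and the InPrivate guarantees in the article matter: the history is an ordinary local file that any process running as the user could read.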
https://vml.visma.ai/#products | Visma Machine Learning - Automate your processes. Transforming the way people work: Visma Machine Learning dramatically reduces the time spent on routine tasks, and lets our customers focus on the important things. Our products save people many hours of data entry into accounting systems, which directly translates into cost savings for companies and helps them to become paperless. Thousands of companies use our ML, millions of documents are scanned per month, millions of API requests are served per day, and 30+ SaaS products are powered by our machine learning APIs. We simplify and automate accounting workflows. Data entry is one of the big time sinks in business processes. 
With Smartscan we're eliminating most of this work for document handling. With AutoSuggest we're dramatically simplifying the decisions you need to make in the accounting workflow. Leading machine learning solutions Industry leading OCR and Data Capture API for scanning and extracting information from invoices and receipts. Learn more The process toolkit for your workflows and tasks. Simple-to-use prediction engine for scanned invoices and bank transactions. Learn more Minimise the time spent processing documents and doing manual work Easy setup We make machine learning as simple as possible - available through simple JSON APIs. Best in class AI We use state of the art techniques to offer the best coverage on the market. Blazingly fast Our speed is a point of pride. Document scans complete in 1-2 seconds. Faster than most document AI APIs. See what our customers have to say Great Service For any new Dinero customers the machine learning predictions will work right away, and it will be magical. Our customers love it! Alexander Jasper Lead AI Engineer, Dinero Intuitive Implementation The implementation was a breeze thanks to your documentation. Marco Hokke Software Developer, Visma Raet Efficient Performance If we want to increase our customers automation level, that is where ML Assets become extremely important. Oliver Storm-Pallesen Product Line Manager, e-conomic Questions? Are you using a Visma product but you are missing Autosuggest or Smartscan capabilities? Talk to your sales contact, if you have one, and ask for Autosuggest or Smartscan. Email us at worksmarter@visma.com to let us know which product you have and we'll get things rolling. Contact us | 2026-01-13T09:30:35 |
https://www.visma.com/voiceofvisma/episode-7-oystein-moan | Ep 07: The untold stories of Visma with Øystein Moan. Voice of Visma podcast, June 26, 2024. Available on Spotify, YouTube, Apple Podcasts, and Amazon Music. 
About the episode What did Visma look like in its early days? Are there any decisions our former CEO would have made differently? Today, Øystein, our former CEO and current Executive Chairman, joins Johan to reflect on the pivotal moments, the mistakes-turned-lessons, and the surprising stories that have shaped Visma into what it is today. More from Voice of Visma We're sitting down with leaders and colleagues from around Visma to share their stories, industry knowledge, and valuable career lessons. With the Voice of Visma podcast, we’re bringing our people and culture closer to you. Get to know the podcast Ep 22: Building, learning, and accelerating growth in the SaaS world with Maxin Schneider Entrepreneurial leadership often grows through experience, and Maxin Schneider has seen that up close. Read more Ep 21: How DEI fuels business success with Iveta Bukane Why DEI isn't just a moral imperative—it’s a business necessity. Read more Ep 20: Driving tangible sustainability outcomes with Freja Landewall Discover how ESG goes far beyond the environment, encompassing people, governance, and the long-term resilience of business. Read more Ep 19: Future-proofing public services in Sweden with Marie Ceder Between demographic changes, the rise in AI, and digitalisation, the public sector is at a pivotal moment. Read more Ep 18: Making inclusion part of our everyday work with Ida Algotsson What does inclusion truly mean at Visma – not just as values, but as everyday actions?
Read more Ep 17: Sustainability at the heart of business with Robin Åkerberg Honouring our responsibility goes well beyond the numbers – it starts with a shared purpose and values. Read more Ep 16: Innovation for the public good with Kasper Lyhr Serving the public sector goes way beyond software – it’s about shaping the future of society as a whole. Read more Ep 15: Leading with transparency and vulnerability with Ellen Sano What does it mean to be a “firestarter” in business? Read more Ep 14: Women, innovation, and the future of Visma with Merete Hverven Our CEO, Merete, knows that great leadership takes more than just hard work – it takes vision. Read more Ep 13: Building partnerships beyond software with Daniel Ognøy Kaspersen What does it look like when an accounting software company delivers more than just great software? Read more Ep 12: AI in the accounting sphere with Joris Joppe Artificial intelligence is changing industries across the board, and accounting is no exception. But in such a highly specialised field, what does change actually look like? Read more Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment. Read more Ep 10: When brave choices can save a company with Charlotte von Sydow What’s it like stepping in as the Managing Director for a company in decline? Read more Ep 09: Revolutionising tax tech in Italy with Enrico Mattiazzi and Vito Lomele Take one look at their product, their customer reviews, or their workplace awards, and it’s clear why Fiscozen leads Italy’s tax tech scene. Read more Ep 08: Navigating the waters of entrepreneurship with Steffen Torp When it comes to being an entrepreneur, the journey is as personal as it is unpredictable. Read more Ep 07: The untold stories of Visma with Øystein Moan What did Visma look like in its early days? 
Are there any decisions our former CEO would have made differently? Read more Ep 06: Measure what matters: Employee engagement with Vibeke Müller Research shows that having engaged, happy employees is so important for building a great company culture and performing better financially. Read more Ep 05: Our Team Visma | Lease a Bike sponsorship with Anne-Grethe Thomle Karlsen It’s one thing to sponsor the world’s best cycling team; it’s a whole other thing to provide software and expertise that helps them do what they do best. Read more Ep 04: “How do you make people care about security?” with Joakim Tauren With over 700 applications across the Visma Group (and counting!), cybersecurity is make-or-break for us. Read more Ep 03: The human side of enterprise with Yvette Hoogewerf As a software company, our products are central to our business… but that’s only one part of the equation. Read more Ep 02: From Management Trainee to CFO with Stian Grindheim How does someone work their way up from Management Trainee to CFO by the age of 30? And balance fatherhood alongside it all? Read more Ep 01: An optimistic look at the future of AI with Jacob Nyman We’re all-too familiar with the fears surrounding artificial intelligence. So today, Jacob and Johan are flipping the script. Read more (Trailer) Introducing: Voice of Visma These are the stories that shape us... and the reason Visma is unlike anywhere else. 
Read more Visma Software International AS Organisation number: 980858073 MVA (Foretaksregisteret/The Register of Business Enterprises) Main office Karenslyst allé 56 0277 Oslo Norway Postal address PO box 733, Skøyen 0214 Oslo Norway visma@visma.com | 2026-01-13T09:30:35 |
https://support.microsoft.com/lv-lv/microsoft-edge/microsoft-edge-p%C4%81rl%C5%ABko%C5%A1anas-dati-un-personas-datu-aizsardz%C4%ABba-bb8174ba-9d73-dcf2-9b4a-c582b4e640dd | Microsoft Edge, browsing data, and privacy - Microsoft Support Applies to: Privacy, Microsoft Edge, Windows 10, Windows 11. Microsoft Edge helps you browse, search, shop online, and more.
Like all modern browsers, Microsoft Edge collects and stores certain data on your device, such as cookies, and lets you send information to us, such as browsing history, to make the experience as rich, fast, and personal as possible. Whenever we collect data, we want to make sure it's the right choice for you. Some users worry about their browsing history being collected, so we tell you what data is stored on your device or collected by us, and we give you choices to manage what data is collected. For more information about privacy in Microsoft Edge, we recommend reviewing our privacy statement. What data is collected or stored, and why. Microsoft uses diagnostic data to improve our products and services. We use this data to better understand how our products perform and what improvements are needed. Microsoft Edge collects a required set of diagnostic data to keep Microsoft Edge secure, up to date, and working as expected. Microsoft follows a minimal data collection practice: we strive to collect only the information we need, and to keep it only as long as needed to provide a service or perform analysis. In addition, you can control whether optional diagnostic data associated with your device is shared with Microsoft to troubleshoot product issues and help improve Microsoft products and services. When you use features and services in Microsoft Edge, diagnostic data about how you use those features is sent to Microsoft. Microsoft Edge stores your browsing history — information about the websites you visit — on your device. Depending on your settings, this browsing history is sent to Microsoft. This helps us find and fix problems and improve the experience of our products and services for everyone. To manage the collection of optional diagnostic data in the browser, select Settings and more > Settings > Privacy, search, and services > Privacy, and turn Send optional diagnostic data about how you use the browser on or off. This also applies to data from testing of new functionality. Restart Microsoft Edge to finish changing this setting. Turning this setting on lets you share this optional diagnostic data with Microsoft from other applications that use Microsoft Edge — for example, a video streaming app that hosts the Microsoft Edge web platform to stream video. The Microsoft Edge web platform will send Microsoft information about how you use the web platform and the sites you visit in that application. This data collection is governed by your optional diagnostic data setting under Privacy, search, and services in Microsoft Edge. On Windows 10, these settings are governed by your Windows diagnostic data setting. To change the diagnostic data setting, select Start > Settings > Privacy > Diagnostics & feedback. As of March 6, 2024, Microsoft Edge diagnostic data is collected separately from Windows diagnostic data on Windows 10 (version 22H2 and later) and Windows 11 (version 23H2 and later) devices in the European Economic Area. On these Windows versions and on all other platforms, you can change the settings in Microsoft Edge by selecting Settings and more > Settings > Privacy, search, and services. In some cases, your diagnostic data settings may be managed by your organization. When you search for something, Microsoft Edge can give you suggestions for what you're searching for. To turn this feature on, select Settings and more > Settings > Privacy, search, and services > Address bar and search > Search suggestions and filters, and turn on Show me search and site suggestions using my typed characters. As you type, what you enter in the address bar is sent to your default search provider to give you instant search and website suggestions. If you use InPrivate browsing or Guest mode, Microsoft Edge collects information about how you use the browser according to your Windows diagnostic data setting or your Microsoft Edge privacy settings, but automatic suggestions are turned off and information about the websites you visit is not collected. When you close all InPrivate windows, Microsoft Edge deletes your browsing history, cookies, and site data, as well as passwords, addresses, and form data. You can start a new InPrivate session by selecting Settings and more on desktop, or the tabs menu on mobile. Microsoft Edge also includes features that keep you and your content safe online. Windows Defender SmartScreen automatically blocks websites and content downloads that are reported to be malicious. Windows Defender SmartScreen checks the address of the web page you're visiting against a list of web page addresses stored on your device that Microsoft believes are legitimate. Addresses that are not on your device's list, and the addresses of files you download, are sent to Microsoft and checked against a frequently updated list of web pages and downloads that have been reported to Microsoft as unsafe or suspicious. To speed up tedious tasks like filling out forms and entering passwords, Microsoft Edge can save information to help. If you choose to use these features, Microsoft Edge stores the information on your device. If you have turned on sync for form-fill data such as addresses or passwords, this information is sent to the Microsoft cloud and stored with your Microsoft account so it syncs across all versions of Microsoft Edge where you're signed in. You can manage this data under Settings and more > Settings > Profiles > Sync. To integrate your browsing experience with other things you do on your device, Microsoft Edge shares your browsing history with Microsoft Windows through its indexer. This information is stored locally on the device. It includes URLs, the category a URL may be associated with — such as "most visited", "recently visited", or "recently closed" — and the relative frequency or recency in each category. Websites you visit in InPrivate mode are not shared. This information is then available to other applications on the device, such as the Start menu or the taskbar. You can manage this feature by selecting Settings and more > Settings > Profiles and turning Share browsing data with other Windows features on or off. If this feature is turned off, all previously shared data is deleted. To protect some video and music content from being copied, some streaming websites store digital rights management (DRM) data on your device, including a unique identifier (ID) and media licenses. When you visit one of these websites, it retrieves the DRM information to verify that you have permission to use the content. Microsoft Edge also stores cookies — small files placed on your device as you browse the web. Many websites use cookies to store information about your preferences and settings, for example saving the items in your shopping cart so you don't have to add them again on every visit. Some websites also use cookies to collect information about your online activity in order to show you interest-based ads. Microsoft Edge gives you the option to clear cookies and to prevent websites from storing cookies in the future. Microsoft Edge will send Do Not Track requests to websites if the Send "Do Not Track" requests setting is turned on. This setting is available under Settings and more > Settings > Privacy, search, and services > Privacy > Send "Do Not Track" requests. However, websites may still track your activity even when a Do Not Track request is sent. To clear data collected or stored by Microsoft Edge, follow the steps below. To clear browsing information stored on your device, such as saved passwords or cookies: In Microsoft Edge, select Settings and more > Settings > Privacy, search, and services > Clear browsing data. Select Choose what to clear next to Clear browsing data now. Under Time range, choose a time interval. Select the checkbox next to each type of data you want to clear, and then select Clear now. If you prefer, you can select Choose what to clear every time you close the browser and pick which data types to clear. Learn more about what is deleted for each browsing history item. Clearing browsing history collected by Microsoft: To view the browsing history associated with your account, sign in to your account at any time at account.microsoft.com. You can also clear the browsing data Microsoft has collected by using the Microsoft privacy dashboard. To delete browsing history and other diagnostic data associated with your Windows 10 device, select Start > Settings > Privacy > Diagnostics & feedback, and then under Delete diagnostic data, select Delete. To clear browsing history that is shared with other Microsoft features on your local device: In Microsoft Edge, select Settings and more > Settings > Profiles. Select Share browsing data with other Windows features. Turn the setting off. How to manage privacy settings in Microsoft Edge: To review and adjust your privacy settings, select Settings and more > Settings > Privacy, search, and services > Privacy. For more information about privacy in Microsoft Edge, read the Microsoft Edge privacy whitepaper.
| 2026-01-13T09:30:35 |
https://l.facebook.com/l.php?u=https%3A%2F%2Fwww.instagram.com%2F&h=AT2o3j0htiRrhWcq-f5-lF3PXfFtpsB2CE711w5eihcC-QfzXige1rj3PIY1kBZIisiFfNiolIriYFaJo9slqDdSswE52nn2bRTx50aOsnsa70IxC-49eMby2nvEEgcNLk7EajGTeH0L11vj | Facebook Temporarily blocked It looks like you were using this feature too fast. You've been temporarily blocked from using it. | 2026-01-13T09:30:35 |
https://l.facebook.com/l.php?u=https%3A%2F%2Fwww.instagram.com%2F&h=AT0ARKrJOqJnjQWRq_nGLiJOxqXRYO7uV3KRlWNS-oS-sUckc5N8JnISK66h9WnqzCQWurwMSsUwn-KQYop_FPfhFQNyvpN_dpxeXaS8YEtsHP5Do_BZYLgWSGOk6krcMGbQG8PVhq0rf2Pj | Facebook Temporarily blocked It looks like you were using this feature too fast. You've been temporarily blocked from using it. | 2026-01-13T09:30:35 |
https://www.visma.com/voiceofvisma/ep-19-future-proofing-public-services-in-sweden-with-marie-ceder | Ep 19: Future-proofing public services in Sweden with Marie Ceder Podcast Voice of Visma July 2, 2025
About the episode Between demographic changes, the rise in AI, and digitalisation, the public sector is at a pivotal moment. In this episode, Marie Ceder, Managing Director of Publitech, joins host Linda Emmery to talk about how secure, mission-critical SaaS solutions are changing the game. And what it means to be a trusted tech partner in a rapidly evolving landscape.
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront Documentation Amazon CloudFront Developer Guide Get started with CloudFront The topics in this section show you how to get started delivering your content with Amazon CloudFront. The Set up your AWS account topic describes prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows you how to set up origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows you how to create a secure static website for your domain name using OAC with an Amazon S3 origin. The tutorial uses an Amazon CloudFront (CloudFront) template for configuration and deployment. Topics Set up your AWS account Get started with a CloudFront standard distribution Get started with a standard distribution (AWS CLI) Get started with a secure static website | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/ko_kr/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront (Korean localization). Amazon CloudFront Developer Guide. The topics in this section show you how to get started delivering your content with Amazon CloudFront. The Set up your AWS account topic describes prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows you how to set up origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows you how to create a secure static website for your domain name using OAC with an Amazon S3 origin. The tutorial uses an Amazon CloudFront (CloudFront) template for configuration and deployment. Topics: Set up your AWS account; Get started with a CloudFront standard distribution; Get started with a standard distribution (AWS CLI); Get started with a secure static website. | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/de_de/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront (German localization). Amazon CloudFront Developer Guide. The page notes: this translation was machine-generated; in the case of a conflict or discrepancy between this translated version and the English version (including as a result of translation delays), the English version prevails. The topics in this section show you how to get started delivering your content with Amazon CloudFront. The Set up your AWS account topic describes prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows you how to set up origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows you how to create a secure static website for your domain name using OAC with an Amazon S3 origin. The tutorial uses an Amazon CloudFront (CloudFront) template for configuration and deployment. Topics: Set up your AWS account; Get started with a CloudFront standard distribution; Get started with a standard distribution (AWS CLI); Get started with a secure static website. | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/sdk-general-information-section.html | Using CloudFront with an AWS SDK - Amazon CloudFront. Amazon CloudFront Developer Guide. AWS software development kits (SDKs) are available for many popular programming languages. Each SDK provides an API, code examples, and documentation that make it easier for developers to build applications in their preferred language. SDKs with documentation and code examples: AWS SDK for C++; AWS CLI; AWS SDK for Go; AWS SDK for Java; AWS SDK for JavaScript; AWS SDK for Kotlin; AWS SDK for .NET; AWS SDK for PHP; AWS Tools for PowerShell; AWS SDK for Python (Boto3); AWS SDK for Ruby; AWS SDK for Rust; AWS SDK for SAP ABAP; AWS SDK for Swift. Example availability: Can't find what you need? Request a code example by using the Provide feedback link at the bottom of the page. | 2026-01-13T09:30:35 |
https://wiki.php.net/vcs/gitworkflow?do=backlink | PHP: vcs:gitworkflow - Backlinks. This is a list of pages that seem to link back to the current page: pear:git; start; systems:vcs; vcs:gitfaq. vcs/gitworkflow.txt · Last modified: 2025/08/06 10:03 by derick | 2026-01-13T09:30:35 |
https://wiki.php.net/vcs/gitworkflow?do=login#dokuwiki__top | PHP: Log In (DokuWiki login form for vcs:gitworkflow). vcs/gitworkflow.txt · Last modified: 2025/08/06 10:03 by derick | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/get-started-cli-tutorial.html | Get started with a standard distribution (AWS CLI) - Amazon CloudFront. Amazon CloudFront Developer Guide. The procedures in this section show you how to use the AWS CLI with CloudFront to set up a basic configuration that involves the following: Creating an Amazon S3 bucket to use as your distribution origin. Storing the original versions of your objects in the S3 bucket. Using origin access control (OAC) to send authenticated requests to your Amazon S3 origin. OAC sends requests through CloudFront to prevent viewers from accessing your S3 bucket directly. For more information about OAC, see Restrict access to an Amazon S3 origin. Using the CloudFront domain name in URLs for your objects (for example, https://d111111abcdef8.cloudfront.net/index.html). Keeping your objects in CloudFront edge locations for the default duration of 24 hours (the minimum duration is 0 seconds). Most of these options are customizable. For information about how to customize your CloudFront distribution options, see Create a distribution. Prerequisites: Before you begin, make sure that you've completed the steps in Set up your AWS account. Install the AWS CLI and configure it with your credentials. For more information, see Getting started with the AWS CLI in the AWS CLI User Guide. Create an Amazon S3 bucket: An Amazon S3 bucket is a container for files (objects) or folders. CloudFront can distribute almost any type of file for you when an S3 bucket is the source.
For example, CloudFront can distribute text, images, and videos. There is no maximum for the amount of data that you can store on Amazon S3. For this tutorial, you create an S3 bucket and upload an HTML file that you will use to create a basic webpage. aws s3 mb s3://amzn-s3-demo-bucket --region us-east-1 Replace amzn-s3-demo-bucket with a globally unique bucket name. For the AWS Region, we recommend choosing a Region that is geographically close to you. This reduces latency and costs, but choosing a different Region works, too. For example, you might do this to address regulatory requirements. Upload the content to the bucket: For this tutorial, download and extract the sample content files for a basic "Hello World" webpage. # Create a temporary directory mkdir -p ~/cloudfront-demo # Download the sample Hello World files curl -o ~/cloudfront-demo/hello-world-html.zip https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/samples/hello-world-html.zip # Extract the zip file unzip ~/cloudfront-demo/hello-world-html.zip -d ~/cloudfront-demo/hello-world This creates a directory with an index.html file and a css folder. Upload these files to your S3 bucket. aws s3 cp ~/cloudfront-demo/hello-world/ s3://amzn-s3-demo-bucket/ --recursive Create an Origin Access Control (OAC): For this tutorial, you will create an origin access control (OAC). OAC helps you securely send authenticated requests to your Amazon S3 origin. For more information about OAC, see Restrict access to an Amazon S3 origin. aws cloudfront create-origin-access-control \ --origin-access-control-config Name="oac-for-s3",SigningProtocol=sigv4,SigningBehavior=always,OriginAccessControlOriginType=s3 Save the OAC ID from the output as an environment variable. Replace the example value with your own OAC ID. You will use this in the next step. OAC_ID="E1ABCD2EFGHIJ" Create a standard distribution: Create a distribution configuration file named distribution-config.json.
Replace the example bucket name with your bucket name for the Id, DomainName, and TargetOriginId values. cat > distribution-config.json << EOF { "CallerReference": "cli-example-$(date +%s)", "Origins": { "Quantity": 1, "Items": [ { "Id": "S3-amzn-s3-demo-bucket", "DomainName": "amzn-s3-demo-bucket.s3.amazonaws.com", "S3OriginConfig": { "OriginAccessIdentity": "" }, "OriginAccessControlId": "$OAC_ID" } ] }, "DefaultCacheBehavior": { "TargetOriginId": "S3-amzn-s3-demo-bucket", "ViewerProtocolPolicy": "redirect-to-https", "AllowedMethods": { "Quantity": 2, "Items": ["GET", "HEAD"], "CachedMethods": { "Quantity": 2, "Items": ["GET", "HEAD"] } }, "DefaultTTL": 86400, "MinTTL": 0, "MaxTTL": 31536000, "Compress": true, "ForwardedValues": { "QueryString": false, "Cookies": { "Forward": "none" } } }, "Comment": "CloudFront distribution for S3 bucket", "Enabled": true } EOF Create the standard distribution. aws cloudfront create-distribution --distribution-config file://distribution-config.json Save the distribution ID and domain name from the output as environment variables. Replace the example values with your own. You'll use these later in this tutorial. DISTRIBUTION_ID="EABCD1234XMPL" DOMAIN_NAME="d111111abcdef8.cloudfront.net" Before using the distribution and S3 bucket from this tutorial in a production environment, make sure to configure it to meet your specific needs. For information about configuring access in a production environment, see Configure secure access and restrict access to content. Update your S3 bucket policy: Update your S3 bucket policy to allow CloudFront to access the objects. Replace the example bucket name with your bucket name.
# Get your AWS account ID ACCOUNT_ID=$(aws sts get-caller-identity --query 'Account' --output text) # Create the bucket policy cat > bucket-policy.json << EOF { "Version": "2012-10-17", "Statement": [ { "Sid": "AllowCloudFrontServicePrincipal", "Effect": "Allow", "Principal": { "Service": "cloudfront.amazonaws.com" }, "Action": "s3:GetObject", "Resource": "arn:aws:s3:::amzn-s3-demo-bucket/*", "Condition": { "StringEquals": { "AWS:SourceArn": "arn:aws:cloudfront::$ACCOUNT_ID:distribution/$DISTRIBUTION_ID" } } } ] } EOF # Apply the bucket policy aws s3api put-bucket-policy \ --bucket amzn-s3-demo-bucket \ --policy file://bucket-policy.json Confirm distribution deployment: After you create your distribution, it will take some time to finish deploying. When the distribution status changes from InProgress to Deployed, proceed to the next step. aws cloudfront get-distribution --id $DISTRIBUTION_ID --query 'Distribution.Status' Alternatively, you can use the wait command to wait for distribution deployment. aws cloudfront wait distribution-deployed --id $DISTRIBUTION_ID Access your content through CloudFront: To access your content through CloudFront, combine the domain name for your CloudFront distribution with the main page for your content. Replace the example CloudFront domain name with your own. https://d111111abcdef8.cloudfront.net/index.html If you followed the previous steps and created the HTML file, you should see a webpage that says Hello world!. When you upload more content to this S3 bucket, you can access the content through CloudFront by combining the CloudFront distribution domain name with the path to the object in the S3 bucket. For example, if you upload a new file named new-page.html to the root of your S3 bucket, the URL looks like this: https://d111111abcdef8.cloudfront.net/new-page.html. Clean up: If you created your distribution and S3 bucket only as a learning exercise, delete them so that you no longer accrue charges.
Disable and delete the distribution first. To disable and delete a standard distribution (AWS CLI): First, disable the distribution. # Get the current configuration and ETag ETAG=$(aws cloudfront get-distribution-config --id $DISTRIBUTION_ID --query 'ETag' --output text) # Create a modified configuration with Enabled=false aws cloudfront get-distribution-config --id $DISTRIBUTION_ID | \ jq '.DistributionConfig.Enabled = false' > temp_disabled_config.json # Update the distribution to disable it aws cloudfront update-distribution \ --id $DISTRIBUTION_ID \ --distribution-config file://<(jq '.DistributionConfig' temp_disabled_config.json) \ --if-match $ETAG Wait for the distribution to be disabled. aws cloudfront wait distribution-deployed --id $DISTRIBUTION_ID Delete the distribution. # Get the current ETag ETAG=$(aws cloudfront get-distribution-config --id $DISTRIBUTION_ID --query 'ETag' --output text) # Delete the distribution aws cloudfront delete-distribution --id $DISTRIBUTION_ID --if-match $ETAG To delete an S3 bucket (AWS CLI): Delete the S3 bucket and its contents. Replace the example bucket name with your own. # Delete the bucket contents aws s3 rm s3://amzn-s3-demo-bucket --recursive # Delete the bucket aws s3 rb s3://amzn-s3-demo-bucket To clean up the local files created for this tutorial, run the following commands: # Clean up local files rm -f distribution-config.json bucket-policy.json temp_disabled_config.json rm -rf ~/cloudfront-demo Optionally, you can delete the OAC that you created for this tutorial. # Get the OAC ETag OAC_ETAG=$(aws cloudfront get-origin-access-control --id $OAC_ID --query 'ETag' --output text) # Delete the OAC aws cloudfront delete-origin-access-control --id $OAC_ID --if-match $OAC_ETAG
| 2026-01-13T09:30:35 |
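The tutorial builds distribution-config.json with a shell heredoc; the same structure can be generated programmatically. A minimal Python sketch of that config builder follows — the function name and the example bucket, OAC, and caller-reference values are illustrative assumptions, not part of the AWS tutorial:

```python
import json

def build_distribution_config(bucket: str, oac_id: str, caller_ref: str) -> dict:
    """Build a CloudFront distribution config mirroring the tutorial's
    distribution-config.json (an S3 origin with OAC, GET/HEAD only,
    HTTPS redirect, 24-hour default TTL)."""
    origin_id = f"S3-{bucket}"
    return {
        "CallerReference": caller_ref,
        "Origins": {
            "Quantity": 1,
            "Items": [{
                "Id": origin_id,
                "DomainName": f"{bucket}.s3.amazonaws.com",
                # OAC replaces the legacy origin access identity, so this is empty.
                "S3OriginConfig": {"OriginAccessIdentity": ""},
                "OriginAccessControlId": oac_id,
            }],
        },
        "DefaultCacheBehavior": {
            "TargetOriginId": origin_id,
            "ViewerProtocolPolicy": "redirect-to-https",
            "AllowedMethods": {
                "Quantity": 2, "Items": ["GET", "HEAD"],
                "CachedMethods": {"Quantity": 2, "Items": ["GET", "HEAD"]},
            },
            "DefaultTTL": 86400, "MinTTL": 0, "MaxTTL": 31536000,
            "Compress": True,
            "ForwardedValues": {"QueryString": False, "Cookies": {"Forward": "none"}},
        },
        "Comment": "CloudFront distribution for S3 bucket",
        "Enabled": True,
    }

if __name__ == "__main__":
    cfg = build_distribution_config("amzn-s3-demo-bucket", "E1ABCD2EFGHIJ", "cli-example-1")
    # Write the same file the tutorial creates with cat << EOF.
    with open("distribution-config.json", "w") as f:
        json.dump(cfg, f, indent=2)
```

The resulting file can be passed to `aws cloudfront create-distribution --distribution-config file://distribution-config.json` exactly as in the tutorial.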
https://docs.aws.amazon.com/de_de/AmazonS3/latest/userguide/bucketnamingrules.html | General purpose bucket naming rules - Amazon Simple Storage Service (German localization). Amazon Simple Storage Service (S3) User Guide. When you create a general purpose bucket, pay attention to the length, valid characters, formatting, and uniqueness of the bucket name. The following sections provide information about naming general purpose buckets, including naming rules, best practices, and an example of creating a general purpose bucket with a name that contains a globally unique identifier (GUID). For information about object key names, see Creating object key names. To create a general purpose bucket, see Creating a general purpose bucket. Topics: General purpose bucket naming rules; Example general purpose bucket names; Best practices; Creating a bucket that uses a GUID in the bucket name. General purpose bucket naming rules: The following rules apply to naming general purpose buckets. Bucket names must be between 3 (min) and 63 (max) characters long. Bucket names can consist only of lowercase letters, numbers, dots (.), and hyphens (-). Bucket names must begin and end with a letter or number. Bucket names must not contain two adjacent periods (..). Bucket names must not be formatted as an IP address (for example, 192.168.5.4). Bucket names must not start with the prefix xn--. Bucket names must not start with the prefix sthree-. Bucket names must not start with the prefix amzn-s3-demo-. Bucket names must not end with the suffix -s3alias; this suffix is reserved for access point alias names (see Access point aliases). Bucket names must not end with the suffix --ol-s3; this suffix is reserved for Object Lambda access point alias names (see How to use a bucket-style alias for your S3 bucket's Object Lambda access point). Bucket names must not end with the suffix .mrap; this suffix is reserved for Multi-Region Access Point names (see Rules for naming Amazon S3 Multi-Region Access Points). Bucket names must not end with the suffix --x-s3; this suffix is reserved for directory buckets (see Directory bucket naming rules). Bucket names must not end with the suffix --table-s3; this suffix is reserved for S3 Tables buckets (see Naming rules for table buckets, tables, and namespaces in Amazon S3). Buckets used with Amazon S3 Transfer Acceleration can't have dots (.) in their names; for more information about Transfer Acceleration, see Configuring fast, secure file transfers with Amazon S3 Transfer Acceleration. Important: Bucket names must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions; AWS currently has three partitions: aws (commercial Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions). A bucket name cannot be used by another AWS account in the same partition until the bucket is deleted. After a bucket is deleted, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and might therefore receive requests intended for the deleted bucket. If you want to continue using the same bucket name, don't delete the bucket; we recommend emptying and keeping the bucket and instead blocking all bucket requests as needed. For buckets that are no longer actively used, we recommend emptying the bucket of all objects to minimize costs while keeping the bucket itself. When you create a general purpose bucket, you choose its name and the AWS Region to create it in; after a general purpose bucket is created, its name and Region can't be changed. Don't include sensitive information in the bucket name, because the bucket name is visible in the URLs that point to the objects in the bucket. Note: Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names up to 255 characters long and could include uppercase letters and underscores. Beginning March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules that apply in all other Regions. Example general purpose bucket names: The following bucket names are examples of which characters are allowed in general purpose bucket names: a-z, 0-9, and hyphens (-). The reserved prefix amzn-s3-demo- is used here for illustration only; because it is a reserved prefix, you can't create bucket names that begin with amzn-s3-demo-. amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111 amzn-s3-demo-bucket The following example bucket names are valid but not recommended for uses other than static website hosting (they contain dots): example.com www.example.com my.example.s3.bucket The following example bucket names are invalid: amzn_s3_demo_bucket (contains underscores) AmznS3DemoBucket (contains uppercase letters) amzn-s3-demo-bucket- (begins with the amzn-s3-demo- prefix and ends with a hyphen) example..com (contains two periods in a row) 192.168.5.4 (formatted like an IP address) Best practices: When naming your general purpose buckets, keep the following bucket naming best practices in mind. Choose a bucket naming scheme that is unlikely to cause naming conflicts: If your application automatically creates buckets, choose a naming scheme that is unlikely to cause naming conflicts, and make sure your application logic chooses a different bucket name if a bucket name is already taken. Append globally unique identifiers (GUIDs) to bucket names: We recommend creating bucket names that are not predictable. Don't write code that assumes the bucket name you chose is available unless you have already created the bucket. One method for creating unpredictable bucket names is to append a globally unique identifier (GUID) to the bucket name, for example amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111. For more information, see Creating a bucket that uses a GUID in the bucket name. Avoid using dots (.) in bucket names: For best compatibility, we recommend avoiding dots (.) in bucket names, except for buckets used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-host-style addressing over HTTPS unless you perform your own certificate validation, because the security certificates used for virtual hosting of buckets don't work for buckets with dots in their names. This limitation doesn't apply to buckets used for static website hosting, because static website hosting is available only over HTTP. For more information about virtual-host-style addressing, see Virtual hosting of general purpose buckets; for more information about static website hosting, see Hosting a static website using Amazon S3. Choose a relevant name: When you name a bucket, we recommend choosing a name that is relevant to you or your business, and avoiding names associated with other entities; for example, avoid using AWS or Amazon in your bucket name. Don't delete buckets in order to reuse bucket names: When a bucket is empty, you can delete it, and after a bucket is deleted its name becomes available for reuse. However, there is no guarantee that you can reuse the name right away, or at all: some time might pass before the name can be reused, and another AWS account could create a bucket with the same name before you can reuse it. After a general purpose bucket is deleted, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and might therefore receive requests intended for the deleted general purpose bucket. If you want to prevent this, or if you want to continue using the same general purpose bucket name, don't delete the general purpose bucket; we recommend emptying and keeping the bucket and instead blocking all bucket requests as needed. Creating a bucket that uses a GUID in the bucket name: The following examples show how to create a general purpose bucket that uses a GUID at the end of the bucket name. The following AWS CLI example creates a general purpose bucket in the US West (N. California) Region (us-west-1) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the user input placeholders with your own information. aws s3api create-bucket \ --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \ --region us-west-1 \ --create-bucket-configuration LocationConstraint=us-west-1 The following example shows how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region (us-east-1) using the AWS SDK for Java. To use this example, replace the user input placeholders with your own information. For more information about using other AWS SDKs, see Tools for AWS. import com.amazonaws.regions.Regions; import com.amazonaws.services.s3.AmazonS3; import com.amazonaws.services.s3.AmazonS3ClientBuilder; import com.amazonaws.services.s3.model.CreateBucketRequest; import java.util.UUID; public class CreateBucketWithUUID { public static void main(String[] args) { final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build(); String bucketName = "amzn-s3-demo-bucket" + UUID.randomUUID().toString().replace("-", ""); CreateBucketRequest createRequest = new CreateBucketRequest(bucketName); System.out.println(bucketName); s3.createBucket(createRequest); } } | 2026-01-13T09:30:35 |
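The naming rules above can be checked mechanically before calling `create-bucket`. A minimal Python sketch of such a validator — covering the length, character-class, adjacent-period, IP-address, and reserved prefix/suffix rules listed on the page; it is an illustration, not an official AWS tool:

```python
import re

# Reserved prefixes and suffixes taken from the naming rules above.
RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap", "--x-s3", "--table-s3")

def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` satisfies the documented general purpose
    bucket naming rules."""
    # 3-63 characters long.
    if not 3 <= len(name) <= 63:
        return False
    # Lowercase letters, digits, dots, and hyphens only;
    # must begin and end with a letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # No two adjacent periods.
    if ".." in name:
        return False
    # Must not look like an IP address.
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False
    # Reserved prefixes and suffixes.
    if name.startswith(RESERVED_PREFIXES) or name.endswith(RESERVED_SUFFIXES):
        return False
    return True
```

Run against the page's own examples, the invalid names (`amzn_s3_demo_bucket`, `AmznS3DemoBucket`, `example..com`, `192.168.5.4`) are rejected, while `example.com` passes the rules even though dots are discouraged outside static website hosting.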
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/lambda-generating-http-responses.html#lambda-generating-http-responses-in-requests | Trabajo con solicitudes y respuestas - Amazon CloudFront Trabajo con solicitudes y respuestas - Amazon CloudFront Documentación Amazon CloudFront Guía para desarrolladores Uso de funciones con conmutación por error de origen Generación de respuestas HTTP en los desencadenadores de solicitud Actualización de respuestas HTTP en desencadenadores de respuesta de origen Acceso al cuerpo de la solicitud con la opción Incluir cuerpo Trabajo con solicitudes y respuestas Para utilizar las solicitudes y respuestas de Lambda@Edge, consulte los temas siguientes: Temas Uso de funciones de Lambda@Edge con conmutación por error de origen Generación de respuestas HTTP en los desencadenadores de solicitud Actualización de respuestas HTTP en desencadenadores de respuesta de origen Acceso al cuerpo de la solicitud con la opción Incluir cuerpo Uso de funciones de Lambda@Edge con conmutación por error de origen Puede utilizar las funciones de Lambda@Edge con distribuciones de CloudFront que ha configurado con grupos de origen, por ejemplo, para conmutación por error de origen que configure para ayudar a garantizar una alta disponibilidad. Para utilizar una función de Lambda con un grupo de origen, especifique la función en una solicitud de origen o un desencadenador de respuesta de origen para un grupo de origen al crear el comportamiento de la caché. 
Para obtener más información, consulte los siguientes temas: Creación de grupos de origen: Creación de un grupo de origen Cómo utilizar la conmutación por error de origen con Lambda@Edge: Utilizar la conmutación por error de origen con funciones de Lambda@Edge Generación de respuestas HTTP en los desencadenadores de solicitud Cuando CloudFront recibe una solicitud, es posible utilizar una función de Lambda para generar una respuesta HTTP que CloudFront devuelve directamente al lector sin enviarla al origen. La generación de respuestas HTTP reduce la carga en el origen, y normalmente también reduce la latencia para el espectador. Entre las situaciones más comunes para generar respuestas HTTP se incluyen las siguientes: Devolver una pequeña página web al lector Devolver un código de estado HTTP 301 o 302 para redirigir al usuario a otra página web Devolución de un código de estado HTTP 401 al espectador si el usuario no se ha autenticado Una función de Lambda@Edge puede generar una respuesta HTTP cuando ocurren los siguientes eventos de CloudFront: Eventos de solicitud del espectador Cuando un evento de solicitud del lector activa una función, CloudFront devuelve la respuesta al lector y no la almacena en caché. Eventos de solicitud al origen Cuando un evento de solicitud al origen activa una función, CloudFront busca en la caché de borde una respuesta generada previamente por la función. Si la respuesta está en la caché, la función no se ejecuta y CloudFront devuelve al lector la respuesta almacenada en la caché. Si la respuesta no está en la caché, la función se ejecuta, CloudFront devuelve la respuesta al lector y también la almacena en la caché. Para ver algunos ejemplos de código para generar respuestas HTTP, consulte Funciones de ejemplo de Lambda@Edge . También puede sustituir las respuestas HTTP en disparadores de respuesta. Para obtener más información, consulte Actualización de respuestas HTTP en desencadenadores de respuesta de origen . 
Programming model This section describes the programming model for using Lambda@Edge to generate HTTP responses. Topics Response object Errors Required fields Response object The response that you return as the result parameter of the callback method must have the following structure (note that only the status field is required). const response = { body: 'content', bodyEncoding: 'text' | 'base64', headers: { 'header name in lowercase': [ { key: 'header name in standard case', value: 'header value' }], ... }, status: 'HTTP status code (string)', statusDescription: 'status description' }; The response object can include the following values: body The body, if any, that you want CloudFront to return in the generated response. bodyEncoding The encoding of the value that you specified in body . The only valid encodings are text and base64 . If you include body in the response object but omit bodyEncoding , CloudFront treats the body as text. If you specify bodyEncoding as base64 but the body is not valid base64, CloudFront returns an error. headers The headers that you want CloudFront to return in the generated response. Note the following: The keys in the headers object are lowercase versions of standard HTTP header names. Using lowercase keys gives you case-insensitive access to the header values. Each header (for example, headers["accept"] or headers["host"] ) is an array of key-value pairs. For a given header, the array contains one key-value pair for each value in the generated response. key (optional) is the case-sensitive name of the header as it appears in an HTTP request; for example, accept or host . Specify value as a header value.
If you omit the key part of a header's key-value pair, Lambda@Edge automatically inserts a header key using the header name that you provide. Regardless of how you formatted the header name, the automatically inserted header key is formatted with an initial capital letter for each part, separated by hyphens (-). For example, you can add a header like the following, without a header key: 'content-type': [ { value: 'text/html;charset=UTF-8' }] In this example, Lambda@Edge creates the following header key: Content-Type . For more information about header usage restrictions, see Restrictions on edge functions . status The HTTP status code. Provide the status code as a string. CloudFront uses the provided status code for the following: Returning it in the response Caching it in the CloudFront edge cache, when the response was generated by a function that was triggered by an origin request event Logging it in CloudFront standard logs (access logs) If the status value isn't between 200 and 599, CloudFront returns an error to the viewer. statusDescription The description that you want CloudFront to return in the response, to accompany the HTTP status code. You don't need to use standard descriptions, such as OK for an HTTP 200 status code. Errors The following are possible errors for generated HTTP responses.
The response contains a body and specifies a 204 (No Content) status code When a viewer request triggers a function, CloudFront returns an HTTP 502 (Bad Gateway) status code to the viewer when both of the following are true: The value of status is 204 (No Content) The response includes a value for body This is because Lambda@Edge imposes the optional restriction from RFC 2616 that an HTTP 204 response should not contain a message body. Restrictions on the size of the generated response The maximum size of a response that a Lambda function generates depends on the event that triggered the function: Viewer request events: 40 KB Origin request events: 1 MB If the response exceeds the allowed size, CloudFront returns an HTTP 502 (Bad Gateway) status code to the viewer. Required fields The status field is required. All other fields are optional. Updating HTTP responses in origin-response triggers When CloudFront receives an HTTP response from the origin server, if there is an origin-response trigger associated with the cache behavior, you can modify the HTTP response to override what was returned from the origin. Some common scenarios for updating HTTP responses include the following: Changing the status to set an HTTP 200 status code and creating static body content to return to the viewer when an origin returns an error status code (4xx or 5xx). For sample code, see Example: Using an origin-response trigger to update the error status code to 200 . Changing the status to set an HTTP 301 or 302 status code, to redirect the user to another website when an origin returns an error status code (4xx or 5xx).
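The size limits above (40 KB for viewer request events, 1 MB for origin request events) can be checked before returning a generated response. A rough sketch, with the limits hard-coded from the documentation; the function name is illustrative, and note that the documented limit applies to the whole generated response, so treating it as a body-only budget is a simplification:

```python
import base64

# Generated-response size limits from the documentation, in bytes.
LIMITS = {"viewer-request": 40 * 1024, "origin-request": 1024 * 1024}

def body_within_limit(body, body_encoding, event_type):
    """Rough check that a generated body fits the limit for the trigger type.

    body_encoding follows the response object's bodyEncoding field:
    'text' or 'base64'. For base64, the decoded size is what counts.
    """
    if body_encoding == "base64":
        size = len(base64.b64decode(body))
    else:
        size = len(body.encode("utf-8"))
    return size <= LIMITS[event_type]
```

A function could use a check like this to fall back to forwarding the request (or to a smaller error body) instead of triggering a 502 from CloudFront.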
For sample code, see Example: Using an origin-response trigger to update the error status code to 302 . Note The function must return a status value between 200 and 599 (inclusive); otherwise, CloudFront returns an error to the viewer. You can also replace HTTP responses in viewer and origin request events. For more information, see Generating HTTP responses in request triggers . When you work with the HTTP response, Lambda@Edge does not expose the body that the origin server returns to the origin-response trigger. You can generate a static content body by setting it to the desired value, or remove the body inside the function by setting the value to be empty. If you don't update the body field in your function, the original body returned by the origin server is returned to the viewer. Accessing the request body by choosing the include body option You can have Lambda@Edge expose the body in a request for writable HTTP methods (POST, PUT, DELETE, and so on), so that you can access it in your Lambda function. You can choose read-only access, or you can specify that you will replace the body. To enable this option, choose Include body when you create a CloudFront trigger for your function, for a viewer request or origin request event. For more information, see Adding triggers for a Lambda@Edge function , and to learn how to use Include body with your function, see Lambda@Edge event structure . Scenarios where you might want to use this capability include the following: Processing web forms, like "contact us" forms, without sending customer input data back to origin servers.
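The error-to-static-200 rewrite described above can be sketched as an origin-response handler. This is an illustrative sketch only (Lambda@Edge supports Python runtimes; the placeholder HTML is made up), not the AWS sample code the text refers to:

```python
# Hypothetical Lambda@Edge origin-response handler that replaces an
# origin error (4xx/5xx) with a static 200 page. Because the origin's
# body is not exposed to the trigger, setting the body field replaces
# whatever the origin returned.

def handler(event, context):
    response = event["Records"][0]["cf"]["response"]

    # Status codes arrive as strings, e.g. "503".
    if response["status"].startswith(("4", "5")):
        response["status"] = "200"
        response["statusDescription"] = "OK"
        response["body"] = "<html><body>Temporarily unavailable</body></html>"

    return response
```

Successful responses pass through unchanged, so only origin errors are masked from the viewer.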
Collecting web beacon data sent by viewer browsers and processing it at the edge. For sample code, see Lambda@Edge example functions . Note If the request body is large, Lambda@Edge truncates it. For detailed information about the maximum size and truncation, see Restrictions on the request body with the include body option . | 2026-01-13T09:30:35
https://docs.aws.amazon.com/IAM/latest/UserGuide/using-service-linked-roles.html#edit-service-linked-role | Create a service-linked role - AWS Identity and Access Management Documentation AWS Identity and Access Management User Guide Create a service-linked role A service-linked role is a unique type of IAM role that is linked directly to an AWS service. Service-linked roles are predefined by the service and include all the permissions that the service requires to call other AWS services on your behalf. The linked service also defines how you create, modify, and delete a service-linked role. A service might automatically create or delete the role. It might allow you to create, modify, or delete the role as part of a wizard or process in the service. Or it might require that you use IAM to create or delete the role. Regardless of the method, service-linked roles simplify the process of setting up a service because you don't have to manually add permissions for the service to complete actions on your behalf. Note Remember that service roles are different from service-linked roles. A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide . A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit the permissions for service-linked roles. The linked service defines the permissions of its service-linked roles, and unless defined otherwise, only that service can assume the roles.
The defined permissions include the trust policy and the permissions policy, and that permissions policy cannot be attached to any other IAM entity. Before you can delete the roles, you must first delete their related resources. This helps prevent you from inadvertently removing permission to access the resources. Tip For information about which services support using service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link to view the service-linked role documentation for that service. Service-linked role permissions You must configure permissions for an IAM entity (user or role) to allow the user or role to create or edit the service-linked role. Note The ARN for a service-linked role includes a service principal, which is indicated in the policies below as SERVICE-NAME .amazonaws.com . Do not try to guess the service principal, because it is case sensitive and the format can vary across AWS services. To view the service principal for a service, see its service-linked role documentation. To allow an IAM entity to create a specific service-linked role Add the following policy to the IAM entity that needs to create the service-linked role. 
JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": "iam:CreateServiceLinkedRole", "Resource": "arn:aws:iam::*:role/aws-service-role/ SERVICE-NAME .amazonaws.com/ SERVICE-LINKED-ROLE-NAME-PREFIX *", "Condition": { "StringLike": { "iam:AWSServiceName": " SERVICE-NAME .amazonaws.com"}} }, { "Effect": "Allow", "Action": [ "iam:AttachRolePolicy", "iam:PutRolePolicy" ], "Resource": "arn:aws:iam::*:role/aws-service-role/ SERVICE-NAME .amazonaws.com/ SERVICE-LINKED-ROLE-NAME-PREFIX *" } ] } To allow an IAM entity to create any service-linked role Add the following statement to the permissions policy for the IAM entity that needs to create a service-linked role, or any service role that includes the needed policies. This policy statement does not allow the IAM entity to attach a policy to the role. { "Effect": "Allow", "Action": "iam:CreateServiceLinkedRole", "Resource": "arn:aws:iam::*:role/aws-service-role/*" } To allow an IAM entity to edit the description of any service roles Add the following statement to the permissions policy for the IAM entity that needs to edit the description of a service-linked role, or any service role. { "Effect": "Allow", "Action": "iam:UpdateRoleDescription", "Resource": "arn:aws:iam::*:role/aws-service-role/*" } To allow an IAM entity to delete a specific service-linked role Add the following statement to the permissions policy for the IAM entity that needs to delete the service-linked role. { "Effect": "Allow", "Action": [ "iam:DeleteServiceLinkedRole", "iam:GetServiceLinkedRoleDeletionStatus" ], "Resource": "arn:aws:iam::*:role/aws-service-role/ SERVICE-NAME .amazonaws.com/ SERVICE-LINKED-ROLE-NAME-PREFIX *" } To allow an IAM entity to delete any service-linked role Add the following statement to the permissions policy for the IAM entity that needs to delete a service-linked role, but not service role. 
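The placeholder-filled policy above can also be generated programmatically, which avoids hand-editing the SERVICE-NAME and role-name-prefix placeholders. A minimal Python sketch; the function name is illustrative, and the service principal and prefix used in any example call must come from that service's service-linked role documentation:

```python
import json

def build_create_slr_policy(service_principal, role_prefix):
    """Build the 'create a specific service-linked role' policy from above.

    service_principal: e.g. "SERVICE-NAME.amazonaws.com" (case sensitive,
    taken from the service's documentation - do not guess it).
    role_prefix: the service-linked role name prefix for that service.
    """
    arn = (
        "arn:aws:iam::*:role/aws-service-role/"
        f"{service_principal}/{role_prefix}*"
    )
    return json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Effect": "Allow",
                    "Action": "iam:CreateServiceLinkedRole",
                    "Resource": arn,
                    "Condition": {
                        "StringLike": {"iam:AWSServiceName": service_principal}
                    },
                },
                {
                    "Effect": "Allow",
                    "Action": ["iam:AttachRolePolicy", "iam:PutRolePolicy"],
                    "Resource": arn,
                },
            ],
        },
        indent=2,
    )
```

The resulting JSON can then be attached to the IAM entity that needs to create the role.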
{ "Effect": "Allow", "Action": [ "iam:DeleteServiceLinkedRole", "iam:GetServiceLinkedRoleDeletionStatus" ], "Resource": "arn:aws:iam::*:role/aws-service-role/*" } To allow an IAM entity to pass an existing role to the service Some AWS services allow you to pass an existing role to the service, instead of creating a new service-linked role. To do this, a user must have permissions to pass the role to the service. Add the following statement to the permissions policy for the IAM entity that needs to pass a role. This policy statement also allows the entity to view a list of roles from which they can choose the role to pass. For more information, see Grant a user permissions to pass a role to an AWS service . { "Sid": "PolicyStatementToAllowUserToListRoles", "Effect": "Allow", "Action": ["iam:ListRoles"], "Resource": "*" }, { "Sid": "PolicyStatementToAllowUserToPassOneSpecificRole", "Effect": "Allow", "Action": [ "iam:PassRole" ], "Resource": "arn:aws:iam:: account-id :role/ my-role-for-XYZ " } Indirect permissions with service-linked roles The permissions granted by a service-linked role can be indirectly transferred to other users and roles. When a service-linked role is used by an AWS service, that service-linked role can use its own permissions to call other AWS services. This means that users and roles with permissions to call a service that uses a service-linked role may have indirect access to services that can be accessed by that service-linked role. For example, when you create an Amazon RDS DB instance, a service-linked role for RDS is automatically created if one does not already exist. This service-linked role allows RDS to call Amazon EC2, Amazon SNS, Amazon CloudWatch Logs, and Amazon Kinesis on your behalf. 
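The PassRole statements above are shown as fragments; a complete policy document wraps them in a Version/Statement envelope. A sketch of assembling the full document, with the account ID and role name as parameters standing in for the account-id and my-role-for-XYZ placeholders (the function name is illustrative):

```python
import json

def build_pass_role_policy(account_id, role_name):
    """Wrap the ListRoles + PassRole statements in a full policy document."""
    return json.dumps(
        {
            "Version": "2012-10-17",
            "Statement": [
                {
                    "Sid": "PolicyStatementToAllowUserToListRoles",
                    "Effect": "Allow",
                    "Action": ["iam:ListRoles"],
                    "Resource": "*",
                },
                {
                    "Sid": "PolicyStatementToAllowUserToPassOneSpecificRole",
                    "Effect": "Allow",
                    "Action": ["iam:PassRole"],
                    # Scope PassRole to the one role being passed.
                    "Resource": f"arn:aws:iam::{account_id}:role/{role_name}",
                },
            ],
        }
    )
```

Keeping the PassRole resource scoped to a single role ARN, rather than `*`, limits which roles the entity can hand to the service.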
If you allow users and roles in your account to modify or create RDS databases, then they may be able to indirectly interact with Amazon EC2, Amazon SNS, Amazon CloudWatch Logs, and Amazon Kinesis resources by calling RDS, because RDS would use its service-linked role to access those resources. Methods to create a service-linked role The method that you use to create a service-linked role depends on the service. In some cases, you don't need to manually create a service-linked role. For example, when you complete a specific action (such as creating a resource) in the service, the service might create the service-linked role for you. Or if you were using a service before it began supporting service-linked roles, then the service might have automatically created the role in your account. To learn more, see A new role appeared in my AWS account . In other cases, the service might support creating a service-linked role manually using the service console, API, or CLI. For information about which services support using service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-Linked Role column. To learn whether the service supports creating the service-linked role, choose the Yes link to view the service-linked role documentation for that service. If the service does not support creating the role, then you can use IAM to create the service-linked role. Important Service-linked roles count toward your IAM roles in an AWS account limit, but if you have reached your limit, you can still create service-linked roles in your account. Only service-linked roles can exceed the limit. Creating a service-linked role (console) Before you create a service-linked role in IAM, find out whether the linked service automatically creates service-linked roles. In addition, learn whether you can create the role from the service's console, API, or CLI.
To create a service-linked role (console) Sign in to the AWS Management Console and open the IAM console at https://console.aws.amazon.com/iam/ . In the navigation pane of the IAM console, choose Roles . Then, choose Create role . Choose the AWS Service role type. Choose the use case for your service. Use cases are defined by the service to include the trust policy required by the service. Then, choose Next . Choose one or more permissions policies to attach to the role. Depending on the use case that you selected, the service might do any of the following: Define the permissions used by the role. Allow you to choose from a limited set of permissions. Allow you to choose from any permissions. Allow you to select no policies at this time, create the policies later, and then attach them to the role. Select the checkbox next to the policy that assigns the permissions that you want the role to have, and then choose Next . Note The permissions that you specify are available to any entity that uses the role. By default, a role has no permissions. For Role name , the degree of role name customization is defined by the service. If the service defines the role's name, then this option is not editable. In other cases, the service might define a prefix for the role and let you enter an optional suffix. If possible, enter a role name suffix to add to the default name. This suffix helps you identify the purpose of this role. Role names must be unique within your AWS account. They are not distinguished by case. For example, you cannot create roles named both <service-linked-role-name>_SAMPLE and <service-linked-role-name>_sample . Because various entities might reference the role, you cannot edit the name of the role after it has been created. (Optional) For Description , edit the description for the new service-linked role. You cannot attach tags to service-linked roles during creation. 
For more information about using tags in IAM, see Tags for AWS Identity and Access Management resources . Review the role and then choose Create role . Creating a service-linked role (AWS CLI) Before creating a service-linked role in IAM, find out whether the linked service automatically creates service-linked roles and whether you can create the role from the service's CLI. If the service CLI is not supported, you can use IAM commands to create a service-linked role with the trust policy and inline policies that the service needs to assume the role. To create a service-linked role (AWS CLI) Run the following command: aws iam create-service-linked-role --aws-service-name SERVICE-NAME .amazonaws.com Creating a service-linked role (AWS API) Before creating a service-linked role in IAM, find out whether the linked service automatically creates service-linked roles and whether you can create the role from the service's API. If the service API is not supported, you can use the AWS API to create a service-linked role with the trust policy and inline policies that the service needs to assume the role. To create a service-linked role (AWS API) Use the CreateServiceLinkedRole API call. In the request, specify a service name of SERVICE_NAME_URL .amazonaws.com . For example, to create the Lex Bots service-linked role, use lex.amazonaws.com .
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/IAM/latest/UserGuide/id_roles_terms-and-concepts.html#iam-term-service-linked-role | IAM roles - AWS Identity and Access Management User Guide

IAM roles

An IAM role is an IAM identity that you can create in your account that has specific permissions. An IAM role is similar to an IAM user, in that it is an AWS identity with permission policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.

You can use roles to delegate access to users, applications, or services that don't normally have access to your AWS resources. For example, you might want to grant users in your AWS account access to resources they don't usually have, or grant users in one AWS account access to resources in another account. Or you might want to allow a mobile app to use AWS resources, but not want to embed AWS keys within the app (where they can be difficult to update and where users can potentially extract them). Sometimes you want to give AWS access to users who already have identities defined outside of AWS, such as in your corporate directory. Or, you might want to grant access to your account to third parties so that they can perform an audit on your resources. For these scenarios, you can delegate access to AWS resources using an IAM role.

This section introduces roles and the different ways you can use them, when and how to choose among approaches, and how to create, manage, switch to (or assume), and delete roles.
Note: When you first create your AWS account, no roles are created by default. As you add services to your account, they may add service-linked roles to support their use cases. A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for service-linked roles. Before you can delete a service-linked role, you must first delete its related resources. This protects your resources, because you can't inadvertently remove permission to access them.

For information about which services support using service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link to view the service-linked role documentation for that service.

Topics
- When to create an IAM user (instead of a role)
- Roles terms and concepts
- Additional resources
- The confused deputy problem
- Common scenarios for IAM roles
- IAM role creation
- IAM role management
- Methods to assume a role

When to create an IAM user (instead of a role)

We recommend you use IAM users only for use cases not supported by identity federation. Some of these use cases include the following:

- Workloads that cannot use IAM roles – You might run a workload from a location that needs to access AWS. In some situations, you can't use IAM roles to provide temporary credentials, such as for WordPress plugins. In these situations, use IAM user long-term access keys for that workload to authenticate to AWS.
- Third-party AWS clients – If you are using tools that don't support access with IAM Identity Center, such as third-party AWS clients or vendors that aren't hosted on AWS, use IAM user long-term access keys.
- AWS CodeCommit access – If you are using CodeCommit to store your code, you can use an IAM user with either SSH keys or service-specific credentials for CodeCommit to authenticate to your repositories. We recommend that you do this in addition to using a user in IAM Identity Center for normal authentication. Users in IAM Identity Center are the people in your workforce who need access to your AWS accounts or to your cloud applications. To give users access to your CodeCommit repositories without configuring IAM users, you can configure the git-remote-codecommit utility. For more information about IAM and CodeCommit, see IAM credentials for CodeCommit: Git credentials, SSH keys, and AWS access keys. For more information about configuring the git-remote-codecommit utility, see Connecting to AWS CodeCommit repositories with rotating credentials in the AWS CodeCommit User Guide.
- Amazon Keyspaces (for Apache Cassandra) access – In a situation where you are unable to use users in IAM Identity Center, such as when testing Cassandra compatibility, you can use an IAM user with service-specific credentials to authenticate with Amazon Keyspaces. You can also connect to Amazon Keyspaces using temporary credentials. For more information, see Using temporary credentials to connect to Amazon Keyspaces using an IAM role and the SigV4 plugin in the Amazon Keyspaces (for Apache Cassandra) Developer Guide.
- Emergency access – In a situation where you can't access your identity provider and must take action in your AWS account, establishing emergency-access IAM users can be part of your resiliency plan. We recommend that the emergency user credentials be tightly controlled and secured using multi-factor authentication (MFA).

Roles terms and concepts

Here are some basic terms to help you get started with roles.
Role

An IAM identity that you can create in your account that has specific permissions. An IAM role has some similarities to an IAM user. Roles and users are both AWS identities with permissions policies that determine what the identity can and cannot do in AWS. However, instead of being uniquely associated with one person, a role is intended to be assumable by anyone who needs it. Also, a role does not have standard long-term credentials such as a password or access keys associated with it. Instead, when you assume a role, it provides you with temporary security credentials for your role session.

Roles can be assumed by the following:
- An IAM user in the same AWS account or another AWS account
- IAM roles in the same account
- Service principals, for use with AWS services and features like:
  - Services that allow you to run code on compute services, such as Amazon EC2 or AWS Lambda
  - Features that perform actions on your resources on your behalf, such as Amazon S3 object replication
  - Services that deliver temporary security credentials to your applications that run outside of AWS, such as IAM Roles Anywhere or Amazon ECS Anywhere
- An external user authenticated by an external identity provider (IdP) service that is compatible with SAML 2.0 or OpenID Connect

AWS service role

A service role is an IAM role that a service assumes to perform actions on your behalf. An IAM administrator can create, modify, and delete a service role from within IAM. For more information, see Create a role to delegate permissions to an AWS service in the IAM User Guide.

AWS service-linked role

A service-linked role is a type of service role that is linked to an AWS service. The service can assume the role to perform an action on your behalf. Service-linked roles appear in your AWS account and are owned by the service. An IAM administrator can view, but not edit, the permissions for service-linked roles.
Note: If you are already using a service when it begins supporting service-linked roles, you might receive an email announcing a new role in your account. In this case, the service automatically created the service-linked role in your account. You don't need to take any action to support this role, and you should not manually delete it. For more information, see A new role appeared in my AWS account.

For information about which services support using service-linked roles, see AWS services that work with IAM and look for the services that have Yes in the Service-Linked Role column. Choose a Yes with a link to view the service-linked role documentation for that service. For more information, see Create a service-linked role.

Role chaining

Role chaining is when you use a role to assume a second role. You can perform role chaining by switching roles in the AWS Management Console, or through the AWS CLI or AWS API. For example, RoleA has permission to assume RoleB. You can enable User1 to assume RoleA by using their long-term user credentials in the AssumeRole API operation. This returns RoleA short-term credentials. With role chaining, you can use RoleA's short-term credentials to enable User1 to assume RoleB.

When you assume a role, you can pass a session tag and set the tag as transitive. Transitive session tags are passed to all subsequent sessions in a role chain. To learn more about session tags, see Pass session tags in AWS STS.

Role chaining limits your AWS Management Console, AWS CLI, or AWS API role session to a maximum of one hour, regardless of the maximum session duration configured for the individual roles. When you use the AssumeRole API operation to assume a role, you can specify the duration of your role session with the DurationSeconds parameter. You can specify a parameter value of up to 43200 seconds (12 hours), depending on the maximum session duration setting for your role.
However, if you assume a role using role chaining and provide a DurationSeconds parameter value greater than one hour, the operation fails. For information about switching to a role in the AWS Management Console, see Switch from a user to an IAM role (console).

Delegation

The granting of permissions to someone to allow access to resources that you control. Delegation involves setting up a trust between two accounts. The first is the account that owns the resource (the trusting account). The second is the account that contains the users that need to access the resource (the trusted account). The trusted and trusting accounts can be any of the following:
- The same account.
- Separate accounts that are both under your organization's control.
- Two accounts owned by different organizations.

To delegate permission to access a resource, you create an IAM role in the trusting account that has two policies attached. The permissions policy grants the user of the role the needed permissions to carry out the intended tasks on the resource. The trust policy specifies which trusted account members are allowed to assume the role. When you create a trust policy, you cannot specify a wildcard (*) as part of an ARN in the principal element.

The trust policy is attached to the role in the trusting account, and is one half of the permissions. The other half is a permissions policy attached to the user in the trusted account that allows that user to switch to, or assume, the role. A user who assumes a role temporarily gives up their own permissions and instead takes on the permissions of the role. When the user exits, or stops using, the role, the original user permissions are restored. An additional parameter called an external ID helps ensure secure use of roles between accounts that are not controlled by the same organization.

Trust policy

A JSON policy document in which you define the principals that you trust to assume the role.
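As a concrete sketch of such a document, the following cross-account trust policy allows principals from a second account to assume the role; the account ID 111122223333 is a placeholder chosen for illustration, not a value from this page:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111122223333:root" },
      "Action": "sts:AssumeRole"
    }
  ]
}
```

Note that the Principal names a specific account ARN; as described above, a wildcard (*) cannot be used as part of an ARN in the principal element of a trust policy.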
A role trust policy is a required resource-based policy that is attached to a role in IAM. The principals that you can specify in the trust policy include users, roles, accounts, and services. For more information, see How to use trust policies in IAM roles in the AWS Security Blog.

Role for cross-account access

A role that grants access to resources in one account to a trusted principal in a different account. Roles are the primary way to grant cross-account access. However, some AWS services allow you to attach a policy directly to a resource (instead of using a role as a proxy). These are called resource-based policies, and you can use them to grant principals in another AWS account access to the resource. Some of these resources include Amazon Simple Storage Service (S3) buckets, Amazon Glacier vaults, Amazon Simple Notification Service (SNS) topics, and Amazon Simple Queue Service (SQS) queues. To learn which services support resource-based policies, see AWS services that work with IAM. For more information about resource-based policies, see Cross account resource access in IAM.

Additional resources

The following resources can help you learn more about IAM terminology related to IAM roles.

Principals are entities in AWS that can perform actions and access resources. A principal can be an AWS account root user, an IAM user, or a role. A principal that represents the identity of an AWS service is a service principal. Use the Principal element in role trust policies to define the principals that you trust to assume the role. For more information and examples of principals you can allow to assume a role, see AWS JSON policy elements: Principal.

Identity federation creates a trust relationship between an external identity provider and AWS. You can use your existing OpenID Connect (OIDC) or Security Assertion Markup Language (SAML) 2.0 provider to manage who can access AWS resources.
When you use OIDC and SAML 2.0 to configure a trust relationship between these external identity providers and AWS, the user is assigned to an IAM role. The user also receives temporary credentials that allow the user to access your AWS resources. For more information about federated principals, see Identity providers and federation into AWS.

Federated principals are existing identities from Directory Service, your enterprise user directory, or an OIDC provider. AWS assigns a role to a federated principal when access is requested through an identity provider. For more information about SAML and OIDC federated principals, see Federated user sessions and roles.

Permissions policies are identity-based policies that define what actions and resources the role can use. The document is written according to the rules of the IAM policy language. For more information, see IAM JSON policy reference.

Permissions boundaries are an advanced feature in which you use policies to limit the maximum permissions that an identity-based policy can grant to a role. You cannot apply a permissions boundary to a service-linked role. For more information, see Permissions boundaries for IAM entities. | 2026-01-13T09:30:35 |
https://support.microsoft.com/el-gr/account-billing/%CF%80%CF%81%CE%B9%CE%BD-%CE%B1%CF%80%CF%8C-%CF%84%CE%B7%CE%BD-%CE%B1%CE%BD%CE%B1%CE%BA%CF%8D%CE%BA%CE%BB%CF%89%CF%83%CE%B7-%CF%84%CE%B7%CE%BD-%CF%80%CF%8E%CE%BB%CE%B7%CF%83%CE%B7-%CE%AE-%CF%84%CE%B7-%CE%B4%CF%89%CF%81%CE%B5%CE%AC-%CF%84%CE%BF%CF%85-%CF%85%CF%80%CE%BF%CE%BB%CE%BF%CE%B3%CE%B9%CF%83%CF%84%CE%AE-xbox-%CE%AE-windows-%CF%83%CE%B1%CF%82-78ee8071-c8ab-40c4-1d89-f708582062e4 | Before you recycle, sell, or give away your Xbox or Windows PC - Microsoft Support
Before you recycle, sell, or give away your Xbox or Windows PC

Applies to: Microsoft account, Windows 11, Windows 10, Microsoft account dashboard

If you plan to recycle, sell, or give away your Windows device or Xbox One console, make sure you have removed all personal information from it.

Reset a Windows device

Back up the information you want to keep using Windows Backup. After you have backed up the information you need, open the recovery settings: in the Settings app on your Windows device, select System > Recovery, or use the following shortcut: Open Recovery settings.

Note: In Windows 10, you can access this from Settings > Update & Security > Recovery.
Under "Reset this PC", select "Get started", and then follow the on-screen instructions: select "Reset PC", then choose from the options and/or settings in the reset options pane.

Reset an Xbox

- Back up your settings
- Reset your console to factory defaults on Xbox

Remove a device from your Microsoft account

After you have backed up and reset your device, you should remove it from your Microsoft account. Here's how:

1. Go to https://account.microsoft.com/devices , sign in, and find the device you want to remove.
2. Select "Show details" to see information for that device.
3. Under the device's name, select "More actions" > "Remove".
4. Review your device details, select the "I'm ready to remove this device" check box, and then select "Remove".

You can then unlink the device from your Microsoft account so that it no longer counts against your Microsoft Store device limit:

1. Sign in with your Microsoft account at https://account.microsoft.com/devices .
2. Find the device you want to unlink and select "Unlink".
3. Review your device details and select "Unlink".

Related topics

- If you received a device that has not been reset, you can do a clean installation.
- If you have lost your BitLocker key, see Find your BitLocker recovery key.
- To learn how to rename a device, see Manage your devices for the Microsoft Store.
- If your device is lost or stolen, you can locate and lock it remotely; see Find and lock a lost Windows device.
- For a stolen Xbox console, see Learn what to do if your Xbox console is stolen.
- If you are concerned about the security of your Microsoft account, see How to help keep your Microsoft account safe and secure.
- To view and manage all the devices registered to your Microsoft account, see Manage devices used with your Microsoft account.
- For more information about ways to recover your Windows PC, see Recovery options in Windows.
| 2026-01-13T09:30:35 |
http://anh.cs.luc.edu/handsonPythonTutorial/ifstatements.html#gradeex | 3.1. If Statements — Hands-on Python Tutorial for Python 3

3.1.1. Simple Conditions

The statements introduced in this chapter will involve tests or conditions. More syntax for conditions will be introduced later, but for now consider simple arithmetic comparisons that directly translate from math into Python. Try each line separately in the Shell:

    2 < 5
    3 > 7
    x = 11
    x > 10
    2 * x < x
    type(True)

You see that conditions are either True or False. These are the only possible Boolean values (named after 19th-century mathematician George Boole). In Python the name Boolean is shortened to the type bool. It is the type of the results of true-false conditions or tests.

Note: The Boolean values True and False have no quotes around them! Just as '123' is a string and 123 without the quotes is not, 'True' is a string, not of type bool.

3.1.2. Simple if Statements

Run this example program, suitcase.py. Try it at least twice, with inputs 30 and then 55. As you can see, you get an extra result, depending on the input. The main code is:

    weight = float(input("How many pounds does your suitcase weigh? "))
    if weight > 50:
        print("There is a $25 charge for luggage that heavy.")
    print("Thank you for your business.")

The middle two lines are an if statement. It reads pretty much like English. If it is true that the weight is greater than 50, then print the statement about an extra charge. If it is not true that the weight is greater than 50, then don't do the indented part: skip printing the extra luggage charge. In any event, when you have finished with the if statement (whether it actually does anything or not), go on to the next statement that is not indented under the if. In this case that is the statement printing "Thank you".
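To experiment with this logic without retyping input each time, the suitcase test can be wrapped in a function; luggage_fee is a made-up helper name for this sketch, not part of the tutorial's suitcase.py:

```python
def luggage_fee(weight):
    """Return the extra baggage charge for a suitcase weighing `weight` pounds.

    Mirrors suitcase.py: bags over 50 pounds incur a $25 charge.
    """
    if weight > 50:        # same condition as in suitcase.py
        return 25          # the indented branch runs only when the test is True
    return 0               # otherwise there is no extra charge

print(luggage_fee(30))     # prints 0
print(luggage_fee(55))     # prints 25
```

Exactly 50 pounds does not trigger the fee, because the condition uses the strict comparison >, not >=.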
The general Python syntax for a simple if statement is:

    if condition:
        indentedStatementBlock

If the condition is true, then do the indented statements. If the condition is not true, then skip the indented statements. Another fragment as an example:

    if balance < 0:
        transfer = -balance
        # transfer enough from the backup account:
        backupAccount = backupAccount - transfer
        balance = balance + transfer

As with other kinds of statements with a heading and an indented block, the block can have more than one statement. The assumption in the example above is that if an account goes negative, it is brought back to 0 by transferring money from a backup account in several steps.

In the examples above the choice is between doing something (if the condition is True) or nothing (if the condition is False). Often there is a choice of two possibilities, only one of which will be done, depending on the truth of a condition.

3.1.3. if-else Statements

Run the example program, clothes.py. Try it at least twice, with inputs 50 and then 80. As you can see, you get different results, depending on the input. The main code of clothes.py is:

    temperature = float(input('What is the temperature? '))
    if temperature > 70:
        print('Wear shorts.')
    else:
        print('Wear long pants.')
    print('Get some exercise outside.')

The middle four lines are an if-else statement. Again it is close to English, though you might say "otherwise" instead of "else" (but else is shorter!). There are two indented blocks: One, like in the simple if statement, comes right after the if heading and is executed when the condition in the if heading is true. In the if-else form this is followed by an else: line, followed by another indented block that is only executed when the original condition is false. In an if-else statement exactly one of the two possible indented blocks is executed. A line is also shown dedented next (removing its indentation), about getting exercise.
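The clothes.py branching can also be phrased as a function, so that both branches (and the dedented line) are easy to exercise; clothing_advice is a hypothetical name used only for this sketch:

```python
def clothing_advice(temperature):
    """Return clothing advice, mirroring the if-else logic of clothes.py."""
    if temperature > 70:
        advice = 'Wear shorts.'
    else:
        advice = 'Wear long pants.'
    # Dedented relative to the branches: runs whichever block was chosen.
    return advice + ' Get some exercise outside.'

print(clothing_advice(80))   # prints: Wear shorts. Get some exercise outside.
print(clothing_advice(50))   # prints: Wear long pants. Get some exercise outside.
```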
Since it is dedented, it is not a part of the if-else statement: since its amount of indentation matches the if heading, it is always executed in the normal forward flow of statements, after the if-else statement (whichever block is selected). The general Python if-else syntax is:

    if condition:
        indentedStatementBlockForTrueCondition
    else:
        indentedStatementBlockForFalseCondition

These statement blocks can have any number of statements, and can include about any kind of statement. See Graduate Exercise.

3.1.4. More Conditional Expressions

All the usual arithmetic comparisons may be made, but many do not use standard mathematical symbolism, mostly for lack of proper keys on a standard keyboard.

    Meaning                  Math Symbol   Python Symbols
    Less than                <             <
    Greater than             >             >
    Less than or equal       ≤             <=
    Greater than or equal    ≥             >=
    Equals                   =             ==
    Not equal                ≠             !=

There should not be space between the two-symbol Python substitutes. Notice that the obvious choice for equals, a single equal sign, is not used to check for equality. An annoying second equal sign is required. This is because the single equal sign is already used for assignment in Python, so it is not available for tests.

Warning: It is a common error to use only one equal sign when you mean to test for equality, and not make an assignment!

Tests for equality do not make an assignment, and they do not require a variable on the left. Any expressions can be tested for equality or inequality (!=). They do not need to be numbers! Predict the results and try each line in the Shell:

    x = 5
    x
    x == 5
    x == 6
    x
    x != 6
    x = 6
    6 == x
    6 != x
    'hi' == 'h' + 'i'
    'HI' != 'hi'
    [1, 2] != [2, 1]

An equality check does not make an assignment. Strings are case sensitive. Order matters in a list. Try in the Shell:

    'a' > 5

When the comparison does not make sense, an Exception is caused.
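The predictions above can also be checked mechanically with assert, which fails loudly when a condition is False; this is a small self-check sketch, not part of the tutorial's examples:

```python
x = 5
assert x == 5              # an equality test: it returns True, assigns nothing
assert not (x == 6)
assert x == 5              # x is still 5; the tests above changed nothing
x = 6                      # a single = really is an assignment, not a test
assert 6 == x              # the variable does not need to be on the left
assert 'hi' == 'h' + 'i'   # any expressions can be compared, not just numbers
assert 'HI' != 'hi'        # strings are case sensitive
assert [1, 2] != [2, 1]    # order matters in a list
print('All predictions confirmed.')
```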
[1] Following up on the discussion of the inexactness of float arithmetic in String Formats for Float Precision, confirm that Python does not consider .1 + .2 to be equal to .3: Write a simple condition into the Shell to test. Here is another example: Pay with Overtime. Given a person's work hours for the week and regular hourly wage, calculate the total pay for the week, taking into account overtime. Hours worked over 40 are overtime, paid at 1.5 times the normal rate. This is a natural place for a function enclosing the calculation. Read the setup for the function:

    def calcWeeklyWages(totalHours, hourlyWage):
        '''Return the total weekly wages for a worker working totalHours,
        with a given regular hourlyWage.  Include overtime for hours over 40.
        '''

The problem clearly indicates two cases: when no more than 40 hours are worked or when more than 40 hours are worked. In case more than 40 hours are worked, it is convenient to introduce a variable overtimeHours. You are encouraged to think about a solution before going on and examining mine. You can try running my complete example program, wages.py, also shown below. The format operation at the end of the main function uses the floating point format (String Formats for Float Precision) to show two decimal places for the cents in the answer:

    def calcWeeklyWages(totalHours, hourlyWage):
        '''Return the total weekly wages for a worker working totalHours,
        with a given regular hourlyWage.  Include overtime for hours over 40.
        '''
        if totalHours <= 40:
            totalWages = hourlyWage * totalHours
        else:
            overtime = totalHours - 40
            totalWages = hourlyWage * 40 + (1.5 * hourlyWage) * overtime
        return totalWages

    def main():
        hours = float(input('Enter hours worked: '))
        wage = float(input('Enter dollars paid per hour: '))
        total = calcWeeklyWages(hours, wage)
        print('Wages for {hours} hours at ${wage:.2f} per hour are ${total:.2f}.'.
format(**locals()))

    main()

Here the input was intended to be numeric, but it could be decimal so the conversion from string was via float, not int. Below is an equivalent alternative version of the body of calcWeeklyWages, used in wages1.py. It uses just one general calculation formula and sets the parameters for the formula in the if statement. There are generally a number of ways you might solve the same problem!

    if totalHours <= 40:
        regularHours = totalHours
        overtime = 0
    else:
        overtime = totalHours - 40
        regularHours = 40
    return hourlyWage * regularHours + (1.5 * hourlyWage) * overtime

The in boolean operator: There are also Boolean operators that are applied to types other than numbers. A useful Boolean operator is in, checking membership in a sequence:

    >>> vals = ['this', 'is', 'it']
    >>> 'is' in vals
    True
    >>> 'was' in vals
    False

It can also be used with not, as not in, to mean the opposite:

    >>> vals = ['this', 'is', 'it']
    >>> 'is' not in vals
    False
    >>> 'was' not in vals
    True

In general the two versions are:

    item in sequence
    item not in sequence

Detecting the need for if statements: Like with planning programs needing for statements, you want to be able to translate English descriptions of problems that would naturally include if or if-else statements. What are some words or phrases or ideas that suggest the use of these statements? Think of your own and then compare to a few I gave: [2]

3.1.4.1. Graduate Exercise ¶

Write a program, graduate.py, that prompts students for how many credits they have. Print whether or not they have enough credits for graduation. (At Loyola University Chicago 120 credits are needed for graduation.)

3.1.4.2. Head or Tails Exercise ¶

Write a program headstails.py. It should include a function flip(), that simulates a single flip of a coin: It randomly prints either Heads or Tails.
Accomplish this by choosing 0 or 1 arbitrarily with random.randrange(2) , and use an if - else statement to print Heads when the result is 0, and Tails otherwise. In your main program have a simple repeat loop that calls flip() 10 times to test it, so you generate a random sequence of 10 Heads and Tails . 3.1.4.3. Strange Function Exercise ¶ Save the example program jumpFuncStub.py as jumpFunc.py , and complete the definitions of functions jump and main as described in the function documentation strings in the program. In the jump function definition use an if - else statement (hint [3] ). In the main function definition use a for -each loop, the range function, and the jump function. The jump function is introduced for use in Strange Sequence Exercise , and others after that. 3.1.5. Multiple Tests and if - elif Statements ¶ Often you want to distinguish between more than two distinct cases, but conditions only have two possible results, True or False , so the only direct choice is between two options. As anyone who has played “20 Questions” knows, you can distinguish more cases by further questions. If there are more than two choices, a single test may only reduce the possibilities, but further tests can reduce the possibilities further and further. Since most any kind of statement can be placed in an indented statement block, one choice is a further if statement. For instance consider a function to convert a numerical grade to a letter grade, ‘A’, ‘B’, ‘C’, ‘D’ or ‘F’, where the cutoffs for ‘A’, ‘B’, ‘C’, and ‘D’ are 90, 80, 70, and 60 respectively. 
One way to write the function would be to test for one grade at a time, and resolve all the remaining possibilities inside the next else clause:

    def letterGrade(score):
        if score >= 90:
            letter = 'A'
        else:   # grade must be B, C, D or F
            if score >= 80:
                letter = 'B'
            else:   # grade must be C, D or F
                if score >= 70:
                    letter = 'C'
                else:   # grade must be D or F
                    if score >= 60:
                        letter = 'D'
                    else:
                        letter = 'F'
        return letter

This repeatedly increasing indentation with an if statement as the else block can be annoying and distracting. A preferred alternative in this situation, that avoids all this indentation, is to combine each else and if block into an elif block:

    def letterGrade(score):
        if score >= 90:
            letter = 'A'
        elif score >= 80:
            letter = 'B'
        elif score >= 70:
            letter = 'C'
        elif score >= 60:
            letter = 'D'
        else:
            letter = 'F'
        return letter

The most elaborate syntax for an if-elif-else statement is indicated in general below:

    if condition1:
        indentedStatementBlockForTrueCondition1
    elif condition2:
        indentedStatementBlockForFirstTrueCondition2
    elif condition3:
        indentedStatementBlockForFirstTrueCondition3
    elif condition4:
        indentedStatementBlockForFirstTrueCondition4
    else:
        indentedStatementBlockForEachConditionFalse

The if, each elif, and the final else lines are all aligned. There can be any number of elif lines, each followed by an indented block. (Three happen to be illustrated above.) With this construction exactly one of the indented blocks is executed. It is the one corresponding to the first True condition, or, if all conditions are False, it is the block after the final else line. Be careful of the strange Python contraction. It is elif, not elseif. A program testing the letterGrade function is in example program grade1.py. See Grade Exercise. A final alternative for if statements: if-elif-... with no else. This would mean changing the syntax for if-elif-else above so the final else: and the block after it would be omitted.
It is similar to the basic if statement without an else, in that it is possible for no indented block to be executed. This happens if none of the conditions in the tests are true. With an else included, exactly one of the indented blocks is executed. Without an else, at most one of the indented blocks is executed.

    if weight > 120:
        print('Sorry, we can not take a suitcase that heavy.')
    elif weight > 50:
        print('There is a $25 charge for luggage that heavy.')

This if-elif statement only prints a line if there is a problem with the weight of the suitcase.

3.1.5.1. Sign Exercise ¶

Write a program sign.py to ask the user for a number. Print out which category the number is in: 'positive', 'negative', or 'zero'.

3.1.5.2. Grade Exercise ¶

In Idle, load grade1.py and save it as grade2.py. Modify grade2.py so it has an equivalent version of the letterGrade function that tests in the opposite order, first for F, then D, C, .... Hint: How many tests do you need to do? [4] Be sure to run your new version and test with different inputs that test all the different paths through the program. Be careful to test around cut-off points. What does a grade of 79.6 imply? What about exactly 80?

3.1.5.3. Wages Exercise ¶

* Modify the wages.py or the wages1.py example to create a program wages2.py that assumes people are paid double time for hours over 60. Hence they get paid for at most 20 hours overtime at 1.5 times the normal rate. For example, a person working 65 hours with a regular wage of $10 per hour would work at $10 per hour for 40 hours, at 1.5 * $10 for 20 hours of overtime, and 2 * $10 for 5 hours of double time, for a total of

    10*40 + 1.5*10*20 + 2*10*5 = $800.

You may find wages1.py easier to adapt than wages.py. Be sure to test all paths through the program! Your program is likely to be a modification of a program where some choices worked before, but once you change things, retest for all the cases! Changes can mess up things that worked before.

3.1.6.
Nesting Control-Flow Statements ¶

The power of a language like Python comes largely from the variety of ways basic statements can be combined. In particular, for and if statements can be nested inside each other's indented blocks. For example, suppose you want to print only the positive numbers from an arbitrary list of numbers in a function with the following heading. Read the pieces for now.

    def printAllPositive(numberList):
        '''Print only the positive numbers in numberList.'''

For example, suppose numberList is [3, -5, 2, -1, 0, 7]. You want to process a list, so that suggests a for-each loop,

    for num in numberList:

but a for-each loop runs the same code body for each element of the list, and we only want

    print(num)

for some of them. That seems like a major obstacle, but look more closely at what needs to happen concretely. As a human, who has eyes of amazing capacity, you are drawn immediately to the actual correct numbers, 3, 2, and 7, but clearly a computer doing this systematically will have to check every number. In fact, there is a consistent action required: Every number must be tested to see if it should be printed. This suggests an if statement, with the condition num > 0. Try loading into Idle and running the example program onlyPositive.py, whose code is shown below. It ends with a line testing the function:

    def printAllPositive(numberList):
        '''Print only the positive numbers in numberList.'''
        for num in numberList:
            if num > 0:
                print(num)

    printAllPositive([3, -5, 2, -1, 0, 7])

This idea of nesting if statements enormously expands the possibilities with loops. Now different things can be done at different times in loops, as long as there is a consistent test to allow a choice between the alternatives. Shortly, while loops will also be introduced, and you will see if statements nested inside of them, too. The rest of this section deals with graphical examples. Run example program bounce1.py.
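The same for-plus-nested-if pattern adapts to many list tasks. As an illustrative aside (countNegative is a hypothetical function, not one of the tutorial's example programs), here the nested if selects which items to count rather than which to print:

```python
def countNegative(numberList):
    '''Return how many numbers in numberList are negative.'''
    count = 0
    for num in numberList:    # same for-each loop shape as printAllPositive
        if num < 0:           # nested if: only some elements matter
            count = count + 1
    return count

print(countNegative([3, -5, 2, -1, 0, 7]))   # 2
```

Back to bounce1.py: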
It has a red ball moving and bouncing obliquely off the edges. If you watch several times, you should see that it starts from random locations. Also you can repeat the program from the Shell prompt after you have run the script. For instance, right after running the program, try in the Shell

    bounceBall(-3, 1)

The parameters give the amount the shape moves in each animation step. You can try other values in the Shell, preferably with magnitudes less than 10. For the remainder of the description of this example, read the extracted text pieces. The animations before this were totally scripted, saying exactly how many moves in which direction, but in this case the direction of motion changes with every bounce. The program has a graphic object shape and the central animation step is

    shape.move(dx, dy)

but in this case, dx and dy have to change when the ball gets to a boundary. For instance, imagine the ball getting to the left side as it is moving to the left and up. The bounce obviously alters the horizontal part of the motion, in fact reversing it, but the ball would still continue up. The reversal of the horizontal part of the motion means that the horizontal shift changes direction and therefore its sign:

    dx = -dx

but dy does not need to change. This switch does not happen at each animation step, but only when the ball reaches the edge of the window. It happens only some of the time, suggesting an if statement. Still the condition must be determined. Suppose the center of the ball has coordinates (x, y). When x reaches some particular x coordinate, call it xLow, the ball should bounce. The edge of the window is at coordinate 0, but xLow should not be 0, or the ball would be half way off the screen before bouncing! For the edge of the ball to hit the edge of the screen, the x coordinate of the center must be the length of the radius away, so actually xLow is the radius of the ball. Animation goes quickly in small steps, so I cheat.
I allow the ball to take one (small, quick) step past where it really should go (xLow), and then we reverse it so it comes back to where it belongs. In particular

    if x < xLow:
        dx = -dx

There are similar bounding variables xHigh, yLow and yHigh, all the radius away from the actual edge coordinates, and similar conditions to test for a bounce off each possible edge. Note that whichever edge is hit, one coordinate, either dx or dy, reverses. One way the collection of tests could be written is

    if x < xLow:
        dx = -dx
    if x > xHigh:
        dx = -dx
    if y < yLow:
        dy = -dy
    if y > yHigh:
        dy = -dy

This approach would cause there to be some extra testing: If it is true that x < xLow, then it is impossible for it to be true that x > xHigh, so we do not need both tests together. We avoid unnecessary tests with an elif clause (for both x and y):

    if x < xLow:
        dx = -dx
    elif x > xHigh:
        dx = -dx
    if y < yLow:
        dy = -dy
    elif y > yHigh:
        dy = -dy

Note that the middle if is not changed to an elif, because it is possible for the ball to reach a corner, and need both dx and dy reversed. The program also uses several methods to read part of the state of graphics objects that we have not used in examples yet. Various graphics objects, like the circle we are using as the shape, know their center point, and it can be accessed with the getCenter() method. (Actually a clone of the point is returned.) Also each coordinate of a Point can be accessed with the getX() and getY() methods. This explains the new features in the central function defined for bouncing around in a box, bounceInBox. The animation arbitrarily goes on in a simple repeat loop for 600 steps. (A later example will improve this behavior.)

    def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
        ''' Animate a shape moving in jumps (dx, dy), bouncing when
        its center reaches the low and high x and y coordinates.
        '''
        delay = .005
        for i in range(600):
            shape.move(dx, dy)
            center = shape.getCenter()
            x = center.getX()
            y = center.getY()
            if x < xLow:
                dx = -dx
            elif x > xHigh:
                dx = -dx
            if y < yLow:
                dy = -dy
            elif y > yHigh:
                dy = -dy
            time.sleep(delay)

The program starts the ball from an arbitrary point inside the allowable rectangular bounds. This is encapsulated in a utility function included in the program, getRandomPoint. The getRandomPoint function uses the randrange function from the module random. Note that in parameters for both the functions range and randrange, the end stated is past the last value actually desired:

    def getRandomPoint(xLow, xHigh, yLow, yHigh):
        '''Return a random Point with coordinates in the range specified.'''
        x = random.randrange(xLow, xHigh + 1)
        y = random.randrange(yLow, yHigh + 1)
        return Point(x, y)

The full program is listed below, repeating bounceInBox and getRandomPoint for completeness. Several parts that may be useful later, or are easiest to follow as a unit, are separated out as functions. Make sure you see how it all hangs together or ask questions!

    ''' Show a ball bouncing off the sides of the window. '''

    from graphics import *
    import time, random

    def bounceInBox(shape, dx, dy, xLow, xHigh, yLow, yHigh):
        ''' Animate a shape moving in jumps (dx, dy), bouncing when
        its center reaches the low and high x and y coordinates.
        '''
        delay = .005
        for i in range(600):
            shape.move(dx, dy)
            center = shape.getCenter()
            x = center.getX()
            y = center.getY()
            if x < xLow:
                dx = -dx
            elif x > xHigh:
                dx = -dx
            if y < yLow:
                dy = -dy
            elif y > yHigh:
                dy = -dy
            time.sleep(delay)

    def getRandomPoint(xLow, xHigh, yLow, yHigh):
        '''Return a random Point with coordinates in the range specified.'''
        x = random.randrange(xLow, xHigh + 1)
        y = random.randrange(yLow, yHigh + 1)
        return Point(x, y)

    def makeDisk(center, radius, win):
        '''Return a red disk that is drawn in win with given center and radius.'''
        disk = Circle(center, radius)
        disk.setOutline("red")
        disk.setFill("red")
        disk.draw(win)
        return disk

    def bounceBall(dx, dy):
        '''Make a ball bounce around the screen, initially moving by (dx, dy)
        at each jump.'''
        win = GraphWin('Ball Bounce', 290, 290)
        win.yUp()

        radius = 10
        xLow = radius   # center is separated from the wall by the radius at a bounce
        xHigh = win.getWidth() - radius
        yLow = radius
        yHigh = win.getHeight() - radius

        center = getRandomPoint(xLow, xHigh, yLow, yHigh)
        ball = makeDisk(center, radius, win)

        bounceInBox(ball, dx, dy, xLow, xHigh, yLow, yHigh)
        win.close()

    bounceBall(3, 5)

3.1.6.1. Short String Exercise ¶

Write a program short.py with a function printShort with heading:

    def printShort(strings):
        '''Given a list of strings, print the ones with at most three characters.
        >>> printShort(['a', 'long', 'one'])
        a
        one
        '''

In your main program, test the function, calling it several times with different lists of strings. Hint: Find the length of each string with the len function. The function documentation here models a common approach: illustrating the behavior of the function with a Python Shell interaction. This part begins with a line starting with >>>. Other exercises and examples will also document behavior in the Shell.

3.1.6.2. Even Print Exercise ¶

Write a program even1.py with a function printEven with heading:

    def printEven(nums):
        '''Given a list of integers nums, print the even ones.
        >>> printEven([4, 1, 3, 2, 7])
        4
        2
        '''

In your main program, test the function, calling it several times with different lists of integers. Hint: A number is even if its remainder, when dividing by 2, is 0.

3.1.6.3. Even List Exercise ¶

Write a program even2.py with a function chooseEven with heading:

    def chooseEven(nums):
        '''Given a list of integers, nums, return a list containing only the even ones.
        >>> chooseEven([4, 1, 3, 2, 7])
        [4, 2]
        '''

In your main program, test the function, calling it several times with different lists of integers and printing the results in the main program. (The documentation string illustrates the function call in the Python shell, where the return value is automatically printed. Remember, that in a program, you only print what you explicitly say to print.) Hint: In the function, create a new list, and append the appropriate numbers to it, before returning the result.

3.1.6.4. Unique List Exercise ¶

* The madlib2.py program has its getKeys function, which first generates a list of each occurrence of a cue in the story format. This gives the cues in order, but likely includes repetitions. The original version of getKeys uses a quick method to remove duplicates, forming a set from the list. There is a disadvantage in the conversion, though: Sets are not ordered, so when you iterate through the resulting set, the order of the cues will likely bear no resemblance to the order they first appeared in the list. That issue motivates this problem: Copy madlib2.py to madlib2a.py, and add a function with this heading:

    def uniqueList(aList):
        ''' Return a new list that includes the first occurrence of each value
        in aList, and omits later repeats.  The returned list should include
        the first occurrences of values in aList in their original order.
        >>> vals = ['cat', 'dog', 'cat', 'bug', 'dog', 'ant', 'dog', 'bug']
        >>> uniqueList(vals)
        ['cat', 'dog', 'bug', 'ant']
        '''

Hint: Process aList in order. Use the in syntax to only append elements to a new list that are not already in the new list. After perfecting the uniqueList function, replace the last line of getKeys, so it uses uniqueList to remove duplicates in keyList. Check that your madlib2a.py prompts you for cue values in the order that the cues first appear in the madlib format string.

3.1.7.
Compound Boolean Expressions ¶

To be eligible to graduate from Loyola University Chicago, you must have 120 credits and a GPA of at least 2.0. This translates directly into Python as a compound condition:

    credits >= 120 and GPA >= 2.0

This is true if both credits >= 120 is true and GPA >= 2.0 is true. A short example program using this would be:

    credits = float(input('How many units of credit do you have? '))
    GPA = float(input('What is your GPA? '))
    if credits >= 120 and GPA >= 2.0:
        print('You are eligible to graduate!')
    else:
        print('You are not eligible to graduate.')

The new Python syntax is for the operator and:

    condition1 and condition2

The compound condition is true if both of the component conditions are true. It is false if at least one of the conditions is false. See Congress Exercise. In the last example in the previous section, there was an if-elif statement where both tests had the same block to be done if the condition was true:

    if x < xLow:
        dx = -dx
    elif x > xHigh:
        dx = -dx

There is a simpler way to state this in a sentence: If x < xLow or x > xHigh, switch the sign of dx. That translates directly into Python:

    if x < xLow or x > xHigh:
        dx = -dx

The word or makes another compound condition:

    condition1 or condition2

is true if at least one of the conditions is true. It is false if both conditions are false. This corresponds to one way the word "or" is used in English. Other times in English "or" is used to mean exactly one alternative is true.

Warning: When translating a problem stated in English using "or", be careful to determine whether the meaning matches Python's or.

It is often convenient to encapsulate complicated tests inside a function. Think how to complete the function starting:

    def isInside(rect, point):
        '''Return True if the point is inside the Rectangle rect.'''
        pt1 = rect.getP1()
        pt2 = rect.getP2()

Recall that a Rectangle is specified in its constructor by two diagonally opposite Points.
This example gives the first use in the tutorials of the Rectangle methods that recover those two corner points, getP1 and getP2. The program calls the points obtained this way pt1 and pt2. The x and y coordinates of pt1, pt2, and point can be recovered with the methods of the Point type, getX() and getY(). Suppose that I introduce variables for the x coordinates of pt1, point, and pt2, calling these x-coordinates end1, val, and end2, respectively. On first try you might decide that the needed mathematical relationship to test is

    end1 <= val <= end2

Unfortunately, this is not enough: The only requirement for the two corner points is that they be diagonally opposite, not that the coordinates of the second point are higher than the corresponding coordinates of the first point. It could be that end1 is 200; end2 is 100, and val is 120. In this latter case val is between end1 and end2, but substituting into the expression above, 200 <= 120 <= 100 is False. The 100 and 200 need to be reversed in this case. This makes a complicated situation. Also this is an issue which must be revisited for both the x and y coordinates. I introduce an auxiliary function isBetween to deal with one coordinate at a time. It starts:

    def isBetween(val, end1, end2):
        '''Return True if val is between the ends.
        The ends do not need to be in increasing order.'''

Clearly this is true if the original expression, end1 <= val <= end2, is true. You must also consider the possible case when the order of the ends is reversed: end2 <= val <= end1. How do we combine these two possibilities? The Boolean connectives to consider are and and or. Which applies? You only need one to be true, so or is the proper connective. A correct but redundant function body would be:

    if end1 <= val <= end2 or end2 <= val <= end1:
        return True
    else:
        return False

Check the meaning: if the compound expression is True, return True. If the condition is False, return False; in either case return the same value as the test condition. See that a much simpler and neater version is to just return the value of the condition itself!

    return end1 <= val <= end2 or end2 <= val <= end1

Note: In general you should not need an if-else statement to choose between true and false values! Operate directly on the boolean expression.

A side comment on expressions like

    end1 <= val <= end2

Other than the two-character operators, this is like standard math syntax, chaining comparisons. In Python any number of comparisons can be chained in this way, closely approximating mathematical notation. Though this is good Python, be aware that if you try other high-level languages like Java and C++, such an expression is gibberish. Another way the expression can be expressed (and which translates directly to other languages) is:

    end1 <= val and val <= end2

So much for the auxiliary function isBetween. Back to the isInside function. You can use the isBetween function to check the x coordinates,

    isBetween(point.getX(), pt1.getX(), pt2.getX())

and to check the y coordinates,

    isBetween(point.getY(), pt1.getY(), pt2.getY())

Again the question arises: how do you combine the two tests? In this case we need the point to be both between the sides and between the top and bottom, so the proper connector is and. Think how to finish the isInside method. Hint: [5]

Sometimes you want to test the opposite of a condition. As in English you can use the word not. For instance, to test if a Point was not inside Rectangle rect, you could use the condition

    not isInside(rect, point)

In general, not condition is True when condition is False, and False when condition is True. The example program chooseButton1.py, shown below, is a complete program using the isInside function in a simple application, choosing colors. Pardon the length. Do check it out.
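Before reading the full listing, the boolean helpers can be exercised on plain numbers. This is an illustrative sketch (isInsideBox and the sample coordinates are hypothetical, not part of chooseButton1.py); it uses raw coordinates instead of graphics Point objects so it runs without the graphics module:

```python
def isBetween(val, end1, end2):
    '''Return True if val is between the ends, in either order.'''
    return end1 <= val <= end2 or end2 <= val <= end1

def isInsideBox(x, y, x1, y1, x2, y2):
    '''Point-in-rectangle test on raw coordinates, mirroring isInside:
    (x1, y1) and (x2, y2) are any two diagonally opposite corners.'''
    return isBetween(x, x1, x2) and isBetween(y, y1, y2)

print(isBetween(120, 200, 100))                # True: ends may be reversed
print(isInsideBox(120, 50, 200, 80, 100, 10))  # True
print(isInsideBox(250, 50, 200, 80, 100, 10))  # False: x is outside
```

Back to chooseButton1.py: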
It will be the starting point for a number of improvements that shorten it and make it more powerful in the next section. First a brief overview: The program includes the functions isBetween and isInside that have already been discussed. The program creates a number of colored rectangles to use as buttons and also as picture components. Aside from specific data values, the code to create each rectangle is the same, so the action is encapsulated in a function, makeColoredRect. All of this is fine, and will be preserved in later versions. The present main function is long, though. It has the usual graphics starting code, draws buttons and picture elements, and then has a number of code sections prompting the user to choose a color for a picture element. Each code section has a long if-elif-else test to see which button was clicked, and sets the color of the picture element appropriately.

    '''Make a choice of colors via mouse clicks in Rectangles --
    A demonstration of Boolean operators and Boolean functions.'''

    from graphics import *

    def isBetween(x, end1, end2):
        '''Return True if x is between the ends or equal to either.
        The ends do not need to be in increasing order.'''
        return end1 <= x <= end2 or end2 <= x <= end1

    def isInside(point, rect):
        '''Return True if the point is inside the Rectangle rect.'''
        pt1 = rect.getP1()
        pt2 = rect.getP2()
        return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
               isBetween(point.getY(), pt1.getY(), pt2.getY())

    def makeColoredRect(corner, width, height, color, win):
        ''' Return a Rectangle drawn in win with the upper left corner
        and color specified.'''
        corner2 = corner.clone()
        corner2.move(width, -height)
        rect = Rectangle(corner, corner2)
        rect.setFill(color)
        rect.draw(win)
        return rect

    def main():
        win = GraphWin('pick Colors', 400, 400)
        win.yUp()   # right side up coordinates

        redButton = makeColoredRect(Point(310, 350), 80, 30, 'red', win)
        yellowButton = makeColoredRect(Point(310, 310), 80, 30, 'yellow', win)
        blueButton = makeColoredRect(Point(310, 270), 80, 30, 'blue', win)

        house = makeColoredRect(Point(60, 200), 180, 150, 'gray', win)
        door = makeColoredRect(Point(90, 150), 40, 100, 'white', win)
        roof = Polygon(Point(50, 200), Point(250, 200), Point(150, 300))
        roof.setFill('black')
        roof.draw(win)

        msg = Text(Point(win.getWidth()/2, 375), 'Click to choose a house color.')
        msg.draw(win)
        pt = win.getMouse()
        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        house.setFill(color)

        msg.setText('Click to choose a door color.')
        pt = win.getMouse()
        if isInside(pt, redButton):
            color = 'red'
        elif isInside(pt, yellowButton):
            color = 'yellow'
        elif isInside(pt, blueButton):
            color = 'blue'
        else:
            color = 'white'
        door.setFill(color)

        win.promptClose(msg)

    main()

The only further new feature used is in the long return statement in isInside.

    return isBetween(point.getX(), pt1.getX(), pt2.getX()) and \
           isBetween(point.getY(), pt1.getY(), pt2.getY())

Recall that Python is smart enough to realize that a statement continues to the next line if there is an unmatched pair of parentheses or brackets. Above is another situation with a long statement, but there are no unmatched parentheses on a line. For readability it is best not to make an enormously long line that would run off your screen or paper. Continuing to the next line is recommended. You can make the final character on a line be a backslash (\) to indicate the statement continues on the next line. This is not particularly neat, but it is a rather rare situation.
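Both continuation styles can be seen side by side in a small illustrative sketch (the variable values are hypothetical, chosen just to make the expressions runnable):

```python
end1, val, end2 = 100, 120, 200

# Backslash continuation: the \ must be the very last character on the line.
between1 = end1 <= val <= end2 or \
           end2 <= val <= end1

# Implicit continuation: inside unmatched parentheses, no backslash is needed.
between2 = (end1 <= val <= end2 or
            end2 <= val <= end1)

print(between1, between2)   # True True
```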
Most statements fit neatly on one line, and the creator of Python decided it was best to make the syntax simple in the most common situation. (Many other languages require a special statement terminator symbol like ';' and pay no attention to newlines.) Extra parentheses here would not hurt, so an alternative would be

    return (isBetween(point.getX(), pt1.getX(), pt2.getX()) and
            isBetween(point.getY(), pt1.getY(), pt2.getY()))

The chooseButton1.py program is long partly because of repeated code. The next section gives another version involving lists.

3.1.7.1. Congress Exercise ¶

A person is eligible to be a US Senator who is at least 30 years old and has been a US citizen for at least 9 years. Write an initial version of a program congress.py to obtain age and length of citizenship from the user and print out if a person is eligible to be a Senator or not. A person is eligible to be a US Representative who is at least 25 years old and has been a US citizen for at least 7 years. Elaborate your program congress.py so it obtains age and length of citizenship and prints out just the one of the following three statements that is accurate:

    You are eligible for both the House and Senate.
    You are eligible only for the House.
    You are ineligible for Congress.

3.1.8. More String Methods ¶

Here are a few more string methods useful in the next exercises, assuming the methods are applied to a string s:

s.startswith(pre) returns True if string s starts with string pre: Both '-123'.startswith('-') and 'downstairs'.startswith('down') are True, but '1 - 2 - 3'.startswith('-') is False.

s.endswith(suffix) returns True if string s ends with string suffix: Both 'whoever'.endswith('ever') and 'downstairs'.endswith('airs') are True, but '1 - 2 - 3'.endswith('-') is False.

s.replace(sub, replacement, count) returns a new string with up to the first count occurrences of string sub replaced by replacement.
The replacement can be the empty string to delete sub. For example:

s = '-123'
t = s.replace('-', '', 1)       # t equals '123'
t = t.replace('-', '', 1)       # t is still equal to '123'
u = '.2.3.4.'
v = u.replace('.', '', 2)       # v equals '23.4.'
w = u.replace('.', ' dot ', 5)  # w equals ' dot 2 dot 3 dot 4 dot '

3.1.8.1. Article Start Exercise ¶ In library alphabetizing, if the initial word is an article ("The", "A", "An"), then it is ignored when ordering entries. Write a program completing this function, and then testing it:

def startsWithArticle(title):
    '''Return True if the first word of title is "The", "A" or "An".'''

Be careful: if the title starts with "There", it does not start with an article. What should you be testing for?

3.1.8.2. Is Number String Exercise ¶ ** In the later Safe Number Input Exercise, it will be important to know if a string can be converted to the desired type of number. Explore that here. Save example isNumberStringStub.py as isNumberString.py and complete it. It contains headings and documentation strings for the functions in both parts of this exercise. A legal whole number string consists entirely of digits. Luckily strings have an isdigit method, which is True when a nonempty string consists entirely of digits, so '2397'.isdigit() returns True, and '23a'.isdigit() returns False, exactly corresponding to the situations when the string represents a whole number! In both parts be sure to test carefully. Not only confirm that all appropriate strings return True; also be sure that you return False for all sorts of bad strings. Recognizing an integer string is more involved, since it can start with a minus sign (or not). Hence the isdigit method is not enough by itself. This part is the most straightforward if you have worked on the sections String Indices and String Slices. An alternate approach works if you use the count method from Object Orientation, and some methods from this section.
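The slicing approach just described (strip an optional leading minus sign, then rely on isdigit) can be sketched as follows. This is one possible solution sketch, not the exercise's official answer from isNumberStringStub.py:

```python
def isIntStr(s):
    '''Return True if s represents an integer: an optional leading
    minus sign followed entirely by digits.  One possible approach,
    using slicing plus the isdigit method.'''
    if s.startswith('-'):
        s = s[1:]          # drop the sign; the rest must be all digits
    return s.isdigit()     # False for '' too, so a bare '-' is rejected

print(isIntStr('-123'))    # True
print(isIntStr('2397'))    # True
print(isIntStr('23a'))     # False
print(isIntStr('-'))       # False: nothing after the sign
```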
Complete the function isIntStr. Complete the function isDecimalStr, which introduces the possibility of a decimal point (though a decimal point is not required). The string methods mentioned in the previous part remain useful.

[1] This is an improvement that is new in Python 3.
[2] "In this case do ___; otherwise", "if ___, then", "when ___ is true, then", "___ depends on whether",
[3] If you divide an even number by 2, what is the remainder? Use this idea in your if condition.
[4] 4 tests to distinguish the 5 cases, as in the previous version
[5] Once again, you are calculating and returning a Boolean result. You do not need an if-else statement.

© Copyright 2019, Dr. Andrew N. Harrington. Last updated on Jan 05, 2020. Created using Sphinx 1.3.1+. | 2026-01-13T09:30:35 |
http://www.videolan.org/vlc/releases/3.0.13.html | VLC 3.0.13 Vetinari - VideoLAN

VLC 3.0.13 Vetinari
VLC 3.0.13 is the fourteenth version of the "Vetinari" branch of our popular media player.
Hardware accelerated decoding for HD and UHD
Supports HDR and HDR tone-mapping
360° video navigation
Chromecast streaming
Optimized for iPhone X
Faster version for UWP and XBox One
Get VLC now!

Version 3.0
3.0.13 Fixes
VLC 3.0.13 is the fourteenth update of "Vetinari":
Fix artifacts in HLS streams
Fix MP4 audio support regressions
Add SSA text scaling support
Add NFSv4 support
Improve SMB2 integration
Improve Direct3D11 rendering smoothness
Add mousewheel horizontal axis control
Multiple crash fixes
And some security issues
Read the Changelog.

3.0 Highlights
VLC 3.0 "Vetinari" is a new major update of VLC.
VLC 3.0 activates hardware decoding by default, to get 4K and 8K playback! It supports 10bits and HDR.
VLC supports 360 video and 3D audio, up to Ambisonics 3rd order.
Allows audio passthrough for HD audio codecs.
Can stream to Chromecast devices, even in formats not supported natively.
Can play Blu-Ray Java menus: BD-J.
VLC supports browsing of local network drives and NAS.
Read the Changelog.

VLC 3.0 playing 8K 48fps 360 video on Android Galaxy S8, from VideoLAN on Vimeo.
VLC 3.0 playing 8k60 on Windows 10 using i7 GPU, from VideoLAN on Vimeo.

3.0 Features
Core
Network browsing for distant filesystems (SMB, FTP, SFTP, NFS...)
HDMI passthrough for Audio HD codecs, like E-AC3, TrueHD or DTS-HD
12bits codec and extended colorspaces (HDR)
Stream to distant renderers, like Chromecast
360 video and 3D audio playback with viewpoint change
Support for Ambisonics audio and more than 8 audio channels
Subtitles size modification during playback
Secure passwords storage

Acceleration
Hardware decoding and display on all platforms
HEVC hardware decoding on Windows, using DxVA2 and D3D11
HEVC hardware decoding using OMX and MediaCodec (Android)
MPEG-2, VC1/WMV3 hardware decoding on Android
Important improvements for the MMAL decoder and output for rPI and rPI2
HEVC and H.264 hardware decoding for macOS and iOS based on VideoToolbox
New VA-API decoder and rendering for Linux

Codecs
BD-Java menus and overlay in Blu-Ray
Experimental AV1 video and Daala video decoders
OggSpots video decoder
New MPEG-1 & 2 audio layer I, II, III + MPEG 2.5 decoder based on libmpg123
New BPG decoder based on libbpg
TDSC, Canopus HQX, Cineform, SpeedHQ, Pixlet, QDMC and FMVC decoders
TTML subtitles support, including EBU-TT variant
Rewrite of webVTT subtitles support, including CSS style support
BluRay text subtitles (HDMV) decoder
Support for ARIB-B24, CEA-708
New decoder for MIDI on macOS, iOS and Windows

Containers
Rework of the MP4 demuxer: including 608/708, Flip4Mac, XiphQT, VP8, TTML mappings
Rework of the TS demuxer: including Opus, SCTE-18, ARIB mappings
HD-DVD .evo support
Rework of the PS demuxer, supporting HEVC, improving compatibility of broken files
Improvements on MKV, including support for DVD-menus and FFv1, and faster seeking
Support for Chained-Ogg, raw-HEVC and improvements for Flac
Support for Creative ADPCM in AVI and VOC files
Improved metadata formats in most file formats

Protocols and devices
Full support for Bluray Menus (BD-J) and Bluray ISO
Rewrite of Adaptive Streaming protocols support
Support for HLSv4 to HLSv7, including MP4 and ID3 cases
Rewrite of DASH support, including MPEG2TS and
ISOBMFF
Support SAT>IP devices, for DVB-S via IP networks
Support for HTTP 2.0
Support NFS, SMB and SFTP shares, with browsing
Support for SRT streaming protocol

Stream output and encoding
Support for streaming to Chromecast devices
Support for VP8 and VP9 encoding through libvpx
Support for streaming Opus inside TS
Support for mp4 fragmented muxing
Improvements for x265 encoding

Video outputs and filters
OpenGL as Linux/BSD default video output
Improvements in OpenGL output: direct displaying and HDR tonemapping
Rework of the Android video outputs
New Direct3D11 video output supporting both Windows desktop and WinRT modes
HDR10 support in Direct3D11 with Windows 10 Fall Creator Update
Hardware deinterlacing on the rPI, using MMAL
Video filter to convert between fps rates
Hardware accelerated deinterlacing/adjust/sharpen/chroma with VA-API
Hardware accelerated adjust/invert/posterize/sepia/sharpen with CoreImage
Hardware accelerated deinterlacing/adjust/chroma with D3D9 and D3D11

Audio outputs and filters
Complete rewrite of the AudioTrack Android output
New Tizen audio output
HDMI/SPDIF pass-through support for WASAPI (AC3/DTS/DTSHD/EAC3/TRUEHD)
Support EAC3 and TRUEHD pass-through for PulseAudio
Rework of the AudioUnit modules to share more code between iOS and macOS
SoX Resampler library audio filter module (converter and resampler)
Ambisonics audio renderer, supporting up to 3rd order
Binauralizer audio filter, working with Ambisonics or 5.1/7.1 streams
Pitch shifting module

OS Versions
Windows XP ➔ 10 RS3
macOS 10.7 ➔ 10.13
iOS 7 ➔ 11
Android 2.3 ➔ 8.1
Android TV, Chromebooks with Play Store
Windows RT 8.1, Windows Phone 8.1
Windows 10 Mobile, Xbox One, Windows Store
GNU/Linux, Ubuntu, *BSD

Android specific
Chromecast support from your phone
HEVC hardware decoding using MediaCodec
Android Auto with voice actions
Available on all Android TV, Chromebooks & DeX
Support for Picture-in-Picture
Playlist files detection

VLC SDK - libVLC
New bindings for C++ and C++/CX
New input-from-memory to implement custom protocols or DRM
Support for ChromeCast and Renderer targets
Improve API for servers discovery
New API for dialogs, notably for HTTPS warnings
New API to manage slaves inputs, including subtitles over the network
Improve codec, format descriptions and associated metadata
Improve EPG events API
Better support for Android applications, native and Java ones

Download VLC: Windows (Version 3.0.13), Android, macOS (Version 3.0.13, 64bits), iOS, Windows Store and UWP, Windows Phone, or get the source. Linux: ask your favorite packager for VLC 3.0!

Related links: Changelog. For any questions related to this release, please contact us.

VideoLAN, VLC, VLC media player and x264 are trademarks internationally registered by the VideoLAN non-profit organization. VideoLAN software is licensed under various open-source licenses: use and distribution are defined by each software license. | 2026-01-13T09:30:35 |
https://support.microsoft.com/et-ee/account-billing/kaotsil%C3%A4inud-windowsi-seadme-leidmine-ja-lukustamine-890bf25e-b8ba-d3fe-8253-e98a12f26316 | Find and lock a lost Windows device - Microsoft Support
Find and lock a lost Windows device
Applies to: Microsoft account, Windows 10, Windows 11, Microsoft account dashboard

The Find my device feature helps you locate a lost or stolen Windows 10 or Windows 11 device.
To use this feature, sign in to your device with a Microsoft account and make sure you are an administrator on it. The feature works when location is turned on for your device, even if other users of the device have turned off location settings in their apps. When you try to locate the device, its users see a notification in the notification area. The setting works on any Windows device, such as a desktop, laptop, Surface, or Surface Pen. Before you can use it, it must be turned on. You can't use it with a work or school account, and it doesn't work on iOS or Android devices or on Xbox One consoles. (There is separate guidance for what to do if your Xbox is stolen.)

Turn on Find my device
While setting up a new device, you can choose to turn Find my device on or off. If you turned it off during setup and now want to turn it on, make sure your Windows device is connected to the internet so it can send its location, has enough battery charge, and that you are signed in to the device with your Microsoft account. On the device you want to change:
Windows 11: Select Start > Settings > Privacy & security > Find my device.
Windows 10: Select Start > Settings > Update & security > Find my device.
Open Find my device settings

Find your Windows device
Go to https://account.microsoft.com/devices and sign in.
Select the Find my device tab.
Choose the device you want to find, and then select Find to see a map showing your device's location.
Note: You can locate a shared device only if you have an administrator account on it. To check whether you are an administrator, on the shared device select Start > Settings > Account > Your info.

Lock your Windows device
When you find your device on the map, select Lock > Next. Once the device is locked, you can reset your password for added security.
For more information about passwords, see Change or reset your Windows password.
© Microsoft 2026 | 2026-01-13T09:30:35 |
https://young-programmers.blogspot.com/2009/07/twitters-doug-williams-visits-my.html#main | Young Programmers Podcast: Twitter's Doug Williams Visits My Programming Class

Young Programmers Podcast: a video podcast for computer programmers in grades 3 and up. We learn about Scratch, Tynker, Alice, Python, Pygame, and Scala, and interview interesting programmers. From professional software developer and teacher Dave Briccetti, and many special guests.

Sunday, July 19, 2009
Twitter's Doug Williams Visits My Programming Class
Twitter's Doug Williams describes how he got started programming. See "Twitter's Doug Williams Visits My Programming Class": http://briccetti.blogspot.com/2009/07/twitters-doug-williams-visits-my.html
at 9:13 PM. Labels: guest, interview, twitter
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/id_id/AmazonS3/latest/userguide/bucketnamingrules.html | General purpose bucket naming rules - Amazon Simple Storage Service

General purpose bucket naming rules
When you create a general purpose bucket, make sure you consider the length, valid characters, formatting, and uniqueness of the bucket name. The following sections provide information about naming general purpose buckets, including naming rules, best practices, and an example of creating a general purpose bucket with a name that includes a globally unique identifier (GUID). For information about object key names, see Creating object key names. To create a general purpose bucket, see Creating a general purpose bucket.

General purpose bucket naming rules
The following naming rules apply to general purpose buckets:
Bucket names must be between 3 (min) and 63 (max) characters long.
Bucket names can consist only of lowercase letters, numbers, periods (.), and hyphens (-).
Bucket names must begin and end with a letter or number.
Bucket names must not contain two adjacent periods.
Bucket names must not be formatted as an IP address (for example, 192.168.5.4).
Bucket names must not start with the prefix xn--.
Bucket names must not start with the prefix sthree-.
Bucket names must not start with the prefix amzn-s3-demo-.
Bucket names must not end with the suffix -s3alias.
This suffix is reserved for access point alias names. For more information, see Access point aliases.
Bucket names must not end with the suffix --ol-s3. This suffix is reserved for Object Lambda Access Point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda Access Point.
Bucket names must not end with the suffix .mrap. This suffix is reserved for Multi-Region Access Point names. For more information, see Amazon S3 Multi-Region Access Point naming rules.
Bucket names must not end with the suffix --x-s3. This suffix is reserved for directory buckets. For more information, see Directory bucket naming rules.
Bucket names must not end with the suffix --table-s3. This suffix is reserved for S3 Tables buckets. For more information, see Amazon S3 table bucket and namespace naming rules.
Buckets used with Amazon S3 Transfer Acceleration can't have periods (.) in their names. For more information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration.
Important: Bucket names must be unique across all AWS accounts in all the AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (commercial Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions). A bucket name cannot be used by another AWS account in the same partition until the bucket is deleted. After you delete a bucket, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and could therefore receive requests that were intended for the deleted bucket. If you want to prevent this, or if you want to continue to use the same bucket name, don't delete the bucket.
We recommend that you empty the bucket and keep it, and instead block any bucket requests as needed. For buckets that are no longer actively used, we recommend emptying the bucket of all objects to minimize costs while retaining the bucket itself. When you create a general purpose bucket, you choose its name and the AWS Region to create it in. After you create a general purpose bucket, you can't change its name or Region. Don't include sensitive information in bucket names. Bucket names are visible in the URLs that point to the objects in the bucket.
Note: Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names that were up to 255 characters long and included uppercase letters and underscores. Beginning March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules that apply in all other Regions.

Example general purpose bucket names
The following bucket names show examples of which characters are allowed in general purpose bucket names: a-z, 0-9, and hyphens (-). The reserved prefix amzn-s3-demo- is used here for illustration only. Because it is a reserved prefix, you can't create bucket names that begin with amzn-s3-demo-.
amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111
amzn-s3-demo-bucket
The following example bucket names are valid but not recommended for uses other than static website hosting because they contain periods (.):
example.com
www.example.com
my.example.s3.bucket
The following example bucket names are invalid:
amzn_s3_demo_bucket (contains underscores)
AmznS3DemoBucket (contains uppercase letters)
amzn-s3-demo-bucket- (begins with the amzn-s3-demo- prefix and ends with a hyphen)
example..com (contains two consecutive periods)
192.168.5.4 (matches the format of an IP address)

Best practices
When naming your general purpose buckets, consider the following bucket naming best practices.
Choose a bucket naming scheme that is unlikely to cause naming conflicts
If your application automatically creates buckets, choose a bucket naming scheme that is unlikely to cause naming conflicts. Ensure that your application logic will choose a different bucket name if its chosen name is already taken.
Add globally unique identifiers (GUIDs) to bucket names
We recommend creating bucket names that are not predictable. Don't write code assuming your chosen bucket name is available unless you have already created the bucket. One method for creating unpredictable bucket names is to append a globally unique identifier (GUID) to your bucket name, for example, amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111. For more information, see Creating a bucket that uses a GUID in the bucket name.
Avoid using periods (.) in bucket names
For best compatibility, we recommend that you avoid using periods (.) in bucket names, except for buckets that are used only for static website hosting. If you include periods in a bucket's name, you can't use virtual-host-style addressing over HTTPS, unless you perform your own certificate validation. The security certificates used for virtual hosting of buckets don't work for buckets with periods in their names. This limitation doesn't affect buckets used for static website hosting, because static website hosting is only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting of general purpose buckets. For information about website hosting, see Hosting a static website using Amazon S3.
Choose a relevant name
When you name a bucket, we recommend that you choose a name that is relevant to you or your business. Avoid using names associated with others. For example, avoid using AWS or Amazon in your bucket name.
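The naming rules listed above can be collected into a small validation sketch. This is an illustrative check written for this page, not an official AWS utility, and it covers only the rules stated here (it does not query AWS for name availability):

```python
import re

# Illustrative validator for the general purpose bucket naming rules
# described above -- not an official AWS utility.
RESERVED_PREFIXES = ('xn--', 'sthree-', 'amzn-s3-demo-')
RESERVED_SUFFIXES = ('-s3alias', '--ol-s3', '.mrap', '--x-s3', '--table-s3')
IP_PATTERN = re.compile(r'^\d{1,3}(\.\d{1,3}){3}$')

def is_valid_bucket_name(name):
    if not 3 <= len(name) <= 63:
        return False                 # 3-63 characters long
    if not re.fullmatch(r'[a-z0-9.-]+', name):
        return False                 # lowercase letters, digits, periods, hyphens only
    if not (name[0].isalnum() and name[-1].isalnum()):
        return False                 # must begin and end with a letter or number
    if '..' in name:
        return False                 # no two adjacent periods
    if IP_PATTERN.match(name):
        return False                 # must not look like an IP address
    if name.startswith(RESERVED_PREFIXES) or name.endswith(RESERVED_SUFFIXES):
        return False                 # reserved prefixes and suffixes
    return True

print(is_valid_bucket_name('my.example.s3.bucket'))  # True (valid, though periods are discouraged)
print(is_valid_bucket_name('192.168.5.4'))           # False: IP address format
print(is_valid_bucket_name('AmznS3DemoBucket'))      # False: uppercase letters
```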
Don't delete buckets so that you can reuse the bucket name
If a bucket is empty, you can delete it. After you delete a bucket, its name becomes available for reuse. However, you are not guaranteed to be able to reuse the name immediately, or at all. After you delete a bucket, some time might pass before you can reuse the name. In addition, another AWS account might create a bucket with that name before you can reuse it. After you delete a general purpose bucket, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and could therefore receive requests that were intended for the deleted general purpose bucket. If you want to prevent this, or if you want to continue to use the same general purpose bucket name, don't delete the general purpose bucket. We recommend that you empty the bucket and keep it, and instead block any bucket requests as needed.

Creating a bucket that uses a GUID in the bucket name
The following examples show how to create a general purpose bucket that uses a GUID at the end of the bucket name.

AWS CLI
The following example creates a general purpose bucket in the US West (N. California) Region (us-west-1) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the user input placeholders with your own information.

aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1-$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1

The following example shows you how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region (us-east-1) by using the AWS SDK for Java. To use this example, replace the user input placeholders with your own information.
For information about other AWS SDKs, see Tools to Build on AWS.

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import java.util.UUID;

public class CreateBucketWithUUID {
    public static void main(String[] args) {
        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();
        String bucketName = "amzn-s3-demo-bucket"
                + UUID.randomUUID().toString().replace("-", "");
        CreateBucketRequest createRequest = new CreateBucketRequest(bucketName);
        System.out.println(bucketName);
        s3.createBucket(createRequest);
    }
}
```
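For comparison, the same pattern can be sketched in Python. This sketch only builds the arguments; the helper name is hypothetical, and in practice you would pass the returned dictionary to boto3's `create_bucket` (for example, `boto3.client("s3", region_name="us-west-1").create_bucket(**kwargs)`):

```python
import uuid

def create_bucket_kwargs(base_name: str, region: str) -> dict:
    """Build create_bucket arguments with a GUID-suffixed, unpredictable name."""
    # uuid4().hex yields 32 lowercase hex characters, hyphens already removed.
    kwargs = {"Bucket": base_name + uuid.uuid4().hex}
    # Outside us-east-1, the Region must be supplied as a LocationConstraint.
    if region != "us-east-1":
        kwargs["CreateBucketConfiguration"] = {"LocationConstraint": region}
    return kwargs

print(create_bucket_kwargs("amzn-s3-demo-bucket-", "us-west-1"))
```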
Tutorial: Using an Amazon S3 trigger to invoke a Lambda function

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

This tutorial shows how to:

1. Create an Amazon S3 bucket.
2. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.
3. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.
4. Test your function, first with a test event, and then using the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.

Create an Amazon S3 bucket

To create an Amazon S3 bucket:

1. Open the Amazon S3 console and select the General purpose buckets page.
2. Choose the AWS Region closest to your geographic location. You can change your Region using the drop-down list at the top of the screen.
Later in the tutorial, you must create your Lambda function in the same Region.

3. Choose Create bucket.
4. Under General configuration, do the following:
   a. For Bucket type, make sure General purpose is selected.
   b. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, periods (.), and hyphens (-).
5. Leave all other options at their default values and choose Create bucket.

Upload a test object to your bucket

To upload a test object:

1. Open the Buckets page of the Amazon S3 console and choose the bucket you created in the previous step.
2. Choose Upload.
3. Choose Add files and select the object you want to upload. You can select any file (for example, HappyFace.jpg).
4. Choose Open, then choose Upload.

Later in the tutorial, you'll test your Lambda function using this object.

Create a permissions policy

Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.

To create the policy:

1. Open the Policies page of the IAM console.
2. Choose Create policy.
3. Choose the JSON tab, and then paste the following custom policy into the JSON editor.

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}
```

4. Choose Next: Tags.
5. Choose Next: Review.
6. Under Review policy, for the policy Name, enter s3-trigger-tutorial.
7. Choose Create policy.
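As a sanity check, the same policy document can be constructed programmatically before pasting it into the console. This is an illustrative sketch (the helper name is hypothetical); it only builds the JSON and does not call IAM:

```python
import json

def build_tutorial_policy() -> str:
    """Return the tutorial's permissions policy as a JSON string."""
    policy = {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                "Action": [
                    "logs:PutLogEvents",
                    "logs:CreateLogGroup",
                    "logs:CreateLogStream",
                ],
                # Allow writing to any CloudWatch Logs log group.
                "Resource": "arn:aws:logs:*:*:*",
            },
            {
                "Effect": "Allow",
                # Allow reading objects from any S3 bucket.
                "Action": ["s3:GetObject"],
                "Resource": "arn:aws:s3:::*/*",
            },
        ],
    }
    return json.dumps(policy, indent=2)

print(build_tutorial_policy())
```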
Create an execution role

An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, you create an execution role using the permissions policy you created in the previous step.

To create an execution role and attach your custom permissions policy:

1. Open the Roles page of the IAM console.
2. Choose Create role.
3. For the trusted entity type, select AWS service, and for the use case, choose Lambda.
4. Choose Next.
5. In the policy search box, enter s3-trigger-tutorial.
6. In the search results, select the policy that you created (s3-trigger-tutorial), and then choose Next.
7. Under Role details, for Role name, enter lambda-s3-trigger-role, then choose Create role.

Create the Lambda function

Create a Lambda function in the console using the Python 3.13 runtime.

To create the Lambda function:

1. Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 bucket. You can change your Region using the drop-down list at the top of the screen.
2. Choose Create function.
3. Choose Author from scratch.
4. Under Basic information, do the following:
   a. For Function name, enter s3-trigger-tutorial.
   b. For Runtime, choose Python 3.13.
   c. For Architecture, choose x86_64.
5. In the Change default execution role tab, do the following:
   a. Expand the tab, then choose Use an existing role.
   b. Select the lambda-s3-trigger-role that you created earlier.
6. Choose Create function.

Deploy the function code

This tutorial uses the Python 3.13 runtime, but example code files are also provided for other runtimes.
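Behind the console steps above, what lets Lambda assume the role is its trust policy. The console generates this document for you when you pick the Lambda use case; the sketch below (with a hypothetical helper name) just shows what that trust relationship looks like:

```python
import json

def lambda_trust_policy() -> dict:
    """Trust policy allowing the Lambda service to assume an execution role."""
    return {
        "Version": "2012-10-17",
        "Statement": [
            {
                "Effect": "Allow",
                # The Lambda service principal is the trusted entity.
                "Principal": {"Service": "lambda.amazonaws.com"},
                "Action": "sts:AssumeRole",
            }
        ],
    }

print(json.dumps(lambda_trust_policy(), indent=2))
```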
You can select the tab in the following box to see the code for the runtime you're interested in. The Lambda function retrieves the key name of the uploaded object and the bucket name from the event parameter it receives from Amazon S3. The function then uses the get_object method of the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.

To deploy the function code:

1. Select the Python tab in the following box and copy the code.

.NET (SDK for .NET)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using .NET:

```csharp
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System;
using System.Threading.Tasks;
using System.Web;
using Amazon.Lambda.Core;
using Amazon.Lambda.S3Events;
using Amazon.S3;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}
```

Go (SDK for Go V2)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Go:

```go
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s",
			bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}
```

Java (SDK for Java 2.x)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Java:

```java
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey
                    + " of type " + headObject.contentType());

            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}
```

JavaScript (SDK for JavaScript (v3))

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using JavaScript:
```javascript
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  try {
    const { ContentType } = await client.send(
      new HeadObjectCommand({
        Bucket: bucket,
        Key: key,
      }),
    );
    console.log("CONTENT TYPE:", ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};
```

Consuming an S3 event with Lambda using TypeScript:

```typescript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from "aws-lambda";
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log("CONTENT TYPE:", ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};
```

PHP (SDK for PHP)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using PHP:
```php
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket
                    . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();

return new Handler($logger);
```

Python (SDK for Python (Boto3))

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Python:

```python
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'],
                                    encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist '
              'and your bucket is in the same region as this '
              'function.'.format(key, bucket))
        raise e
```

Ruby (SDK for Ruby)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby:

```ruby
require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'],
                                      Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end
```

Rust (SDK for Rust)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust:

```rust
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}
```

2. In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created.
3. In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger:

1. In the Function overview pane, choose Add trigger.
2. Select S3.
3. Under Bucket, select the bucket you created earlier in the tutorial.
4. Under Event types, make sure that All object create events is selected.
5. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for both input and output is not recommended.
6. Choose Add.

Note: When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message, indicating that your trigger couldn't be created:

An error occurred when creating the trigger: Unable to validate the following destination configurations.

You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

Test your Lambda function with a test event

To test your Lambda function with a test event:

1. On the Lambda console page for your function, choose the Test tab.
2. For Event name, enter MyTestEvent.
3. In the Event JSON, paste the following test event. Make sure to replace the following values:
   - Replace us-east-1 with the Region in which you created your Amazon S3 bucket.
   - Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
   - Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).
```json
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}
```

4. Choose Save.
5. Choose Test.
6. If your function runs successfully, you'll see output similar to the following in the Execution results tab.

```
Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6
```

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.
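Note that the object key in the test event above is URL-encoded (test%2Fkey), which is exactly why the Python handler runs it through `urllib.parse.unquote_plus` before calling S3. That decoding step can be tried in isolation against a stripped-down version of the sample event (the helper name here is illustrative):

```python
import urllib.parse

def extract_bucket_and_key(event):
    """Pull the bucket name and URL-decoded object key out of an S3 event record."""
    record = event["Records"][0]["s3"]
    bucket = record["bucket"]["name"]
    # S3 URL-encodes object keys in event notifications; '+' stands for a space.
    key = urllib.parse.unquote_plus(record["object"]["key"], encoding="utf-8")
    return bucket, key

sample = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                              "object": {"key": "test%2Fkey"}}}]}
print(extract_bucket_and_key(sample))  # → ('amzn-s3-demo-bucket', 'test/key')
```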
To upload an object to your Amazon S3 bucket:

1. Open the Buckets page of the Amazon S3 console and choose the name of the bucket you created earlier.
2. Choose Upload.
3. Choose Add files and use the file selector to choose the object you want to upload. This object can be any file you choose.
4. Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs:

1. Open the CloudWatch console. Make sure you're working in the same AWS Region in which you created your Lambda function. You can change your Region using the drop-down list at the top of the screen.
2. Choose Logs, then choose Log groups.
3. Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
4. Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

```
2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg
```

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function:

1. Open the Functions page of the Lambda console.
2. Select the function that you created.
3. Choose Actions, Delete.
4. Type confirm in the text input field, then choose Delete.

To delete the execution role:

1. Open the Roles page of the IAM console.
2. Select the execution role that you created.
3. Choose Delete.
4. To confirm, enter the name of the role in the text input field and choose Delete.

To delete the S3 bucket:

1. Open the Amazon S3 console.
2. Select the bucket you created.
3. Choose Delete.
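The log group name used in step 3 follows a fixed convention, so it can be derived from the function name. A trivial sketch (the helper name is illustrative; fetching the actual events would use the CloudWatch Logs API, such as `filter_log_events`, which is not shown here):

```python
def log_group_for(function_name: str) -> str:
    """Lambda writes its output to a log group named /aws/lambda/<function name>."""
    return f"/aws/lambda/{function_name}"

print(log_group_for("s3-trigger-tutorial"))  # → /aws/lambda/s3-trigger-tutorial
```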
4. Enter the name of the bucket in the text input field.
5. Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to the bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
Administration and HR - Penneo

Reduce manual work and increase employee satisfaction

Penneo enables HR and administrative teams to streamline document-signing processes, simplify GDPR compliance, and free up valuable time to focus on improving the employee experience.

Tired of administrative tasks?

Outdated document-signing processes can seriously hurt efficiency, resulting in wasted time, more errors, and unnecessary administrative burdens. Manual tasks such as printing, scanning, and filing documents consume valuable hours and prevent HR teams from focusing on high-impact initiatives such as talent development and employee engagement.
This inefficiency not only slows down operations but also creates compliance risks and a frustrating experience for employees and candidates alike.

Integrate Penneo with your existing tools and let them work for you

By connecting Penneo to your payroll, recruitment, or document-management software, you can reduce errors, speed up approvals, and free up valuable time. Penneo offers pre-built integrations with software such as Talentech, Emply, Timeplan, 4human, Documendo, and M-Files. In addition, you can build custom integrations tailored to your specific needs using Penneo's open API.

Automate document signing and improve the employee experience

Penneo minimizes the workload involved in document signing and contributes to a better employee experience by:

- Automating the signing of employment contracts, bonus agreements, and other required documentation.
- Offering secure, legally binding digital signatures with eIDs or passports.
- Letting employees and candidates sign documents anywhere, on any device.

Reduce the administrative burden and secure your documents

With numerous integrations and automated signing flows, HR staff can eliminate repetitive tasks, speed up document-signing processes, and improve the employee experience. Penneo complies with the GDPR, holds ISO 27001 and 27701 certifications, and uses encryption to protect data and documents at every step. With Penneo, you can create qualified electronic signatures (QES) using passports, itsme®, BankID Norway, or .beID, as well as advanced electronic signatures (AdES) with MitID, MitID Erhverv, or BankID Sweden.

Skip the hassle of chasing signatures with automatic reminders. By reducing manual follow-up, you can speed up signing processes and free up time.
Penneo lets HR teams track when documents are sent, opened, signed, or completed, ensuring full visibility into the signing process. Identify bottlenecks early and keep workflows on schedule.

Modernizing your HR processes drives efficiency, increases employee engagement, and strengthens your organization's reputation as a forward-thinking employer.

Over 3,000 companies trust Penneo.

PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766
https://hackmd.io/blog?utm_source=blog&utm_medium=footer | The HackMD Blog: Home Blog Product Company Changelog Education Sign in Sign in Get HackMD free # en # company 2025 in Review: New Features, Big Wins, and What Lies Ahead Dec 31, 2025 By Chaseton Collins Read more Recent posts # en # use-case Build Better AI Workflows: Creating and Organizing Claude Skills in HackMD A practical guide to understanding Claude Skills and learning how to build, structure, and organize them inside HackMD using templates, folders, and reusable markdown systems. Dec 10, 2025 By Chaseton Collins # en # company Touch down at JSConf 2025: HackMD connects with the JavaScript community A fun recap of HackMD’s experience at JSConf 2025 featuring photos, community moments, event highlights, and insights from the people shaping the future of JavaScript. Nov 19, 2025 By Chaseton Collins # en # newsletter Harvest new ideas with HackMD this November This season of gratitude, we’re celebrating the ideas that bring people together. Discover how HackMD’s latest updates make it easier to share knowledge, collaborate seamlessly, and stay organized through the holidays. Nov 12, 2025 By Chaseton Collins # en # use-case From Wallet to Workspace: Decentralize using Sign-in with ETH Reintroducing Sign in with Wallet on HackMD. Use your Ethereum wallet to access docs with ownership, privacy, and speed. Step-by-step guide to connect included. Nov 5, 2025 By Chaseton Collins # en # product Shareable Links: A closer look at the newest update on HackMD Explore HackMD's latest update, Shareable Links. Now you can generate a secure, customizable link to invite anyone directly. Set permissions like read-only or full-edit access, add an expiration date, and even limit how many times the link can be used. Oct 27, 2025 By Chaseton Collins # en # newsletter Fall into focus with HackMD this October Did you miss HackMD's exciting updates over the past month? Catch up on that and more in October's Markdown Memo. 
Oct 15, 2025 By Chaseton Collins Subscribe to our newsletter Build with confidence. Never miss a beat. Learn about the latest product updates, company happenings, and technical guides in our monthly newsletter. Subscribe Changelog View Changelog Improved Tag Management Manage Tags in Bulk from the Sidebar Sep 23, 2025 Profile Overhaul: Pin Notes, Categories & Connections Your new profile: The business card for your expertise. Sep 4, 2025 Cite Paragraphs, Stay Connected Quickly add quotes with automatic footnotes and discover how your ideas inspire others. Aug 5, 2025 Use Guided Comments to Spark Better Feedback Guided Comments gently prompt visitors, making feedback easier and more meaningful. Jul 8, 2025 Get started for free Play around with it first. Pay and add your team later. Get started for free Build together with the ultimate Markdown editor. Learning Features Tutorial book Resources Blog Changelog Enterprise Pricing Company About Press Kit Trust Center Terms of use Privacy policy English 中文 日本語 © 2026 HackMD. All Rights Reserved. | 2026-01-13T09:30:35
https://penneo.com/da/general-administration/ | Administration and HR - Penneo Products Penneo Sign Validator Why Penneo Integrations Solutions Use cases Digital signing Document management Fill and sign PDF forms Signing workflow automation eIDAS compliance Industries Audit and accounting Finance and banking Legal services Real estate Administration and HR Pricing Resources Knowledge hub Trust Center Product updates SIGN Help Center KYC Help Center System status LOG IN Penneo Sign Log in to Penneo Sign. LOG IN Penneo KYC Log in to Penneo KYC. LOG IN BOOK A MEETING FREE TRIAL DA EN NO FR NL Reduce manual work and increase employee satisfaction Penneo enables HR and administrative teams to streamline document signing processes, simplify GDPR compliance, and free up valuable time to focus on improving the employee experience. EXPLORE PENNEO Tired of administrative tasks? Outdated document signing processes can seriously hurt efficiency, resulting in wasted time, more errors, and unnecessary administrative burdens. Manual tasks such as printing, scanning, and filing documents eat up valuable hours and keep HR teams from focusing on high-impact initiatives like talent development and employee engagement.
This inefficiency not only slows down operations but also creates compliance risks and a frustrating experience for employees and candidates alike. See how Penneo can help you Integrate Penneo with your existing tools and let them work for you By connecting Penneo to your payroll, recruitment, or document management software, you can reduce errors, speed up approvals, and free up valuable time. Penneo offers pre-built integrations with software such as Talentech, Emply, Timeplan, 4human, Documendo, and M-Files. In addition, you can build custom integrations tailored to your specific needs using Penneo's open API. See all features Automate document signing and improve the employee experience Penneo minimizes the workload involved in document signing and contributes to a better employee experience by: Automating the signing of employment contracts, bonus agreements, and other required documentation. Offering secure and legally binding digital signatures with eIDs or passports. Allowing employees and candidates to sign documents anywhere, on any device. See all features Reduce the administrative burden and secure your documents With numerous integrations and automated signing flows, HR staff can eliminate repetitive tasks, speed up document signing processes, and improve the employee experience. Penneo is GDPR compliant, holds ISO 27001 and 27701 certifications, and uses encryption to protect data and documents at every step. With Penneo, you can create qualified electronic signatures (QES) using passports, itsme®, BankID Norway, or .beID, as well as advanced electronic signatures (AdES) with MitID, MitID Erhverv, or BankID Sweden. Skip the hassle of chasing signatures with automatic reminders. By reducing manual follow-up, you can speed up signing processes and free up time.
Penneo lets HR teams track when documents are sent, opened, signed, or completed, ensuring full visibility into the signing process. Identify bottlenecks early and keep workflows on schedule. Modernizing your HR processes drives efficiency, increases employee engagement, and strengthens your organization's reputation as a forward-thinking employer. See how Penneo works Over 3000 companies trust Penneo See how Penneo can help you BOOK A NO-OBLIGATION MEETING Products Penneo Sign Pricing Integrations Open API Validator Why Penneo Solutions Audit and accounting Finance and banking Legal services Real estate Administration and HR Use cases Digital signing Document management Fill and sign PDF forms Signing workflow automation eIDAS compliance Resources Knowledge hub Trust Center Product updates SUPPORT SIGN Help Center KYC Help Center System status Company About us Careers Privacy policy Terms Use of cookies Accessibility Statement Whistleblower Policy Contact us PENNEO A/S - Gærtorvet 1-5, DK-1799 København V - CVR: 35633766 | 2026-01-13T09:30:35
https://docs.aws.amazon.com/it_it/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront Amazon CloudFront Documentation Developer Guide Translations are generated by machine translation; in the event of a conflict between a translation and the original English version, the English version prevails. Get started with CloudFront The topics in this section show how to start delivering your content with Amazon CloudFront. The Set up your AWS account topic describes the prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows how to configure origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows how to create a secure static website for your domain name using OAC with an Amazon S3 origin; it uses an AWS CloudFormation template for configuration and deployment. Topics Set up your AWS account Get started with a standard CloudFront distribution Get started with a standard distribution (AWS CLI) Get started with a secure static website | 2026-01-13T09:30:35
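The page above says the basic-distribution tutorial wires origin access control (OAC) to an Amazon S3 origin. As a rough configuration sketch only (not the tutorial's actual template — the resource and property names come from the CloudFormation types `AWS::CloudFront::OriginAccessControl` and `AWS::CloudFront::Distribution`, the bucket name is a placeholder, and a real template needs more settings such as the bucket policy granting CloudFront access), the core wiring might look like:

```yaml
Resources:
  ExampleOAC:
    Type: AWS::CloudFront::OriginAccessControl
    Properties:
      OriginAccessControlConfig:
        Name: example-oac
        OriginAccessControlOriginType: s3
        SigningBehavior: always          # sign every request CloudFront sends to the origin
        SigningProtocol: sigv4

  ExampleDistribution:
    Type: AWS::CloudFront::Distribution
    Properties:
      DistributionConfig:
        Enabled: true
        DefaultRootObject: index.html
        Origins:
          - Id: s3-origin
            DomainName: amzn-s3-demo-bucket.s3.amazonaws.com   # placeholder bucket
            OriginAccessControlId: !GetAtt ExampleOAC.Id
            S3OriginConfig:
              OriginAccessIdentity: ''   # empty: OAC, not a legacy OAI, authenticates requests
        DefaultCacheBehavior:
          TargetOriginId: s3-origin
          ViewerProtocolPolicy: redirect-to-https
          CachePolicyId: 658327ea-f89d-4fab-a63d-7e88639e58f6  # managed CachingOptimized policy
```

With OAC attached this way, the S3 bucket can stay private and only signed requests from the distribution reach it.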
https://support.microsoft.com/sv-se/windows/hantera-cookies-i-microsoft-edge-visa-till%C3%A5ta-blockera-ta-bort-och-anv%C3%A4nda-168dab11-0753-043d-7c16-ede5947fc64d | Manage cookies in Microsoft Edge: View, allow, block, delete, and use - Microsoft Support Related topics Windows security and privacy Overview of security and privacy Windows Security Get help with Windows Security Protect yourself with Windows Security Before you reuse, sell, or give away your Xbox or Windows PC Remove malware from your Windows PC View and delete browser history in Microsoft Edge Delete and manage cookies Securely erase valuable content when reinstalling Windows Find and lock a lost Windows device Windows privacy Get help with Windows privacy Privacy settings in Windows used by apps View your data on the privacy dashboard
Manage cookies in Microsoft Edge: View, allow, block, delete, and use Applies to Windows 10 Windows 11 Microsoft Edge Cookies are small pieces of data that the websites you visit store on your device. They serve various purposes, such as remembering sign-in details and site preferences and tracking user behavior. However, you may want to delete cookies for privacy reasons or to fix browsing problems. This article provides instructions on how to: View all cookies Allow all cookies Allow cookies from a specific site Block third-party cookies Block all cookies Block cookies from a specific site Delete all cookies Delete cookies from a specific site Delete cookies every time you close the browser Use cookies to preload pages for faster browsing View all cookies Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies, then click View all cookies and site data to see all stored cookies and related site data.
Allow all cookies Allowing cookies lets websites save and retrieve data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the toggle Allow sites to save and read cookie data (recommended) to allow all cookies. Allow cookies from a specific site Allowing cookies lets websites save and retrieve data in your browser, which can improve your browsing experience by remembering your preferences and sign-in information. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Allowed to save cookies. Select Add site to allow cookies for a specific site by entering the site's URL. Block third-party cookies If you don't want third-party websites to store cookies on your PC, you can block cookies. However, doing so may prevent some pages from displaying correctly, or you may get a message from a website telling you that you need to allow cookies to view it. Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the toggle Block third-party cookies. Block all cookies If you don't want websites to store cookies on your PC, you can block cookies. However, doing so may prevent some pages from displaying correctly, or you may get a message from a website telling you that you need to allow cookies to view it. Open the Edge browser and select Settings and more in the upper-right corner of the browser window.
Select Settings > Privacy, search, and services. Select Cookies and turn off Allow sites to save and read cookie data (recommended) to block all cookies. Block cookies from a specific site Microsoft Edge lets you block cookies from a specific site, but doing so may keep some pages from displaying correctly, or you may get a message from a website telling you that you need to allow cookies to view it. To block cookies from a specific site: Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and go to Not allowed to save and read cookies. Select Add site to block cookies for a specific site by entering the site's URL. Delete all cookies Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Delete browsing data, then select Choose what to clear next to Clear browsing data now. Choose a time range under Time range. Select Cookies and other site data, then Clear now. Note: You can also delete cookies by pressing CTRL + SHIFT + DELETE together and then continuing with steps 4 and 5. All your cookies and other site data are now deleted for the time range you selected. This signs you out of most websites. Delete cookies from a specific site Open the Edge browser and select Settings and more > Settings > Privacy, search, and services. Select Cookies, then click View all cookies and site data and search for the site whose cookies you want to delete. Select the down arrow to the right of the site whose cookies you want to delete and select Delete. The cookies for the selected site are now deleted. Repeat this step for each site whose cookies you want to delete.
Delete cookies every time you close the browser Open the Edge browser and select Settings and more > Settings > Privacy, search, and services. Select Delete browsing data, then select Choose what to clear every time you close the browser. Turn on Cookies and other site data. With this feature enabled, all cookies and other site data are deleted every time you close the Edge browser. This signs you out of most websites. Use cookies to preload pages for faster browsing Open the Edge browser and select Settings and more in the upper-right corner of the browser window. Select Settings > Privacy, search, and services. Select Cookies and turn on the toggle Preload pages for faster browsing and searching.
© Microsoft 2026 | 2026-01-13T09:30:35
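The Microsoft article above describes cookies from the browser-settings side. For illustration only, here is a minimal JavaScript sketch (the function name `parseCookieHeader` is my own, not from any page in this dump) of how a server might parse the raw `Cookie` header a browser sends — the same unparsed `name=value; name=value` format that the Lambda@Edge examples further down inspect:

```javascript
// Parse a raw Cookie header string (e.g. "First=1; Second=2") into an object.
// Values are percent-decoded; malformed fragments without "=" are skipped.
function parseCookieHeader(header) {
  const cookies = {};
  for (const part of header.split(';')) {
    const trimmed = part.trim();
    if (!trimmed) continue;
    const eq = trimmed.indexOf('=');
    if (eq === -1) continue; // not a name=value pair
    cookies[trimmed.slice(0, eq)] = decodeURIComponent(trimmed.slice(eq + 1));
  }
  return cookies;
}

console.log(parseCookieHeader('First=1; Second=2'));
// → { First: '1', Second: '2' }
```

Deleting a cookie in the settings UI simply removes the stored pair, so on the next request the browser no longer sends it in this header.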
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-generated-response-examples | Lambda@Edge example functions - Amazon CloudFront Amazon CloudFront Documentation Developer Guide General examples Generating responses: examples Query strings: examples Personalizing content by country or device type headers: examples Content-based dynamic origin selection: examples Updating error statuses: examples Accessing the request body: examples Lambda@Edge example functions See the following examples for using Lambda functions with Amazon CloudFront. Note If you choose the Node.js 18 or later runtime for your Lambda@Edge function, an index.mjs file is created automatically. To use the following code examples, rename the index.mjs file to index.js. Topics General examples Generating responses: examples Query strings: examples Personalizing content by country or device type headers: examples Content-based dynamic origin selection: examples Updating error statuses: examples Accessing the request body: examples General examples The following examples show common ways to use Lambda@Edge in CloudFront. Topics Example: A/B testing Example: Overriding a response header Example: A/B testing You can use the following example to test two different versions of an image without creating redirects or changing the URL. The example reads the cookies in the viewer request and modifies the request URL accordingly. If the viewer doesn't send a cookie with one of the expected values, the example randomly assigns the viewer one of the URLs.
Node.js 'use strict'; exports.handler = (event, context, callback) => { const request = event.Records[0].cf.request; const headers = request.headers; if (request.uri !== '/experiment-pixel.jpg') { // do not process if this is not an A-B test request callback(null, request); return; } const cookieExperimentA = 'X-Experiment-Name=A'; const cookieExperimentB = 'X-Experiment-Name=B'; const pathExperimentA = '/experiment-group/control-pixel.jpg'; const pathExperimentB = '/experiment-group/treatment-pixel.jpg'; /* * Lambda at the Edge headers are array objects. * * Client may send multiple Cookie headers, i.e.: * > GET /viewerRes/test HTTP/1.1 * > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3 * > Cookie: First=1; Second=2 * > Cookie: ClientCode=abc * > Host: example.com * * You can access the first Cookie header at headers["cookie"][0].value * and the second at headers["cookie"][1].value. * * Header values are not parsed. In the example above, * headers["cookie"][0].value is equal to "First=1; Second=2" */ let experimentUri; if (headers.cookie) { for (let i = 0; i < headers.cookie.length; i++) { if (headers.cookie[i].value.indexOf(cookieExperimentA) >= 0) { console.log('Experiment A cookie found'); experimentUri = pathExperimentA; break; } else if (headers.cookie[i].value.indexOf(cookieExperimentB) >= 0) { console.log('Experiment B cookie found'); experimentUri = pathExperimentB; break; } } } if (!experimentUri) { console.log('Experiment cookie has not been found. 
Throwing dice...'); if (Math.random() < 0.75) { experimentUri = pathExperimentA; } else { experimentUri = pathExperimentB; } } request.uri = experimentUri; console.log(`Request uri set to "${request.uri}"`); callback(null, request); }; Python import json import random def lambda_handler(event, context): request = event['Records'][0]['cf']['request'] headers = request['headers'] if request['uri'] != '/experiment-pixel.jpg': # Not an A/B Test return request cookieExperimentA, cookieExperimentB = 'X-Experiment-Name=A', 'X-Experiment-Name=B' pathExperimentA, pathExperimentB = '/experiment-group/control-pixel.jpg', '/experiment-group/treatment-pixel.jpg' ''' Lambda at the Edge headers are array objects. Client may send multiple cookie headers. For example: > GET /viewerRes/test HTTP/1.1 > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3 > Cookie: First=1; Second=2 > Cookie: ClientCode=abc > Host: example.com You can access the first Cookie header at headers["cookie"][0].value and the second at headers["cookie"][1].value. Header values are not parsed. In the example above, headers["cookie"][0].value is equal to "First=1; Second=2" ''' experimentUri = "" for cookie in headers.get('cookie', []): if cookieExperimentA in cookie['value']: print("Experiment A cookie found") experimentUri = pathExperimentA break elif cookieExperimentB in cookie['value']: print("Experiment B cookie found") experimentUri = pathExperimentB break if not experimentUri: print("Experiment cookie has not been found. Throwing dice...") if random.random() < 0.75: experimentUri = pathExperimentA else: experimentUri = pathExperimentB request['uri'] = experimentUri print(f"Request uri set to {experimentUri}") return request Example: Overriding a response header The following example shows how to change the value of a response header based on the value of another header.
Node.js export const handler = async (event) => { const response = event.Records[0].cf.response; const headers = response.headers; const headerNameSrc = 'X-Amz-Meta-Last-Modified'; const headerNameDst = 'Last-Modified'; if (headers[headerNameSrc.toLowerCase()]) { headers[headerNameDst.toLowerCase()] = [ { key: headerNameDst, value: headers[headerNameSrc.toLowerCase()][0].value, }]; console.log(`Response header "${headerNameDst}" was set to ` + `"${headers[headerNameDst.toLowerCase()][0].value}"`); } return response; }; Python import json def lambda_handler(event, context): response = event['Records'][0]['cf']['response'] headers = response['headers'] header_name_src = 'X-Amz-Meta-Last-Modified' header_name_dst = 'Last-Modified' if headers.get(header_name_src.lower()): headers[header_name_dst.lower()] = [ { 'key': header_name_dst, 'value': headers[header_name_src.lower()][0]['value'] }] print(f'Response header "{header_name_dst}" was set to ' f'"{headers[header_name_dst.lower()][0]["value"]}"') return response Generating responses: examples The following examples show how you can use Lambda@Edge to generate responses. Topics Example: Serving static content (generated response) Example: Generating an HTTP redirect (generated response) Example: Serving static content (generated response) The following example shows how to use a Lambda function to serve static website content, which reduces the load on the origin server and overall latency. Note You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers. You can also replace or remove the HTTP response body in origin response events. For more information, see Updating HTTP responses in origin response triggers.
Node.js 'use strict'; const content = ` <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Simple Lambda@Edge Static Content Response</title> </head> <body> <p>Hello from Lambda@Edge!</p> </body> </html> `; exports.handler = (event, context, callback) => { /* * Generate HTTP OK response using 200 status code with HTML body. */ const response = { status: '200', statusDescription: 'OK', headers: { 'cache-control': [ { key: 'Cache-Control', value: 'max-age=100' }], 'content-type': [ { key: 'Content-Type', value: 'text/html' }] }, body: content, }; callback(null, response); }; Python import json CONTENT = """ <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Simple Lambda@Edge Static Content Response</title> </head> <body> <p>Hello from Lambda@Edge!</p> </body> </html> """ def lambda_handler(event, context): # Generate HTTP OK response using 200 status code with HTML body. response = { 'status': '200', 'statusDescription': 'OK', 'headers': { 'cache-control': [ { 'key': 'Cache-Control', 'value': 'max-age=100' } ], "content-type": [ { 'key': 'Content-Type', 'value': 'text/html' } ] }, 'body': CONTENT } return response Example: Generating an HTTP redirect (generated response) The following example shows how to generate an HTTP redirect. Note You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers. Node.js 'use strict'; exports.handler = (event, context, callback) => { /* * Generate HTTP redirect response with 302 status code and Location header.
*/ const response = { status: '302', statusDescription: 'Found', headers: { location: [ { key: 'Location', value: 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html', }], }, }; callback(null, response); }; Python def lambda_handler(event, context): # Generate HTTP redirect response with 302 status code and Location header. response = { 'status': '302', 'statusDescription': 'Found', 'headers': { 'location': [ { 'key': 'Location', 'value': 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html' }] } } return response Query strings: examples The following examples show ways to use Lambda@Edge with query strings. Topics Example: Adding a header based on a query string parameter Example: Normalizing query string parameters to improve the cache hit ratio Example: Redirecting unauthenticated users to a sign-in page Example: Adding a header based on a query string parameter The following example shows how to get the key-value pair of a query string parameter and then add a header based on those values. Node.js 'use strict'; const querystring = require('querystring'); exports.handler = (event, context, callback) => { const request = event.Records[0].cf.request; /* When a request contains a query string key-value pair but the origin server * expects the value in a header, you can use this Lambda function to * convert the key-value pair to a header. Here's what the function does: * 1. Parses the query string and gets the key-value pair. * 2. Adds a header to the request using the key-value pair that the function got in step 1.
*/ /* Parse request querystring to get javascript object */ const params = querystring.parse(request.querystring); /* Move auth param from querystring to headers */ const headerName = 'Auth-Header'; request.headers[headerName.toLowerCase()] = [ { key: headerName, value: params.auth }]; delete params.auth; /* Update request querystring */ request.querystring = querystring.stringify(params); callback(null, request); }; Python from urllib.parse import parse_qs, urlencode def lambda_handler(event, context): request = event['Records'][0]['cf']['request'] ''' When a request contains a query string key-value pair but the origin server expects the value in a header, you can use this Lambda function to convert the key-value pair to a header. Here's what the function does: 1. Parses the query string and gets the key-value pair. 2. Adds a header to the request using the key-value pair that the function got in step 1. ''' # Parse request querystring to get dictionary/json params = { k : v[0] for k, v in parse_qs(request['querystring']).items()} # Move auth param from querystring to headers headerName = 'Auth-Header' request['headers'][headerName.lower()] = [ { 'key': headerName, 'value': params['auth']}] del params['auth'] # Update request querystring request['querystring'] = urlencode(params) return request Example: Normalizing query string parameters to improve the cache hit ratio The following example shows how to improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin: Alphabetize key-value pairs by parameter name. Change the case of key-value pairs to lowercase. For more information, see Caching content based on query string parameters.
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When you configure a distribution to forward query strings to the origin and
     * to cache based on an allowlist of query string parameters, we recommend
     * the following to improve the cache-hit ratio:
     * - Always list parameters in the same order.
     * - Use the same case for parameter names and values.
     *
     * This function normalizes query strings so that parameter names and values
     * are lowercase and parameter names are in alphabetical order.
     *
     * For more information, see:
     * https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
     */
    console.log('Query String: ', request.querystring);

    /* Parse request query string to get javascript object */
    const params = querystring.parse(request.querystring.toLowerCase());
    const sortedParams = {};

    /* Sort param keys */
    Object.keys(params).sort().forEach(key => {
        sortedParams[key] = params[key];
    });

    /* Update request querystring with normalized */
    request.querystring = querystring.stringify(sortedParams);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When you configure a distribution to forward query strings to the origin and
    to cache based on an allowlist of query string parameters, we recommend
    the following to improve the cache-hit ratio:
    - Always list parameters in the same order.
    - Use the same case for parameter names and values.

    This function normalizes query strings so that parameter names and values
    are lowercase and parameter names are in alphabetical order.
    For more information, see:
    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
    '''
    print("Query string: ", request["querystring"])

    # Parse request query string to get dictionary
    params = {k: v[0] for k, v in parse_qs(request['querystring'].lower()).items()}

    # Sort param keys
    sortedParams = sorted(params.items(), key=lambda x: x[0])

    # Update request querystring with normalized
    request['querystring'] = urlencode(sortedParams)

    return request

Example: Redirecting unauthenticated users to a sign-in page

The following example shows how to redirect users to a sign-in page if they haven't entered their credentials.

Node.js

'use strict';

function parseCookies(headers) {
    const parsedCookie = {};
    if (headers.cookie) {
        headers.cookie[0].value.split(';').forEach((cookie) => {
            if (cookie) {
                const parts = cookie.split('=');
                parsedCookie[parts[0].trim()] = parts[1].trim();
            }
        });
    }
    return parsedCookie;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /* Check for session-id in request cookie in viewer-request event,
     * if session-id is absent, redirect the user to sign in page with original
     * request sent as redirect_url in query params.
     */

    /* Check for session-id in cookie, if present then proceed with request */
    const parsedCookies = parseCookies(headers);
    if (parsedCookies && parsedCookies['session-id']) {
        callback(null, request);
        return;
    }

    /* URI encode the original request to be sent as redirect_url in query params */
    const encodedRedirectUrl = encodeURIComponent(`https://${headers.host[0].value}${request.uri}?${request.querystring}`);
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: `https://www.example.com/signin?redirect_url=${encodedRedirectUrl}`,
            }],
        },
    };
    callback(null, response);
};

Python

import urllib

def parseCookies(headers):
    parsedCookie = {}
    if headers.get('cookie'):
        for cookie in headers['cookie'][0]['value'].split(';'):
            if cookie:
                parts = cookie.split('=')
                parsedCookie[parts[0].strip()] = parts[1].strip()
    return parsedCookie

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Check for session-id in request cookie in viewer-request event,
    if session-id is absent, redirect the user to sign in page with original
    request sent as redirect_url in query params.
    '''
    # Check for session-id in cookie, if present, then proceed with request
    parsedCookies = parseCookies(headers)
    if parsedCookies and parsedCookies['session-id']:
        return request

    # URI encode the original request to be sent as redirect_url in query params
    redirectUrl = "https://%s%s?%s" % (headers['host'][0]['value'], request['uri'], request['querystring'])
    encodedRedirectUrl = urllib.parse.quote_plus(redirectUrl.encode('utf-8'))

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://www.example.com/signin?redirect_url=%s' % encodedRedirectUrl
            }]
        }
    }
    return response

Personalizing content by country or device type headers: examples

The following examples show how you can use Lambda@Edge to customize behavior based on the viewer's location or device type.

Topics

Example: Redirecting viewer requests to a country-specific URL
Example: Serving different versions of an object based on the device

Example: Redirecting viewer requests to a country-specific URL

The following example shows how to generate an HTTP redirect response with a country-specific URL and return the response to the viewer. This is useful when you want to provide country-specific responses. For example:

- If you have country-specific subdomains, such as us.example.com and tw.example.com, you can generate a redirect response when a viewer requests example.com.
- If you're streaming video but you don't have rights to stream the content in a specific country, you can redirect users in that country to a page that explains why they can't view the video.

Note the following:

- You must configure your distribution to cache based on the CloudFront-Viewer-Country header.
For more information, see Caching based on selected request headers.
- CloudFront adds the CloudFront-Viewer-Country header after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Based on the value of the CloudFront-Viewer-Country header, generate an
     * HTTP status code 302 (Redirect) response, and return a country-specific
     * URL in the Location header.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    let url = 'https://example.com/';
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'TW') {
            url = 'https://tw.example.com/';
        } else if (countryCode === 'US') {
            url = 'https://us.example.com/';
        }
    }

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: url,
            }],
        },
    };
    callback(null, response);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Based on the value of the CloudFront-Viewer-Country header, generate an
    HTTP status code 302 (Redirect) response, and return a country-specific
    URL in the Location header.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header.
             For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': url
            }]
        }
    }
    return response

Example: Serving different versions of an object based on the device

The following example shows how to serve different versions of an object based on the type of device that the user is using, for example, a mobile device or a tablet. Note the following:

- You must configure your distribution to cache based on the CloudFront-Is-*-Viewer headers. For more information, see Caching based on selected request headers.
- CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2.
          CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */
    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer']
        && headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer']
               && headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer']
               && headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer']
               && headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Serve different versions of an object based on the device type.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Is-*-Viewer headers. For more information, see
             the following documentation:
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
             https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
          2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    desktopPath = '/desktop'
    mobilePath = '/mobile'
    tabletPath = '/tablet'
    smarttvPath = '/smarttv'

    if 'cloudfront-is-desktop-viewer' in headers and headers['cloudfront-is-desktop-viewer'][0]['value'] == 'true':
        request['uri'] = desktopPath + request['uri']
    elif 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = mobilePath + request['uri']
    elif 'cloudfront-is-tablet-viewer' in headers and headers['cloudfront-is-tablet-viewer'][0]['value'] == 'true':
        request['uri'] = tabletPath + request['uri']
    elif 'cloudfront-is-smarttv-viewer' in headers and headers['cloudfront-is-smarttv-viewer'][0]['value'] == 'true':
        request['uri'] = smarttvPath + request['uri']

    print("Request uri set to %s" % request['uri'])
    return request

Content-based dynamic origin selection: examples

The following examples show how you can use Lambda@Edge to route to different origins based on information in the request.

Topics

Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin
Example: Using an origin request trigger to change the Amazon S3 origin Region
Example: Using an origin request trigger to change from an Amazon S3 origin to a custom origin
Example: Using an origin request trigger to gradually transfer traffic from one Amazon S3 bucket to another
Example: Using an origin request trigger to change the origin domain name based on the country header

Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin

This function demonstrates how an origin request trigger can be used to change from a custom origin to an Amazon S3 origin from which the content is fetched, based on request properties.
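The core of that trigger can be sketched as a plain function operating on a minimal request dictionary rather than a full Lambda@Edge event; the bucket domain here is the same placeholder the samples below use:

```python
from urllib.parse import parse_qs

def maybe_switch_to_s3(request, s3_domain='amzn-s3-demo-bucket.s3.amazonaws.com'):
    # Route to the S3 origin only when the query string asks for it.
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useS3Origin') == 'true':
        request['origin'] = {'s3': {
            'domainName': s3_domain, 'region': '', 'path': '',
            'authMethod': 'origin-access-identity', 'customHeaders': {},
        }}
        # The Host header must be updated to match the new origin's domain.
        request['headers']['host'] = [{'key': 'host', 'value': s3_domain}]
    return request
```

Requests without `useS3Origin=true` pass through untouched and go to the distribution's configured origin.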
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if S3 origin should be used, and
     * if true, sets S3 origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useS3Origin']) {
        if (params['useS3Origin'] === 'true') {
            const s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com';

            /* Set S3 origin fields */
            request.origin = {
                s3: {
                    domainName: s3DomainName,
                    region: '',
                    authMethod: 'origin-access-identity',
                    path: '',
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: s3DomainName }];
        }
    }
    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    Reads query string to check if S3 origin should be used, and
    if true, sets S3 origin properties
    '''
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useS3Origin') == 'true':
        s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com'

        # Set S3 origin fields
        request['origin'] = {
            's3': {
                'domainName': s3DomainName,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': s3DomainName}]
    return request

Example: Using an origin request trigger to change the Amazon S3 origin Region

This function demonstrates how an origin request trigger can be used to change the Amazon S3 origin from which the content is fetched, based on request properties. In this example, we use the value of the CloudFront-Viewer-Country header to update the S3 bucket domain name to a bucket in a Region that is closer to the viewer. This can be useful in several ways:

- It reduces latencies when the Region specified is nearer to the viewer's country.
- It provides data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.

To use this example, you must do the following:

- Configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching based on selected request headers.
- Create a trigger for this function in the origin request event. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event, so to use this example, you must make sure that the function executes for an origin request.

Note: The following example code uses the same origin access identity (OAI) for all of the S3 buckets that you use for the origin. For more information, see Origin access identity.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * This blueprint demonstrates how an origin-request trigger can be used to
     * change the origin from which the content is fetched, based on request properties.
     * In this example, we use the value of the CloudFront-Viewer-Country header
     * to update the S3 bucket domain name to a bucket in a Region that is closer to
     * the viewer.
     *
     * This can be useful in several ways:
     *   1) Reduces latencies when the Region specified is nearer to the viewer's
     *      country.
     *   2) Provides data sovereignty by making sure that data is served from an
     *      origin that's in the same country that the request came from.
     *
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];

        /**
         * If the viewer's country is not in the list you specify, the request
         * goes to the default S3 bucket you've configured.
         */
        if (region) {
            /**
             * If you've set up OAI, the bucket policy in the destination bucket
             * should allow the OAI GetObject operation, as configured by default
             * for an S3 origin with OAI. Another requirement with OAI is to provide
             * the Region so it can be used for the SIGV4 signature. Otherwise, the
             * Region is not required.
             */
            request.origin.s3.region = region;
            const domainName = `amzn-s3-demo-bucket-in-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }
    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    This blueprint demonstrates how an origin-request trigger can be used to
    change the origin from which the content is fetched, based on request properties.
    In this example, we use the value of the CloudFront-Viewer-Country header
    to update the S3 bucket domain name to a bucket in a Region that is closer to
    the viewer.

    This can be useful in several ways:
      1) Reduces latencies when the Region specified is nearer to the viewer's
         country.
      2) Provides data sovereignty by making sure that data is served from an
         origin that's in the same country that the request came from.

    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event.
             To use this example, you must create a trigger for the origin request event.
    '''
    countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    }

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        region = countryToRegion.get(countryCode)

        # If the viewer's country is not in the list you specify, the request
        # goes to the default S3 bucket you've configured
        if region:
            '''
            If you've set up OAI, the bucket policy in the destination bucket
            should allow the OAI GetObject operation, as configured by default
            for an S3 origin with OAI. Another requirement with OAI is to provide
            the Region so it can be used for the SIGV4 signature. Otherwise, the
            Region is not required.
            '''
            request['origin']['s3']['region'] = region
            domainName = 'amzn-s3-demo-bucket-in-{0}.s3.{0}.amazonaws.com'.format(region)
            request['origin']['s3']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin request trigger to change from an Amazon S3 origin to a custom origin

This function demonstrates how an origin request trigger can be used to change the custom origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if custom origin should be used, and
     * if true, sets custom origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useCustomOrigin']) {
        if (params['useCustomOrigin'] === 'true') {
            /* Set custom origin fields */
            request.origin = {
                custom: {
                    domainName: 'www.example.com',
                    port: 443,
                    protocol: 'https',
                    path: '',
                    sslProtocols: ['TLSv1', 'TLSv1.1'],
                    readTimeout: 5,
                    keepaliveTimeout: 5,
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: 'www.example.com' }];
        }
    }
    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Reads query string to check if custom origin should be used, and
    # if true, sets custom origin properties
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useCustomOrigin') == 'true':
        # Set custom origin fields
        request['origin'] = {
            'custom': {
                'domainName': 'www.example.com',
                'port': 443,
                'protocol': 'https',
                'path': '',
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'readTimeout': 5,
                'keepaliveTimeout': 5,
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': 'www.example.com'}]
    return request

Example: Using an origin request trigger to gradually transfer traffic from one Amazon S3 bucket to another

This function demonstrates how to gradually transfer traffic from one Amazon S3 bucket to another in a controlled way.

Node.js

'use strict';

function getRandomInt(min, max) {
    /* Random number is inclusive of min and max */
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const BLUE_TRAFFIC_PERCENTAGE = 80;

    /**
     * This Lambda function demonstrates how to gradually transfer traffic from
     * one S3 bucket to another in a controlled way.
     * We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
     * 1 to 100.
     * If the generated randomNumber is less than or equal to BLUE_TRAFFIC_PERCENTAGE,
     * traffic is redirected to blue-bucket. If not, the default bucket that we've
     * configured is used.
     */
    const randomNumber = getRandomInt(1, 100);

    if (randomNumber <= BLUE_TRAFFIC_PERCENTAGE) {
        const domainName = 'blue-bucket.s3.amazonaws.com';
        request.origin.s3.domainName = domainName;
        request.headers['host'] = [{ key: 'host', value: domainName }];
    }
    callback(null, request);
};

Python

import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max
    return math.floor(random.random() * (max - min + 1)) + min

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    BLUE_TRAFFIC_PERCENTAGE = 80
    '''
    This Lambda function demonstrates how to gradually transfer traffic from
    one S3 bucket to another in a controlled way.
    We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
    1 to 100. If the generated randomNumber is less than or equal to
    BLUE_TRAFFIC_PERCENTAGE, traffic is redirected to blue-bucket. If not,
    the default bucket that we've configured is used.
    '''
    randomNumber = getRandomInt(1, 100)

    if randomNumber <= BLUE_TRAFFIC_PERCENTAGE:
        domainName = 'blue-bucket.s3.amazonaws.com'
        request['origin']['s3']['domainName'] = domainName
        request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin request trigger to change the origin domain name based on the country header

This function demonstrates how to change the origin domain name based on the CloudFront-Viewer-Country header, so that content is served from an origin closer to the viewer's country.
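The header-driven lookup described above reduces to a small function. In this sketch, the EU country set and the domain names mirror the sample that follows and are illustrative only; the `headers` argument uses the CloudFront shape of `{'header-name': [{'key': ..., 'value': ...}]}`:

```python
# Illustrative: countries that should be served from the EU origin.
EU_COUNTRIES = {'GB', 'DE', 'IE'}

def pick_origin_domain(headers, default='www.example.com'):
    # Return a regional origin domain based on CloudFront-Viewer-Country,
    # falling back to the default origin for all other countries.
    viewer_country = headers.get('cloudfront-viewer-country')
    if viewer_country and viewer_country[0]['value'] in EU_COUNTRIES:
        return 'eu.example.com'
    return default
```

The real trigger must also rewrite the Host header to the chosen domain, as the full samples show.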
Implementing this functionality for your distribution can have advantages such as the following:

- Reducing latencies when the Region specified is nearer to the viewer's country
- Providing data sovereignty by making sure that data is served from an origin that's in the same country that the request came from

Note that to enable this functionality, you must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching based on selected request headers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'GB' || countryCode === 'DE' || countryCode === 'IE') {
            const domainName = 'eu.example.com';
            request.origin.custom.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }
    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'GB' or countryCode == 'DE' or countryCode == 'IE':
            domainName = 'eu.example.com'
            request['origin']['custom']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Updating error statuses: examples

The following examples provide guidance for how you can use Lambda@Edge to change the error status that is returned to users.
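Both examples in this section share one pattern: an origin response trigger inspects the status and replaces the response fields before the viewer sees them. A minimal sketch of the status-masking step (CloudFront delivers the status as a string, hence the `int()` conversion in the Python samples):

```python
def mask_origin_errors(response):
    # Replace any 4xx/5xx origin response with a 200 and a static body,
    # leaving successful and redirect responses untouched.
    if 400 <= int(response['status']) <= 599:
        response['status'] = '200'
        response['statusDescription'] = 'OK'
        response['body'] = 'Body generation example'
    return response
```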
Topics

Example: Using an origin response trigger to update the error status code to 200
Example: Using an origin response trigger to update the error status code to 302

Example: Using an origin response trigger to update the error status code to 200

This function demonstrates how to update the response status to 200 and generate static body content to return to the viewer in the following scenario:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;

    /**
     * This function updates the response status to 200 and generates static
     * body content to return to the viewer in the following scenario:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        response.status = 200;
        response.statusDescription = 'OK';
        response.body = 'Body generation example';
    }
    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    '''
    This function updates the response status to 200 and generates static
    body content to return to the viewer in the following scenario:
    1. The function is triggered in an origin response
    2.
       The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        response['status'] = 200
        response['statusDescription'] = 'OK'
        response['body'] = 'Body generation example'
    return response

Example: Using an origin response trigger to update the error status code to 302

This function demonstrates how to update the HTTP status code to 302 to redirect to another path (cache behavior) that has a different origin configured. Note the following:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const request = event.Records[0].cf.request;

    /**
     * This function updates the HTTP status code in the response to 302, to redirect to another
     * path (cache behavior) that has a different origin configured. Note the following:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        const redirect_path = `/plan-b/path?${request.querystring}`;

        response.status = 302;
        response.statusDescription = 'Found';

        /* Drop the body, as it is not required for redirects */
        response.body = '';
        response.headers['location'] = [{ key: 'Location', value: redirect_path }];
    }
    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    request = event['Records'][0]['cf']['request']
    '''
    This function updates the HTTP status code in the response to 302, to redirect to another
    path (cache behavior) that has a different origin configured. Note the following:
    1. The function is triggered in an origin response
    2.
       The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        redirect_path = '/plan-b/path?%s' % request['querystring']

        response['status'] = 302
        response['statusDescription'] = 'Found'

        # Drop the body as it is not required for redirects
        response['body'] = ''
        response['headers']['location'] = [{'key': 'Location', 'value': redirect_path}]

    return response

Accessing the request body: examples

The following examples show how you can use Lambda@Edge to work with POST requests.

Note: To use these examples, you must enable the include body option in the distribution's Lambda function association. It is not enabled by default.

- To enable this setting in the CloudFront console, select the Include Body check box in the Lambda Function Association.
- To enable this setting in the CloudFront API or with CloudFormation, set the IncludeBody field to true in LambdaFunctionAssociation.

Topics

Example: Using a request trigger to read an HTML form
Example: Using a request trigger to modify an HTML form

Example: Using a request trigger to read an HTML form

This function demonstrates how you can process the body of a POST request generated by an HTML form (web form), such as a "contact us" form. For example, you might have an HTML form like the following:

<html>
    <form action="https://example.com" method="post">
        Param 1: <input type="text" name="name1"><br>
        Param 2: <input type="text" name="name2"><br>
        <input type="submit" value="Submit">
    </form>
</html>

For the example function that follows, the function must be triggered in a CloudFront viewer request or origin request.
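The decoding step both samples rely on (Lambda@Edge always delivers the request body as a base64-encoded string, and HTML forms post their fields in query string format) can be exercised standalone:

```python
import base64
from urllib.parse import parse_qs

def read_form_body(encoded_body):
    # Base64-decode the request body, then parse it as form fields,
    # keeping the first value of each field as the samples below do.
    body = base64.b64decode(encoded_body).decode('utf-8')
    return {k: v[0] for k, v in parse_qs(body).items()}

# A body posted by the form above would arrive roughly as:
sample = base64.b64encode(b'name1=value1&name2=value2').decode('ascii')
```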
Node.js

'use strict';

const querystring = require('querystring');

/**
 * This function demonstrates how you can read the body of a POST request
 * generated by an HTML form (web form). The function is triggered in a
 * CloudFront viewer request or origin request event type.
 */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    if (request.method === 'POST') {
        /* HTTP body is always passed as base64-encoded string. Decode it. */
        const body = Buffer.from(request.body.data, 'base64').toString();

        /* HTML forms send the data in query string format. Parse it. */
        const params = querystring.parse(body);

        /* For demonstration purposes, we only log the form fields here.
         * You can put your custom logic here. For example, you can store the
         * fields in a database, such as Amazon DynamoDB, and generate a response
         * right from your Lambda@Edge function.
         */
        for (let param in params) {
            console.log(`For "${param}" user submitted "${params[param]}".\n`);
        }
    }
    return callback(null, request);
};

Python

import base64
from urllib.parse import parse_qs

'''
Say there is a POST request body generated by an HTML form such as:
<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>
'''

'''
This function demonstrates how you can read the body of a POST request
generated by an HTML form (web form). The function is triggered in a
CloudFront viewer request or origin request event type.
'''
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['method'] == 'POST':
        # HTTP body is always passed as base64-encoded string. Decode it
        body = base64.b64decode(request['body']['data'])

        # HTML forms send the data in query string format. Parse it
        params = {k: v[0] for k, v in parse_qs(body).items()}

        '''
        For demonstration purposes, we only log the form fields here.
        You can put your custom logic here. For example, you can store the
        fields in a database, such as Amazon DynamoDB, and generate a response
        right from your Lambda@Edge function.
        '''
        for key, value in params.items():
            print("For %s user submitted %s" % (key, value))
    return request

Example: Using a request trigger to modify an HTML form

This function illustrates how you can modify the body of a POST request generated by an HTML form (web form). The function is triggered on a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    if (request.method === 'POST') {
        /* Request body is being replaced. To do this, update the following
         * three fields:
         * 1) body.action to 'replace'
         * 2) body.encoding to the encoding of the new data.
         *
         *    Set to one of the following values:
         *
         *    text - denotes that the generated body is in text format.
         *           Lambda@Edge will propagate this as is.
         *    base64 - denotes that the generated body is base64 encoded.
         *             Lambda@Edge will base64 decode the data before sending
         *             it to the origin.
         * 3) body.data to the new body.
         */
        request.body.action = 'replace';
        request.body.encoding = 'text';
        request.body.data = getUpdatedBody(request);
    }
    callback(null, request);
};

function getUpdatedBody(request) {
    /* HTTP body is always passed as base64-encoded string. Decode it. */
    const body = Buffer.from(request.body.data, 'base64').toString();

    /* HTML forms send data in query string format. Parse it. */
    const params = querystring.parse(body);

    /* For demonstration purposes, we're adding one more param.
     *
     * You can put your custom logic here. For example, you can truncate long
     * bodies from malicious requests.
     */
    params['new-param-name'] = 'new-param-value';
    return querystring.stringify(params);
}

Python

import base64
from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['method'] == 'POST':
        '''
        Request body is being replaced. To do this, update the following
        three fields:
        1) body.action to 'replace'
        2) body.encoding to the encoding of the new data.

           Set to one of the following values:

           text - denotes that the generated body is in text format.
                  Lambda@Edge will propagate this as is.
           base64 - denotes that the generated body is base64 encoded.
                    Lambda@Edge will base64 decode the data before sending
                    it to the origin.
        3) body.data to the new body.
        '''
        request['body']['action'] = 'replace'
        request['body']['encoding'] = 'text'
        request['body']['data'] = get_updated_body(request)
    return request

def get_updated_body(request):
    # HTTP body is always passed as base64-encoded string. Decode it
    body = base64.b64decode(request['body']['data']).decode()

    # HTML forms send data in query string format. Parse it
    params = {k: v[0] for k, v in parse_qs(body).items()}

    # For demonstration purposes, add one more param; put your custom
    # logic here (mirrors the Node.js getUpdatedBody above)
    params['new-param-name'] = 'new-param-value'
    return urlencode(params)
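The replace-body flow above can be exercised outside of Lambda with a minimal sketch (the form payload here is a hypothetical sample): decode the base64 body, add one field, and re-encode it as the text that would go into body.data:

```python
import base64
from urllib.parse import parse_qs, urlencode

def add_form_param(b64_body, name, value):
    # Decode the base64 form body, add one field, and return the new
    # text-encoded body suitable for body.data with body.encoding = 'text'.
    body = base64.b64decode(b64_body).decode()
    params = {k: v[0] for k, v in parse_qs(body).items()}
    params[name] = value
    return urlencode(params)

encoded = base64.b64encode(b"name1=foo").decode()
print(add_form_param(encoded, "new-param-name", "new-param-value"))
# name1=foo&new-param-name=new-param-value
```

Because the new body is plain text, body.encoding must be set to 'text' so Lambda@Edge propagates it as is rather than base64-decoding it.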
https://www.iso.org/es/contents/data/standard/04/25/42546.html | ISO 26000:2010 - Guidance on social responsibility. Reference number ISO 26000:2010. International Standard, Edition 1, published 2010-11. Status: Published. This publication was last reviewed and confirmed in 2025; this version is therefore current.
Available languages: English, French, Russian, Spanish, Arabic. Formats: PDF + ePub, paper. Price: CHF 227 (shipping not included). Abstract: ISO 26000:2010 provides guidance to all types of organizations, regardless of their size or location, on: concepts, terms and definitions related to social responsibility; the background, trends and characteristics of social responsibility; principles and practices relating to social responsibility; the core subjects and issues of social responsibility; integrating, implementing and promoting socially responsible behaviour throughout the organization and, through its policies and practices, within its sphere of influence; identifying and engaging with stakeholders; and communicating commitments, performance and other information related to social responsibility. ISO 26000:2010 is intended to assist organizations in contributing to sustainable development. It is intended to encourage them to go beyond legal compliance, recognizing that compliance with law is a fundamental duty of any organization and an essential part of their social responsibility. It is intended to promote common understanding in the field of social responsibility, and to complement other instruments and initiatives for social responsibility, not to replace them. In applying ISO 26000:2010, it is advisable that an organization take into consideration societal, environmental, legal, cultural, political and organizational diversity, as well as differences in economic conditions, while being consistent with international norms of behaviour. ISO 26000:2010 is not a management system standard. It is not intended or appropriate for certification purposes or regulatory or contractual use. Any offer to certify, or claims to be certified, to ISO 26000 would be a misrepresentation of the intent and purpose and a misuse of ISO 26000:2010.
As ISO 26000:2010 does not contain requirements, any such certification would not be a demonstration of conformity with ISO 26000:2010. ISO 26000:2010 is intended to provide organizations with guidance concerning social responsibility and can be used as part of public policy activities. However, for the purposes of the Marrakech Agreement establishing the World Trade Organization (WTO), it is not intended to be interpreted as an “international standard”, “guideline” or “recommendation”, nor is it intended to provide a basis for any presumption or finding that a measure is consistent with WTO obligations. Further, it is not intended to provide a basis for legal actions, complaints, defences or other claims in any international, domestic or other proceeding, nor is it intended to be cited as evidence of the evolution of customary international law. ISO 26000:2010 is not intended to prevent the development of national standards that are more specific, more demanding, or of a different type. 
General information. Status: Published. Publication date: 2010-11. Stage: International Standard confirmed [90.93]. Edition: 1. Number of pages: 106. Technical committee: ISO/TMBG. ICS: 03.100.02. Climate action: this standard contributes to reaching climate neutrality (mitigation / GHG emissions) and to integrating climate action into governance and management (climate and corporate governance). Life cycle: now published as ISO 26000:2010; standards are reviewed every 5 years; current stage: 90.93 (Confirmed). History: 10.99 2005-03-10 new project approved; 20.00 2005-03-10 new project registered in the TC/SC work programme; 20.20 2006-03-28 working draft (WD) study initiated; 20.60 2006-04-27 close of comment period; 20.20 2006-10-09 WD study initiated; 20.60 2006-11-10 close of comment period; 20.20 2007-07-21 WD study initiated; 20.99 2008-09-05 WD approved for registration as CD; 30.00 2008-09-05 committee draft (CD) registered; 30.20 2008-12-12 CD study initiated; 30.60 2009-03-12 close of comment period; 30.99 2009-03-27 CD approved for registration as DIS; 40.00 2009-09-07 DIS registered; 40.20 2009-09-09 DIS ballot initiated: 12 weeks; 40.60 2010-02-15 close of voting; 40.99 2010-06-24 full report circulated: DIS approved for registration as FDIS; 50.00 2010-06-24 final text received or FDIS registered for formal approval; 50.20 2010-07-12 proof sent to secretariat or FDIS ballot initiated: 8 weeks; 50.60 2010-09-13 close of voting.
Proof returned by secretariat. 60.00 2010-09-13 International Standard under publication; 60.60 2010-10-28 International Standard published. Review: 90.20 2013-10-15 International Standard under systematic review; 90.60 2014-03-20 close of review; 90.93 2014-06-06 International Standard confirmed; 90.20 2017-01-15 under systematic review; 90.60 2017-06-07 close of review; 90.93 2017-09-15 confirmed; 90.20 2020-10-15 under systematic review; 90.60 2021-03-05 close of review; 90.93 2021-11-29 confirmed; 90.20 2024-10-15 under systematic review; 90.60 2025-03-05 close of review; 90.93 2025-03-19 confirmed. This standard contributes to the following Sustainable Development Goals: 1 No Poverty; 2 Zero Hunger; 3 Good Health and Well-being; 4 Quality Education; 5 Gender Equality; 6 Clean Water and Sanitation; 7 Affordable and Clean Energy; 8 Decent Work and Economic Growth; 9 Industry, Innovation and Infrastructure; 10 Reduced Inequalities; 11 Sustainable Cities and Communities; 12 Responsible Consumption and Production; 13 Climate Action; 14 Life Below Water; 15 Life on Land; 16 Peace, Justice and Strong Institutions.
https://docs.aws.amazon.com/zh_cn/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html | Lambda@Edge example functions - Amazon CloudFront Developer Guide

See the following examples for ways to use Lambda functions with Amazon CloudFront.

Note: If you choose the Node.js 18 or later runtime for a Lambda@Edge function, an index.mjs file is created for you automatically. To use the following code examples, rename the index.mjs file to index.js instead.

Topics: General examples; Generating responses - examples; Query strings - examples; Personalizing content by country or device type headers - examples; Content-based dynamic origin selection - examples; Updating error statuses - examples; Accessing the request body - examples.

General examples

The following examples show common ways to use Lambda@Edge in CloudFront.

Topics: Example: A/B testing. Example: Overriding a response header.

Example: A/B testing

You can use the following example to test two different versions of an image without creating redirects or changing the URL. The example reads the cookies in the viewer request and modifies the request URL accordingly. If the viewer doesn't send a cookie with one of the expected values, the example randomly assigns the viewer to one of the URLs.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    if (request.uri !== '/experiment-pixel.jpg') {
        // do not process if this is not an A-B test request
        callback(null, request);
        return;
    }

    const cookieExperimentA = 'X-Experiment-Name=A';
    const cookieExperimentB = 'X-Experiment-Name=B';
    const pathExperimentA = '/experiment-group/control-pixel.jpg';
    const pathExperimentB = '/experiment-group/treatment-pixel.jpg';

    /*
     * Lambda at the Edge headers are array objects.
     *
     * Client may send multiple Cookie headers, i.e.:
     * > GET /viewerRes/test HTTP/1.1
     * > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
     * > Cookie: First=1; Second=2
     * > Cookie: ClientCode=abc
     * > Host: example.com
     *
     * You can access the first Cookie header at headers["cookie"][0].value
     * and the second at headers["cookie"][1].value.
     *
     * Header values are not parsed.
     * In the example above,
     * headers["cookie"][0].value is equal to "First=1; Second=2"
     */
    let experimentUri;
    if (headers.cookie) {
        for (let i = 0; i < headers.cookie.length; i++) {
            if (headers.cookie[i].value.indexOf(cookieExperimentA) >= 0) {
                console.log('Experiment A cookie found');
                experimentUri = pathExperimentA;
                break;
            } else if (headers.cookie[i].value.indexOf(cookieExperimentB) >= 0) {
                console.log('Experiment B cookie found');
                experimentUri = pathExperimentB;
                break;
            }
        }
    }

    if (!experimentUri) {
        console.log('Experiment cookie has not been found. Throwing dice...');
        if (Math.random() < 0.75) {
            experimentUri = pathExperimentA;
        } else {
            experimentUri = pathExperimentB;
        }
    }

    request.uri = experimentUri;
    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

import json
import random

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    if request['uri'] != '/experiment-pixel.jpg':
        # Not an A/B Test
        return request

    cookieExperimentA, cookieExperimentB = 'X-Experiment-Name=A', 'X-Experiment-Name=B'
    pathExperimentA, pathExperimentB = '/experiment-group/control-pixel.jpg', '/experiment-group/treatment-pixel.jpg'

    '''
    Lambda at the Edge headers are array objects.

    Client may send multiple cookie headers. For example:
    > GET /viewerRes/test HTTP/1.1
    > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
    > Cookie: First=1; Second=2
    > Cookie: ClientCode=abc
    > Host: example.com

    You can access the first Cookie header at headers["cookie"][0].value
    and the second at headers["cookie"][1].value.

    Header values are not parsed.
    In the example above,
    headers["cookie"][0].value is equal to "First=1; Second=2"
    '''
    experimentUri = ""

    for cookie in headers.get('cookie', []):
        if cookieExperimentA in cookie['value']:
            print("Experiment A cookie found")
            experimentUri = pathExperimentA
            break
        elif cookieExperimentB in cookie['value']:
            print("Experiment B cookie found")
            experimentUri = pathExperimentB
            break

    if not experimentUri:
        print("Experiment cookie has not been found. Throwing dice...")
        if random.random() < 0.75:
            experimentUri = pathExperimentA
        else:
            experimentUri = pathExperimentB

    request['uri'] = experimentUri
    print(f"Request uri set to {experimentUri}")
    return request

Example: Overriding a response header

The following example shows how to change the value of a response header based on the value of another header.

Node.js

export const handler = async (event) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    const headerNameSrc = 'X-Amz-Meta-Last-Modified';
    const headerNameDst = 'Last-Modified';

    if (headers[headerNameSrc.toLowerCase()]) {
        headers[headerNameDst.toLowerCase()] = [{
            key: headerNameDst,
            value: headers[headerNameSrc.toLowerCase()][0].value,
        }];
        console.log(`Response header "${headerNameDst}" was set to ` +
                    `"${headers[headerNameDst.toLowerCase()][0].value}"`);
    }
    return response;
};

Python

import json

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    headers = response['headers']

    header_name_src = 'X-Amz-Meta-Last-Modified'
    header_name_dst = 'Last-Modified'

    if headers.get(header_name_src.lower()):
        headers[header_name_dst.lower()] = [{
            'key': header_name_dst,
            'value': headers[header_name_src.lower()][0]['value']
        }]
        print(f'Response header "{header_name_dst}" was set to '
              f'"{headers[header_name_dst.lower()][0]["value"]}"')
    return response

Generating responses - examples

The following examples show how to generate responses with Lambda@Edge.

Topics: Example: Serving static content (generated response). Example: Generating an HTTP redirect (generated response).

Example: Serving static content (generated response)

The following example shows how to use a Lambda function to serve static website content, which reduces the load on the origin server and reduces overall latency.

Note: You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers. You can also replace or remove the body of the HTTP response in origin response events. For more information, see Updating HTTP responses in origin-response triggers.
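The header-override logic in the section above can be checked standalone against a hand-built headers dict in CloudFront's event format (lowercase keys mapping to lists of {'key', 'value'} records); the sample value below is hypothetical:

```python
def copy_header(headers, src, dst):
    # If the source header is present, write its value under the destination
    # header's lowercase key, preserving the canonical casing in 'key'.
    if headers.get(src.lower()):
        headers[dst.lower()] = [{'key': dst, 'value': headers[src.lower()][0]['value']}]
    return headers

headers = {'x-amz-meta-last-modified': [
    {'key': 'X-Amz-Meta-Last-Modified', 'value': '2016-10-01'}]}
copy_header(headers, 'X-Amz-Meta-Last-Modified', 'Last-Modified')
print(headers['last-modified'][0]['value'])  # 2016-10-01
```

The lowercase dictionary key with canonical casing kept in 'key' mirrors how CloudFront represents headers in Lambda@Edge events.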
Node.js

'use strict';

const content = `
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
`;

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP OK response using 200 status code with HTML body.
     */
    const response = {
        status: '200',
        statusDescription: 'OK',
        headers: {
            'cache-control': [{ key: 'Cache-Control', value: 'max-age=100' }],
            'content-type': [{ key: 'Content-Type', value: 'text/html' }]
        },
        body: content,
    };
    callback(null, response);
};

Python

import json

CONTENT = """
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
"""

def lambda_handler(event, context):
    # Generate HTTP OK response using 200 status code with HTML body.
    response = {
        'status': '200',
        'statusDescription': 'OK',
        'headers': {
            'cache-control': [{'key': 'Cache-Control', 'value': 'max-age=100'}],
            'content-type': [{'key': 'Content-Type', 'value': 'text/html'}]
        },
        'body': CONTENT
    }
    return response

Example: Generating an HTTP redirect (generated response)

The following example shows how to generate an HTTP redirect.

Note: You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP redirect response with 302 status code and Location header.
     */
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html',
            }],
        },
    };
    callback(null, response);
};

Python

def lambda_handler(event, context):
    # Generate HTTP redirect response with 302 status code and Location header.
    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html'
            }]
        }
    }
    return response

Query strings - examples

The following examples show ways to use Lambda@Edge with query strings.

Topics: Example: Adding a header based on a query string parameter. Example: Normalizing query string parameters to improve the cache hit ratio. Example: Redirecting unauthenticated users to a sign-in page.

Example: Adding a header based on a query string parameter

The following example shows how to get the key-value pair of a query string parameter and then add a header based on those values.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When a request contains a query string key-value pair but the origin server
     * expects the value in a header, you can use this Lambda function to
     * convert the key-value pair to a header. Here's what the function does:
     * 1. Parses the query string and gets the key-value pair.
     * 2. Adds a header to the request using the key-value pair that the function got in step 1.
     */

    /* Parse request querystring to get javascript object */
    const params = querystring.parse(request.querystring);

    /* Move auth param from querystring to headers */
    const headerName = 'Auth-Header';
    request.headers[headerName.toLowerCase()] = [{ key: headerName, value: params.auth }];
    delete params.auth;

    /* Update request querystring */
    request.querystring = querystring.stringify(params);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When a request contains a query string key-value pair but the origin server
    expects the value in a header, you can use this Lambda function to
    convert the key-value pair to a header. Here's what the function does:
    1. Parses the query string and gets the key-value pair.
    2. Adds a header to the request using the key-value pair that the function got in step 1.
    '''
    # Parse request querystring to get dictionary/json
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}

    # Move auth param from querystring to headers
    headerName = 'Auth-Header'
    request['headers'][headerName.lower()] = [{'key': headerName, 'value': params['auth']}]
    del params['auth']

    # Update request querystring
    request['querystring'] = urlencode(params)

    return request

Example: Normalizing query string parameters to improve the cache hit ratio

The following example shows how to improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin: Alphabetize key-value pairs by the name of the parameter. Change the case of key-value pairs to lowercase. For more information, see Caching content based on query string parameters.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When you configure a distribution to forward query strings to the origin and
     * to cache based on an allowlist of query string parameters, we recommend
     * the following to improve the cache-hit ratio:
     * - Always list parameters in the same order.
     * - Use the same case for parameter names and values.
     *
     * This function normalizes query strings so that parameter names and values
     * are lowercase and parameter names are in alphabetical order.
     *
     * For more information, see:
     * https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
     */

    console.log('Query String: ', request.querystring);

    /* Parse request query string to get javascript object */
    const params = querystring.parse(request.querystring.toLowerCase());
    const sortedParams = {};

    /* Sort param keys */
    Object.keys(params).sort().forEach(key => {
        sortedParams[key] = params[key];
    });

    /* Update request querystring with normalized */
    request.querystring = querystring.stringify(sortedParams);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When you configure a distribution to forward query strings to the origin and
    to cache based on an allowlist of query string parameters, we recommend
    the following to improve the cache-hit ratio:
    - Always list parameters in the same order.
    - Use the same case for parameter names and values.

    This function normalizes query strings so that parameter names and values
    are lowercase and parameter names are in alphabetical order.
    For more information, see:
    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
    '''
    print("Query string: ", request["querystring"])

    # Parse request query string to get js object
    params = {k: v[0] for k, v in parse_qs(request['querystring'].lower()).items()}

    # Sort param keys
    sortedParams = sorted(params.items(), key=lambda x: x[0])

    # Update request querystring with normalized
    request['querystring'] = urlencode(sortedParams)

    return request

Example: Redirecting unauthenticated users to a sign-in page

The following example shows how to redirect users to a sign-in page if they haven't entered their credentials.

Node.js

'use strict';

function parseCookies(headers) {
    const parsedCookie = {};
    if (headers.cookie) {
        headers.cookie[0].value.split(';').forEach((cookie) => {
            if (cookie) {
                const parts = cookie.split('=');
                parsedCookie[parts[0].trim()] = parts[1].trim();
            }
        });
    }
    return parsedCookie;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /* Check for session-id in request cookie in viewer-request event,
     * if session-id is absent, redirect the user to sign in page with original
     * request sent as redirect_url in query params.
     */

    /* Check for session-id in cookie, if present then proceed with request */
    const parsedCookies = parseCookies(headers);
    if (parsedCookies && parsedCookies['session-id']) {
        callback(null, request);
        return;
    }

    /* URI encode the original request to be sent as redirect_url in query params */
    const encodedRedirectUrl = encodeURIComponent(`https://${headers.host[0].value}${request.uri}?${request.querystring}`);

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: `https://www.example.com/signin?redirect_url=${encodedRedirectUrl}`,
            }],
        },
    };
    callback(null, response);
};

Python

import urllib

def parseCookies(headers):
    parsedCookie = {}
    if headers.get('cookie'):
        for cookie in headers['cookie'][0]['value'].split(';'):
            if cookie:
                parts = cookie.split('=')
                parsedCookie[parts[0].strip()] = parts[1].strip()
    return parsedCookie

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Check for session-id in request cookie in viewer-request event,
    if session-id is absent, redirect the user to sign in page with original
    request sent as redirect_url in query params.
    '''
    # Check for session-id in cookie, if present, then proceed with request
    parsedCookies = parseCookies(headers)
    if parsedCookies and parsedCookies['session-id']:
        return request

    # URI encode the original request to be sent as redirect_url in query params
    redirectUrl = "https://%s%s?%s" % (headers['host'][0]['value'], request['uri'], request['querystring'])
    encodedRedirectUrl = urllib.parse.quote_plus(redirectUrl.encode('utf-8'))

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://www.example.com/signin?redirect_url=%s' % encodedRedirectUrl
            }]
        }
    }
    return response

Personalizing content by country or device type headers - examples

The following examples show how you can use Lambda@Edge to customize behavior based on location or the type of device the viewer is using.

Topics: Example: Redirecting viewer requests to a country-specific URL. Example: Serving different versions of an object based on the device.

Example: Redirecting viewer requests to a country-specific URL

The following example shows how to generate an HTTP redirect response with a country-specific URL and return the response to the viewer. This is useful when you want to provide country-specific responses. For example: If you have country-specific subdomains, such as us.example.com and tw.example.com, you can generate a redirect response when a viewer requests example.com. If you're streaming video but you don't have rights to stream the content in a specific country, you can redirect users in that country to a page that explains why they can't view the video.

Note the following: You must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on request headers. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Based on the value of the CloudFront-Viewer-Country header, generate an
     * HTTP status code 302 (Redirect) response, and return a country-specific
     * URL in the Location header.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    let url = 'https://example.com/';
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'TW') {
            url = 'https://tw.example.com/';
        } else if (countryCode === 'US') {
            url = 'https://us.example.com/';
        }
    }

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: url,
            }],
        },
    };
    callback(null, response);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Based on the value of the CloudFront-Viewer-Country header, generate an
    HTTP status code 302 (Redirect) response, and return a country-specific
    URL in the Location header.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': url
            }]
        }
    }
    return response

Example: Serving different versions of an object based on the device

The following example shows how to serve different versions of an object based on the type of device the user is using, for example, a mobile device or a tablet. Note the following: You must configure your distribution to cache based on the CloudFront-Is-*-Viewer headers. For more information, see Caching content based on request headers. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer'] &&
        headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer'] &&
               headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer'] &&
               headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer'] &&
               headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Serve different versions of an object based on the device type.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Is-*-Viewer headers. For more information, see
             the following documentation:
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
             https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
          2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    desktopPath = '/desktop'
    mobilePath = '/mobile'
    tabletPath = '/tablet'
    smarttvPath = '/smarttv'

    if 'cloudfront-is-desktop-viewer' in headers and headers['cloudfront-is-desktop-viewer'][0]['value'] == 'true':
        request['uri'] = desktopPath + request['uri']
    elif 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = mobilePath + request['uri']
    elif 'cloudfront-is-tablet-viewer' in headers and headers['cloudfront-is-tablet-viewer'][0]['value'] == 'true':
        request['uri'] = tabletPath + request['uri']
    elif 'cloudfront-is-smarttv-viewer' in headers and headers['cloudfront-is-smarttv-viewer'][0]['value'] == 'true':
        request['uri'] = smarttvPath + request['uri']

    print("Request uri set to %s" % request['uri'])
    return request

Dynamically selecting the origin based on content - examples

The following examples show how you can use Lambda@Edge to route to different origins based on information in the request.

Topics

- Example: Using an origin-request trigger to change from a custom origin to an Amazon S3 origin
- Example: Using an origin-request trigger to change the Amazon S3 origin Region
- Example: Using an origin-request trigger to change from an Amazon S3 origin to a custom origin
- Example: Using an origin-request trigger to gradually transfer traffic from one Amazon S3 bucket to another
- Example: Using an origin-request trigger to change the origin domain name based on the country header

Example: Using an origin-request trigger to change from a custom origin to an Amazon S3 origin

This function demonstrates how an origin-request trigger can be used to change from a custom origin to an Amazon S3 origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if S3 origin should be used, and
     * if true, sets S3 origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useS3Origin']) {
        if (params['useS3Origin'] === 'true') {
            const s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com';

            /* Set S3 origin fields */
            request.origin = {
                s3: {
                    domainName: s3DomainName,
                    region: '',
                    authMethod: 'origin-access-identity',
                    path: '',
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: s3DomainName }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    '''
    Reads query string to check if S3 origin should be used, and
    if true, sets S3 origin properties
    '''
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useS3Origin') == 'true':
        s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com'

        # Set S3 origin fields
        request['origin'] = {
            's3': {
                'domainName': s3DomainName,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': s3DomainName}]

    return request

Example: Using an origin-request trigger to change the Amazon S3 origin Region

This function demonstrates how an origin-request trigger can be used to change the Amazon S3 origin from which the content is fetched, based on request properties.

In this example, we use the value of the CloudFront-Viewer-Country header to update the S3 bucket domain name to a bucket in a Region that is closer to the viewer. This can be useful in several ways:

- It reduces latency when the Region specified is nearer to the viewer's country.
- It provides data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.

To use this example, you must do the following:

- Configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.
- Create a trigger for this function in the origin request event. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event, so to use this example, you must make sure that the function executes for an origin request.

Note

The following example code uses the same origin access identity (OAI) for all of the S3 buckets that you use as origins. For more information, see origin access identities.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * This blueprint demonstrates how an origin-request trigger can be used to
     * change the origin from which the content is fetched, based on request properties.
     * In this example, we use the value of the CloudFront-Viewer-Country header
     * to update the S3 bucket domain name to a bucket in a Region that is closer to
     * the viewer.
     *
     * This can be useful in several ways:
     *      1) Reduces latencies when the Region specified is nearer to the viewer's
     *         country.
     *      2) Provides data sovereignty by making sure that data is served from an
     *         origin that's in the same country that the request came from.
     *
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];

        /**
         * If the viewer's country is not in the list you specify, the request
         * goes to the default S3 bucket you've configured.
         */
        if (region) {
            /**
             * If you've set up OAI, the bucket policy in the destination bucket
             * should allow the OAI GetObject operation, as configured by default
             * for an S3 origin with OAI. Another requirement with OAI is to provide
             * the Region so it can be used for the SIGV4 signature. Otherwise, the
             * Region is not required.
             */
            request.origin.s3.region = region;
            const domainName = `amzn-s3-demo-bucket-in-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    '''
    This blueprint demonstrates how an origin-request trigger can be used to
    change the origin from which the content is fetched, based on request properties.

    In this example, we use the value of the CloudFront-Viewer-Country header
    to update the S3 bucket domain name to a bucket in a Region that is closer
    to the viewer.

    This can be useful in several ways:
        1) Reduces latencies when the Region specified is nearer to the viewer's
           country.
        2) Provides data sovereignty by making sure that data is served from an
           origin that's in the same country that the request came from.

    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    }

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        region = countryToRegion.get(countryCode)

        # If the viewer's country is not in the list you specify, the request
        # goes to the default S3 bucket you've configured
        if region:
            '''
            If you've set up OAI, the bucket policy in the destination bucket
            should allow the OAI GetObject operation, as configured by default
            for an S3 origin with OAI. Another requirement with OAI is to provide
            the Region so it can be used for the SIGV4 signature. Otherwise, the
            Region is not required.
            '''
            request['origin']['s3']['region'] = region
            domainName = 'amzn-s3-demo-bucket-in-{0}.s3.{0}.amazonaws.com'.format(region)
            request['origin']['s3']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin-request trigger to change from an Amazon S3 origin to a custom origin

This function demonstrates how an origin-request trigger can be used to change to a custom origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if custom origin should be used, and
     * if true, sets custom origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useCustomOrigin']) {
        if (params['useCustomOrigin'] === 'true') {

            /* Set custom origin fields */
            request.origin = {
                custom: {
                    domainName: 'www.example.com',
                    port: 443,
                    protocol: 'https',
                    path: '',
                    sslProtocols: ['TLSv1', 'TLSv1.1'],
                    readTimeout: 5,
                    keepaliveTimeout: 5,
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: 'www.example.com' }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Reads query string to check if custom origin should be used, and
    # if true, sets custom origin properties
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useCustomOrigin') == 'true':
        # Set custom origin fields
        request['origin'] = {
            'custom': {
                'domainName': 'www.example.com',
                'port': 443,
                'protocol': 'https',
                'path': '',
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'readTimeout': 5,
                'keepaliveTimeout': 5,
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': 'www.example.com'}]

    return request

Example: Using an origin-request trigger to gradually transfer traffic from one Amazon S3 bucket to another

This function demonstrates how you can gradually transfer traffic from one Amazon S3 bucket to another, in a controlled way.

Node.js

'use strict';

function getRandomInt(min, max) {
    /* Random number is inclusive of min and max */
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const BLUE_TRAFFIC_PERCENTAGE = 80;

    /**
     * This Lambda function demonstrates how to gradually transfer traffic from
     * one S3 bucket to another in a controlled way.
     * We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
     * 1 to 100. If the generated randomNumber is less than or equal to
     * BLUE_TRAFFIC_PERCENTAGE, traffic is redirected to blue-bucket. If not,
     * the default bucket that we've configured is used.
     */
    const randomNumber = getRandomInt(1, 100);

    if (randomNumber <= BLUE_TRAFFIC_PERCENTAGE) {
        const domainName = 'blue-bucket.s3.amazonaws.com';
        request.origin.s3.domainName = domainName;
        request.headers['host'] = [{ key: 'host', value: domainName }];
    }

    callback(null, request);
};

Python

import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max
    return math.floor(random.random() * (max - min + 1)) + min

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    BLUE_TRAFFIC_PERCENTAGE = 80

    '''
    This Lambda function demonstrates how to gradually transfer traffic from
    one S3 bucket to another in a controlled way.
    We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
    1 to 100. If the generated randomNumber is less than or equal to
    BLUE_TRAFFIC_PERCENTAGE, traffic is redirected to blue-bucket. If not,
    the default bucket that we've configured is used.
    '''
    randomNumber = getRandomInt(1, 100)

    if randomNumber <= BLUE_TRAFFIC_PERCENTAGE:
        domainName = 'blue-bucket.s3.amazonaws.com'
        request['origin']['s3']['domainName'] = domainName
        request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin-request trigger to change the origin domain name based on the country header

This function demonstrates how you can change the origin domain name based on the CloudFront-Viewer-Country header, so that content is served from an origin closer to the viewer's country.

Implementing this functionality for your distribution can have advantages such as the following:

- Reducing latency when the Region specified is nearer to the viewer's country.
- Providing data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.

Note that to enable this functionality, you must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'GB' || countryCode === 'DE' || countryCode === 'IE') {
            const domainName = 'eu.example.com';
            request.origin.custom.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'GB' or countryCode == 'DE' or countryCode == 'IE':
            domainName = 'eu.example.com'
            request['origin']['custom']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Updating error statuses - examples

The following examples provide guidance for how you can use Lambda@Edge to change the error status that is returned to users.

Topics

- Example: Using an origin-response trigger to update the error status code to 200
- Example: Using an origin-response trigger to update the error status code to 302

Example: Using an origin-response trigger to update the error status code to 200

This function demonstrates how you can update the response status to 200 and generate static body content to return to the viewer in the following scenario:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;

    /**
     * This function updates the response status to 200 and generates static
     * body content to return to the viewer in the following scenario:
     * 1.
     *    The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code
     *    (4xx or 5xx)
     */
    if (response.status >= 400 && response.status <= 599) {
        response.status = 200;
        response.statusDescription = 'OK';
        response.body = 'Body generation example';
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']

    '''
    This function updates the response status to 200 and generates static
    body content to return to the viewer in the following scenario:
    1. The function is triggered in an origin response
    2. The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        response['status'] = 200
        response['statusDescription'] = 'OK'
        response['body'] = 'Body generation example'

    return response

Example: Using an origin-response trigger to update the error status code to 302

This function demonstrates how you can update the HTTP status code to 302 to redirect to another path (cache behavior) that has a different origin configured. Note the following:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const request = event.Records[0].cf.request;

    /**
     * This function updates the HTTP status code in the response to 302, to redirect to another
     * path (cache behavior) that has a different origin configured. Note the following:
     * 1. The function is triggered in an origin response
     * 2.
     *    The response status from the origin server is an error status code
     *    (4xx or 5xx)
     */
    if (response.status >= 400 && response.status <= 599) {
        const redirect_path = `/plan-b/path?${request.querystring}`;

        response.status = 302;
        response.statusDescription = 'Found';

        /* Drop the body, as it is not required for redirects */
        response.body = '';
        response.headers['location'] = [{ key: 'Location', value: redirect_path }];
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    request = event['Records'][0]['cf']['request']

    '''
    This function updates the HTTP status code in the response to 302, to redirect to another
    path (cache behavior) that has a different origin configured. Note the following:
    1. The function is triggered in an origin response
    2. The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        redirect_path = '/plan-b/path?%s' % request['querystring']

        response['status'] = 302
        response['statusDescription'] = 'Found'

        # Drop the body as it is not required for redirects
        response['body'] = ''
        response['headers']['location'] = [{'key': 'Location', 'value': redirect_path}]

    return response

Working with the request body - examples

The following examples show how you can use Lambda@Edge to work with POST requests.

Note

To use these examples, you must enable the include body option in the distribution's Lambda function association. It is not enabled by default.

- To enable this setting in the CloudFront console, select the Include Body check box in the Lambda Function Association.
- To enable this setting in the CloudFront API or with CloudFormation, set the IncludeBody field to true in LambdaFunctionAssociation.

Topics

- Example: Using a request trigger to read an HTML form
- Example: Using a request trigger to modify an HTML form

Example: Using a request trigger to read an HTML form

This function demonstrates how you can process the body of a POST request that is generated by an HTML form (web form), such as a "contact us" form. For example, you might have an HTML form like the following:

<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>

For the example function that follows, the function must be triggered in a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

/**
 *
 * This function demonstrates how you can read the body of a POST request
 * generated by an HTML form (web form). The function is triggered in a
 * CloudFront viewer request or origin request event type.
 */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.method === 'POST') {
        /* HTTP body is always passed as base64-encoded string. Decode it. */
        const body = Buffer.from(request.body.data, 'base64').toString();

        /* HTML forms send the data in query string format. Parse it. */
        const params = querystring.parse(body);

        /* For demonstration purposes, we only log the form fields here.
         * You can put your custom logic here. For example, you can store the
         * fields in a database, such as Amazon DynamoDB, and generate a response
         * right from your Lambda@Edge function.
         */
        for (let param in params) {
            console.log(`For "${param}" user submitted "${params[param]}".\n`);
        }
    }

    return callback(null, request);
};

Python

import base64
from urllib.parse import parse_qs

'''
Say there is a POST request body generated by an HTML form such as:
<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>
'''

'''
This function demonstrates how you can read the body of a POST request
generated by an HTML form (web form). The function is triggered in a
CloudFront viewer request or origin request event type.
'''
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    if request['method'] == 'POST':
        # HTTP body is always passed as base64-encoded string. Decode it
        body = base64.b64decode(request['body']['data'])

        # HTML forms send the data in query string format. Parse it
        params = {k: v[0] for k, v in parse_qs(body).items()}

        '''
        For demonstration purposes, we only log the form fields here.
        You can put your custom logic here.
        For example, you can store the fields in a database, such as
        Amazon DynamoDB, and generate a response right from your
        Lambda@Edge function.
        '''
        for key, value in params.items():
            print("For %s user submitted %s" % (key, value))

    return request

Example: Using a request trigger to modify an HTML form

This function demonstrates how you can modify the body of a POST request that is generated by an HTML form (web form). The function is triggered in a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;

    if (request.method === 'POST') {
        /* Request body is being replaced. To do this, update the following
         * three fields:
         *  1) body.action to 'replace'
         *  2) body.encoding to the encoding of the new data.
         *
         *     Set to one of the following values:
         *
         *     text - denotes that the generated body is in text format.
         *            Lambda@Edge will propagate this as is.
         *     base64 - denotes that the generated body is base64 encoded.
         *              Lambda@Edge will base64 decode the data before sending
         *              it to the origin.
         *  3) body.data to the new body.
         */
        request.body.action = 'replace';
        request.body.encoding = 'text';
        request.body.data = getUpdatedBody(request);
    }

    callback(null, request);
};

function getUpdatedBody(request) {
    /* HTTP body is always passed as base64-encoded string. Decode it. */
    const body = Buffer.from(request.body.data, 'base64').toString();

    /* HTML forms send data in query string format. Parse it. */
    const params = querystring.parse(body);

    /* For demonstration purposes, we're adding one more param.
     *
     * You can put your custom logic here. For example, you can truncate long
     * bodies from malicious requests.
     */
    params['new-param-name'] = 'new-param-value';
    return querystring.stringify(params);
}

Python

import base64
from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    if request['method'] == 'POST':
        '''
        Request body is being replaced.
        To do this, update the following three fields:
        1) body.action to 'replace'
        2) body.encoding to the encoding of the new data.

           Set to one of the following values:

           text - denotes that the generated body is in text format.
                  Lambda@Edge will propagate this as is.
           base64 - denotes that the generated body is base64 encoded.
                    Lambda@Edge will base64 decode the data before sending
                    it to the origin.
        3) body.data to the new body.
        '''
        request['body']['action'] = 'replace'
        request['body']['encoding'] = 'text'
        request['body']['data'] = getUpdatedBody(request)

    return request

def getUpdatedBody(request):
    # HTTP body is always passed as base64-encoded string. Decode it
    body = base64.b64decode(request['body']['data'])

    # HTML forms send data in query string format. Parse it
    params = {k: v[0] for k, v in parse_qs(body).items()}

    # For demonstration purposes, we're adding one more param.
    # You can put your custom logic here. For example, you can truncate long
    # bodies from malicious requests
    params['new-param-name'] = 'new-param-value'
    return urlencode(params)
| 2026-01-13T09:30:35
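Because each Lambda@Edge handler above receives a plain event dictionary, the Python versions can be exercised locally by constructing a minimal CloudFront event record. The following is a small sketch: the handler is the redirect-by-country viewer-request function shown earlier in these examples, and the `make_event` helper plus the country values passed to it are hypothetical test inputs, not part of the documented API.

```python
# Minimal sketch of invoking a Lambda@Edge-style handler locally.
# The event shape mirrors the CloudFront request-event structure used in the
# examples above; make_event() and its inputs are hypothetical test fixtures.

def lambda_handler(event, context):
    # Redirect-by-country viewer-request function from the examples above.
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    return {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{'key': 'Location', 'value': url}]
        }
    }

def make_event(country_code):
    # Build a minimal CloudFront viewer-request event record.
    return {
        'Records': [{
            'cf': {
                'request': {
                    'uri': '/',
                    'method': 'GET',
                    'headers': {
                        'cloudfront-viewer-country': [
                            {'key': 'CloudFront-Viewer-Country', 'value': country_code}
                        ]
                    }
                }
            }
        }]
    }

response = lambda_handler(make_event('TW'), None)
print(response['headers']['location'][0]['value'])  # https://tw.example.com/
```

A harness like this only checks the handler's dictionary-in, dictionary-out logic; behavior that depends on CloudFront itself (header injection, caching, the include-body option) still has to be verified in a real distribution.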
https://docs.aws.amazon.com/fr_fr/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront Developer Guide. Translations are provided by machine translation tools; in the event of a conflict between a translation and the original English version, the English version prevails. The topics in this section show you how to start delivering your content with Amazon CloudFront. The "Set up your AWS account" topic describes the prerequisites for the tutorials that follow, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows you how to configure origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial explains how to create a secure static website for your domain name using an OAC and an Amazon S3 origin; the tutorial uses an Amazon CloudFront template for configuration and deployment. Topics: Set up your AWS account; Get started with a standard CloudFront distribution; Get started with a standard distribution (AWS CLI); Get started with a secure static website. | 2026-01-13T09:30:35
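The basic-distribution tutorial described above pairs an Amazon S3 origin with origin access control (OAC). As a rough sketch of the kind of configuration such a tutorial builds up, the snippet below assembles a minimal distribution-config dictionary of the shape accepted by the CloudFront API. All names and IDs are hypothetical placeholders, and a real `create_distribution` call requires more fields (for example, a cache policy on the default cache behavior) than this illustrative subset shows.

```python
# Sketch of a minimal CloudFront DistributionConfig for an S3 origin with
# origin access control (OAC). All IDs and names are hypothetical placeholders;
# this is an illustrative subset, not a complete, validated API payload.

def make_distribution_config(bucket_domain, oac_id, caller_reference):
    return {
        'CallerReference': caller_reference,
        'Comment': 'Getting-started distribution with an S3 origin and OAC',
        'Enabled': True,
        'Origins': {
            'Quantity': 1,
            'Items': [{
                'Id': 's3-origin',
                'DomainName': bucket_domain,
                'OriginAccessControlId': oac_id,
                # With OAC, the legacy OAI field is left empty.
                'S3OriginConfig': {'OriginAccessIdentity': ''},
            }],
        },
        'DefaultCacheBehavior': {
            'TargetOriginId': 's3-origin',
            'ViewerProtocolPolicy': 'redirect-to-https',
        },
    }

config = make_distribution_config(
    'amzn-s3-demo-bucket.s3.amazonaws.com', 'E2EXAMPLEOAC', 'getting-started-1')
# A real deployment would pass a completed config to the API, for example:
#   boto3.client('cloudfront').create_distribution(DistributionConfig=config)
print(config['Origins']['Items'][0]['DomainName'])
```

Consult the CloudFront API reference for the full required field set before using a config like this against the service.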
https://docs.aws.amazon.com/zh_tw/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Get started with CloudFront - Amazon CloudFront Developer Guide. This is a machine-translated version of the English original; in case of any ambiguity or inconsistency, the English version prevails. The topics in this section show how to get started delivering your content with Amazon CloudFront. The "Set up your AWS account" topic describes the prerequisites for the tutorials that follow, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial explains how to configure origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial explains how to use OAC with an Amazon S3 origin to build a secure static website for your domain name, using an Amazon CloudFront template for configuration and deployment. Topics: Set up your AWS account; Get started with a standard CloudFront distribution; Get started with a standard distribution (AWS CLI); Get started with a secure static website. | 2026-01-13T09:30:35
https://www.visma.com/voiceofvisma/ep-18-making-inclusion-part-of-our-everyday-work-with-ida-algotsson | Ep 18: Making inclusion part of our everyday work with Ida Algotsson. Voice of Visma podcast, June 4, 2025. Listen on Spotify,
YouTube, Apple Podcasts, and Amazon Music.

About the episode: What does inclusion truly mean at Visma – not just as values, but as everyday actions? Join Ida Algotsson and Henny Hasselknippe Dahl-Hansen as they reflect on what drives our passion for DE&I, how our internal community supports employees across the company, and why inclusion remains a core part of who we are.

More from Voice of Visma: We're sitting down with leaders and colleagues from around Visma to share their stories, industry knowledge, and valuable career lessons. With the Voice of Visma podcast, we're bringing our people and culture closer to you.

Ep 22: Building, learning, and accelerating growth in the SaaS world with Maxin Schneider – Entrepreneurial leadership often grows through experience, and Maxin Schneider has seen that up close.
Ep 21: How DEI fuels business success with Iveta Bukane – Why DEI isn't just a moral imperative, it's a business necessity.
Ep 20: Driving tangible sustainability outcomes with Freja Landewall – Discover how ESG goes far beyond the environment, encompassing people, governance, and the long-term resilience of business.
Ep 19: Future-proofing public services in Sweden with Marie Ceder – Between demographic changes, the rise in AI, and digitalisation, the public sector is at a pivotal moment.
Ep 18: Making inclusion part of our everyday work with Ida Algotsson – What does inclusion truly mean at Visma, not just as values, but as everyday actions?
Ep 17: Sustainability at the heart of business with Robin Åkerberg – Honouring our responsibility goes well beyond the numbers; it starts with a shared purpose and values.
Ep 16: Innovation for the public good with Kasper Lyhr – Serving the public sector goes way beyond software; it's about shaping the future of society as a whole.
Ep 15: Leading with transparency and vulnerability with Ellen Sano – What does it mean to be a "firestarter" in business?
Ep 14: Women, innovation, and the future of Visma with Merete Hverven – Our CEO, Merete, knows that great leadership takes more than just hard work; it takes vision.
Ep 13: Building partnerships beyond software with Daniel Ognøy Kaspersen – What does it look like when an accounting software company delivers more than just great software?
Ep 12: AI in the accounting sphere with Joris Joppe – Artificial intelligence is changing industries across the board, and accounting is no exception. But in such a highly specialised field, what does change actually look like?
Ep 11: From Founder to Segment Director with Ari-Pekka Salovaara – Ari-Pekka is a serial entrepreneur who joined Visma when his company was acquired in 2010. He now leads the small business segment.
Ep 10: When brave choices can save a company with Charlotte von Sydow – What's it like stepping in as the Managing Director for a company in decline?
Ep 09: Revolutionising tax tech in Italy with Enrico Mattiazzi and Vito Lomele – Take one look at their product, their customer reviews, or their workplace awards, and it's clear why Fiscozen leads Italy's tax tech scene.
Ep 08: Navigating the waters of entrepreneurship with Steffen Torp – When it comes to being an entrepreneur, the journey is as personal as it is unpredictable.
Ep 07: The untold stories of Visma with Øystein Moan – What did Visma look like in its early days? Are there any decisions our former CEO would have made differently?
Ep 06: Measure what matters: Employee engagement with Vibeke Müller – Research shows that having engaged, happy employees is so important for building a great company culture and performing better financially.
Ep 05: Our Team Visma | Lease a Bike sponsorship with Anne-Grethe Thomle Karlsen – It's one thing to sponsor the world's best cycling team; it's a whole other thing to provide software and expertise that helps them do what they do best.
Ep 04: "How do you make people care about security?" with Joakim Tauren – With over 700 applications across the Visma Group (and counting!), cybersecurity is make-or-break for us.
Ep 03: The human side of enterprise with Yvette Hoogewerf – As a software company, our products are central to our business… but that's only one part of the equation.
Ep 02: From Management Trainee to CFO with Stian Grindheim – How does someone work their way up from Management Trainee to CFO by the age of 30? And balance fatherhood alongside it all?
Ep 01: An optimistic look at the future of AI with Jacob Nyman – We're all-too familiar with the fears surrounding artificial intelligence. So today, Jacob and Johan are flipping the script.
(Trailer) Introducing: Voice of Visma – These are the stories that shape us... and the reason Visma is unlike anywhere else.
| 2026-01-13T09:30:35
https://support.microsoft.com/et-ee/windows/windowsi-turbega-seotud-abi-1f230c8e-2d3e-48ae-a9b9-a0e51d6c0724 | Help with Windows security - Microsoft Support. Related topics: overview of Windows security, safety, and privacy; Windows security help; how Windows security protects you; before you sell, gift, or trade in your Xbox or Windows PC; removing malware from a Windows PC; Windows safety help; viewing and deleting browser history in Microsoft Edge; deleting and managing cookies; safely removing valuable content when reinstalling Windows; finding and locking a lost Windows device; Windows privacy help; Windows privacy settings that apps use; viewing your data on the privacy settings dashboard.
Kogukonnafoorumid Microsoft 365 administraatorid Väikeettevõtete portaal Arendaja Haridus Teatage tehnilise toega seotud pettusest Tooteohutus Rohkem Osta Microsoft 365 Kogu Microsoft Global Microsoft 365 Teams Copilot Windows Surface Xbox Tugi Tarkvara Tarkvara Windowsi rakendused AI OneDrive Outlook Üleminek Skype'ilt Teamsile OneNote Microsoft Teams Arvutid ja seadmed Arvutid ja seadmed Accessories Meelelahutus Meelelahutus PC-mängud Äri Äri Microsofti turve Azure Dynamics 365 Microsoft 365 ettevõtteversioon Microsoft Industry Microsoft Power Platform Windows 365 Arendaja ja IT Arendaja ja IT Microsofti arendaja Microsoft Learn Tehisintellekti-turuplatsi rakenduste tugi Microsofti tehnoloogiakogukond Microsoft Marketplace Visual Studio Marketplace Rewards Muud Muud Tasuta allalaadimised ja turve Kuva saidikaart Otsing Spikri otsing Tulemid puuduvad Loobu Logi sisse Logige sisse Microsofti kontoga Logige sisse või looge konto. Tere! Valige mõni muu konto. Teil on mitu kontot Valige konto, millega soovite sisse logida. Seotud teemad Windowsi turve, ohutus ja privaatsus Overview Turbe, ohutuse ja privaatsuse ülevaade Windowsi turve Windowsi turbe kasutajaabi Windowsi turve tagab kaitse Enne Xboxi või Windowsi arvuti müümist, kinkimist või taaskasutusse andmist Ründevara eemaldamine Windowsi arvutist Windowsi ohutus Windowsi ohutuse kasutajaabi Brauseriajaloo kuvamine ja kustutamine Microsoft Edge’is Küpsiste kustutamine ja haldamine Windowsi uuesti installimisel saate väärtusliku sisu ohutult eemaldada Kaotsiläinud Windowsi seadme leidmine ja lukustamine Windowsi privaatsus Windowsi privaatsuse kasutajaabi Rakenduste kasutatavad Windowsi privaatsussätted Andmete vaatamine privaatsussätete armatuurlaual Windowsi turbega seotud abi Rakenduskoht Windows 11 Windows 10 Me oleme sellele mõelnud. Ärge sattuge võrgupettuste ja -rünnakute ohvriks, kui ostlete veebis, loete oma meile või sirvite veebi. Meie põhjalike kaitselahenduste abil saate olla kaitstud ja kaitstud. 
Scammers and attackers: Don’t let scammers and attackers catch you off guard. Protect yourself from online scams and attacks by learning to spot them with guidance from our experts.
Passwords: Your passwords are the keys to your online life. With advice from our experts, make sure they stay safe and secure. From creating a strong password to two-factor authentication, we’ve got you covered.
Windows Security: Protect your device and data with the Windows Security app, a feature built into Windows. With Windows Security, you get state-of-the-art technology that defends your device and data against the latest threats and attacks.
© Microsoft 2026 | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/getting-started-secure-static-website-cloudformation-template.html | Get started with a secure static website - Amazon CloudFront Documentation Amazon CloudFront Developer Guide Solution overview Deploy the solution Get started with a secure static website You can get started with Amazon CloudFront by using the solution described in this topic to create a secure static website for your domain name. A static website uses only static files—like HTML, CSS, JavaScript, images, and videos—and doesn’t need servers or server-side processing. With this solution, your website gets the following benefits: Uses the durable storage of Amazon Simple Storage Service (Amazon S3) – This solution creates an Amazon S3 bucket to host your static website’s content. To update your website, just upload your new files to the S3 bucket. Is sped up by the Amazon CloudFront content delivery network – This solution creates a CloudFront distribution to serve your website to viewers with low latency. The distribution is configured with origin access control (OAC) to make sure that the website is accessible only through CloudFront, not directly from S3. Is secured by HTTPS and security headers – This solution creates an SSL/TLS certificate in AWS Certificate Manager (ACM), and attaches it to the CloudFront distribution. This certificate enables the distribution to serve your domain’s website securely with HTTPS. Is configured and deployed with AWS CloudFormation – This solution uses a CloudFormation template to set up all the components, so you can focus more on your website’s content and less on configuring components. This solution is open source on GitHub. To view the code, submit a pull request, or open an issue, go to https://github.com/aws-samples/amazon-cloudfront-secure-static-site .
Topics Solution overview Deploy the solution Solution overview The following diagram shows an overview of how this static website solution works: The viewer requests the website at www.example.com. If the requested object is cached, CloudFront returns the object from its cache to the viewer. If the object is not in the CloudFront cache, CloudFront requests the object from the origin (an S3 bucket). S3 returns the object to CloudFront. CloudFront caches the object. The object is returned to the viewer. Subsequent requests for the object that come to the same CloudFront edge location are served from the CloudFront cache. Deploy the solution To deploy this secure static website solution, you can choose from either of the following options: Use the CloudFormation console to deploy the solution with default content, then upload your website content to Amazon S3. Clone the solution to your computer to add your website content. Then, deploy the solution with the AWS Command Line Interface (AWS CLI). Note You must use the US East (N. Virginia) Region to deploy the CloudFormation template. Topics Prerequisites Use the CloudFormation console Clone the solution locally Finding access logs Prerequisites To use this solution, you must have the following prerequisites: A registered domain name, such as example.com, that’s pointed to an Amazon Route 53 hosted zone. The hosted zone must be in the same AWS account where you deploy this solution. If you don’t have a registered domain name, you can register one with Route 53 . If you have a registered domain name but it’s not pointed to a Route 53 hosted zone, configure Route 53 as your DNS service . AWS Identity and Access Management (IAM) permissions to launch CloudFormation templates that create IAM roles, and permissions to create all the AWS resources in the solution. For more information, see Controlling access with AWS Identity and Access Management in the AWS CloudFormation User Guide .
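The viewer-to-origin request flow described above (cache hit from the edge, or fetch from the S3 origin, cache, then return) can be sketched as a toy model. This is an illustration of the caching behavior only, not the CloudFront API; the `Origin` and `EdgeCache` classes and their object store are hypothetical.

```python
# Toy model of the request flow described above: a cache hit is served
# from the edge; a miss is fetched from the S3 origin, cached, and
# then returned to the viewer.

class Origin:
    """Stands in for the S3 bucket holding the site content (hypothetical)."""
    def __init__(self, objects):
        self.objects = objects
        self.fetches = 0  # count round-trips to the origin

    def get(self, key):
        self.fetches += 1
        return self.objects[key]

class EdgeCache:
    """Stands in for a single CloudFront edge location (hypothetical)."""
    def __init__(self, origin):
        self.origin = origin
        self.cache = {}

    def request(self, key):
        if key in self.cache:            # cached: return from the edge
            return self.cache[key]
        obj = self.origin.get(key)       # not cached: fetch from origin
        self.cache[key] = obj            # cache the object at the edge
        return obj                       # return the object to the viewer

origin = Origin({"/index.html": "<h1>Hello</h1>"})
edge = EdgeCache(origin)

first = edge.request("/index.html")   # miss: goes to the origin
second = edge.request("/index.html")  # hit: served from the edge cache
print(origin.fetches)  # only one origin fetch for two viewer requests
```

This is why subsequent requests arriving at the same edge location are cheap: after the first miss, the origin is no longer contacted until the cached copy expires or is invalidated.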
You are responsible for the costs incurred while using this solution. For more information about costs, see the pricing pages for each AWS service . Use the CloudFormation console To deploy using the CloudFormation console Launch this solution in the CloudFormation console . If necessary, sign in to your AWS account. The Create stack wizard opens in the CloudFormation console, with prepopulated fields that specify this solution’s CloudFormation template. At the bottom of the page, choose Next . On the Specify stack details page, enter values for the following fields: SubDomain – Enter the subdomain to use for your website. For example, if the subdomain is www , your website is available at www.example.com. (Replace example.com with your domain name, as explained in the following bullet.) DomainName – Enter your domain name, such as example.com . This domain must be pointed to a Route 53 hosted zone. HostedZoneId – The Route 53 hosted zone ID of your domain name. CreateApex – (Optional) Create an alias to the domain apex (example.com) in your CloudFront configuration. When finished, choose Next . (Optional) On the Configure stack options page, add tags and other stack options . When finished, choose Next . On the Review page, scroll to the bottom of the page, then select the two boxes in the Capabilities section. These capabilities allow CloudFormation to create an IAM role that allows access to the stack’s resources, and to name the resources dynamically. Choose Create stack . Wait for the stack to finish creating. The stack creates some nested stacks, and can take several minutes to finish. When it’s finished, the Status changes to CREATE_COMPLETE . When the status is CREATE_COMPLETE , go to https://www.example.com to view your website (replace www.example.com with the subdomain and domain name that you specified in step 3).
You should see the website’s default content: To replace the website’s default content with your own Open the Amazon S3 console at https://console.aws.amazon.com/s3/ . Choose the bucket whose name begins with amazon-cloudfront-secure-static-site-s3bucketroot- . Note Make sure to choose the bucket with s3bucketroot in its name, not s3bucketlogs . The bucket with s3bucketroot in its name contains the website content. The one with s3bucketlogs contains only log files. Delete the website’s default content, then upload your own. Note If you viewed your website with this solution’s default content, then it’s likely that some of the default content is cached in a CloudFront edge location. To make sure that viewers see your updated website content, invalidate the files to remove the cached copies from CloudFront edge locations. For more information, see Invalidate files to remove content . Clone the solution locally Prerequisites To add your website content before deploying this solution, you must package the solution’s artifacts locally, which requires Node.js and npm. For more information, see https://www.npmjs.com/get-npm . To add your website content and deploy the solution Clone or download the solution from https://github.com/aws-samples/amazon-cloudfront-secure-static-site . After you clone or download it, open a command prompt or terminal and navigate to the amazon-cloudfront-secure-static-site folder. Run the following command to install and package the solution’s artifacts: make package-static Copy your website’s content into the www folder, overwriting the default website content. Run the following AWS CLI command to create an Amazon S3 bucket to store the solution’s artifacts. Replace amzn-s3-demo-bucket-for-artifacts with your own bucket name. aws s3 mb s3://amzn-s3-demo-bucket-for-artifacts --region us-east-1 Run the following AWS CLI command to package the solution’s artifacts as a CloudFormation template.
Replace amzn-s3-demo-bucket-for-artifacts with the name of the bucket that you created in the previous step. aws cloudformation package \ --region us-east-1 \ --template-file templates/main.yaml \ --s3-bucket amzn-s3-demo-bucket-for-artifacts \ --output-template-file packaged.template Run the following command to deploy the solution with CloudFormation, replacing the following values: your-CloudFormation-stack-name – Replace with a name for the CloudFormation stack. example.com – Replace with your domain name. This domain must be pointed to a Route 53 hosted zone in the same AWS account. www – Replace with the subdomain to use for your website. For example, if the subdomain is www , your website is available at www.example.com. hosted-zone-ID – Replace with the Route 53 hosted zone ID of your domain name. aws cloudformation deploy \ --region us-east-1 \ --stack-name your-CloudFormation-stack-name \ --template-file packaged.template \ --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \ --parameter-overrides DomainName=example.com SubDomain=www HostedZoneId=hosted-zone-ID (Optional) To deploy the stack with a domain apex, run the following command instead. aws --region us-east-1 cloudformation deploy \ --stack-name your-CloudFormation-stack-name \ --template-file packaged.template \ --capabilities CAPABILITY_NAMED_IAM CAPABILITY_AUTO_EXPAND \ --parameter-overrides DomainName=example.com SubDomain=www HostedZoneId=hosted-zone-ID CreateApex=yes Wait for the CloudFormation stack to finish creating. The stack creates some nested stacks, and can take several minutes to finish. When it’s finished, the Status changes to CREATE_COMPLETE . When the status changes to CREATE_COMPLETE , go to https://www.example.com to view your website (replace www.example.com with the subdomain and domain name that you specified in the previous step). You should see your website’s content. Finding access logs This solution enables access logs for the CloudFront distribution.
Complete the following steps to locate the distribution’s access logs. To locate the distribution’s access logs Open the Amazon S3 console at https://console.aws.amazon.com/s3/ . Choose the bucket whose name begins with amazon-cloudfront-secure-static-site-s3bucketlogs- . Note Make sure to choose the bucket with s3bucketlogs in its name, not s3bucketroot . The bucket with s3bucketlogs in its name contains log files. The one with s3bucketroot contains the website content. The folder named cdn contains the CloudFront access logs. | 2026-01-13T09:30:35 |
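The `aws cloudformation deploy` invocation shown in the CLI deployment steps above wires several placeholder values into `--parameter-overrides`. As a cross-check of that wiring, the following sketch assembles the same argument list programmatically. The helper name and the sample values are illustrative; it only builds the argument list (e.g. for `subprocess.run`) and does not invoke the AWS CLI.

```python
# Build the `aws cloudformation deploy` argument list from the deploy
# steps above. All placeholder values here are examples, not real resources.

def build_deploy_command(stack_name, domain, subdomain, hosted_zone_id,
                         create_apex=False, region="us-east-1",
                         template="packaged.template"):
    overrides = [
        f"DomainName={domain}",
        f"SubDomain={subdomain}",
        f"HostedZoneId={hosted_zone_id}",
    ]
    if create_apex:  # optional apex alias, per the CreateApex parameter
        overrides.append("CreateApex=yes")
    return [
        "aws", "cloudformation", "deploy",
        "--region", region,
        "--stack-name", stack_name,
        "--template-file", template,
        "--capabilities", "CAPABILITY_NAMED_IAM", "CAPABILITY_AUTO_EXPAND",
        "--parameter-overrides", *overrides,
    ]

cmd = build_deploy_command("my-static-site", "example.com", "www",
                           "Z0123456789ABC", create_apex=True)
print(" ".join(cmd))
```

Building the command as a list rather than a single string avoids shell-quoting problems when a value such as a hosted zone ID is substituted in.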
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Getting started with CloudFront - Amazon CloudFront Documentation Amazon CloudFront Developer Guide Getting started with CloudFront The topics in this section show you how to get started delivering your content with Amazon CloudFront. The Setting up your AWS account topic describes the prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows how to set up origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows how to create a secure static website for your domain name using OAC with an Amazon S3 origin. That tutorial uses a CloudFormation template for configuration and deployment. Topics Setting up your AWS account Getting started with a CloudFront standard distribution Getting started with a standard distribution (AWS CLI) Getting started with a secure static website | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/zh_cn/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Getting started with CloudFront - Amazon CloudFront Documentation Amazon CloudFront Developer Guide Getting started with CloudFront The topics in this section show you how to get started delivering your content through Amazon CloudFront. The Setting up your AWS account topic describes the prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative permissions. The basic distribution tutorial shows you how to set up origin access control (OAC) so that authenticated requests are sent to an Amazon S3 origin. The secure static website tutorial shows you how to create a secure static website for your domain name using OAC with an Amazon S3 origin; it uses a CloudFormation template for configuration and deployment. Topics Setting up your AWS account Getting started with a CloudFront standard distribution Getting started with a standard distribution (AWS CLI) Getting started with a secure static website | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/fr_fr/AmazonS3/latest/userguide/bucketnamingrules.html | General purpose bucket naming rules - Amazon Simple Storage Service Documentation Amazon Simple Storage Service (S3) User Guide General purpose bucket naming rules When you create a general purpose bucket, be sure to consider the length, valid characters, formatting, and uniqueness of bucket names. The following sections provide information about naming general purpose buckets, including the naming rules, best practices, and an example of creating a general purpose bucket whose name includes a globally unique identifier (GUID). To learn more about object key names, see Creating object key names . To create a general purpose bucket, see Creating a general purpose bucket . Topics General purpose bucket naming rules Example general purpose bucket names Best practices Creating a bucket that uses a GUID in its name General purpose bucket naming rules The following naming rules apply to general purpose buckets. Bucket names must be between 3 (min) and 63 (max) characters long. Bucket names can consist only of lowercase letters, numbers, periods ( . ), and hyphens ( - ). Bucket names must begin and end with a letter or number. Bucket names must not contain two adjacent periods.
Bucket names must not be formatted like an IP address (for example, 192.168.5.4 ). Bucket names must not start with the prefix xn-- . Bucket names must not start with the prefix sthree- . Bucket names must not start with the prefix amzn-s3-demo- . Bucket names must not end with the suffix -s3alias . This suffix is reserved for access point alias names. For more information, see Access point aliases . Bucket names must not end with the suffix --ol-s3 . This suffix is reserved for Object Lambda access point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda access point . Bucket names must not end with the suffix .mrap . This suffix is reserved for Multi-Region Access Point names. For more information, see Rules for naming Amazon S3 Multi-Region Access Points . Bucket names must not end with the suffix --x-s3 . This suffix is reserved for directory buckets. For more information, see Directory bucket naming rules . Bucket names must not end with the suffix --table-s3 . This suffix is reserved for S3 table buckets. For more information, see Naming rules for S3 table buckets, tables, and namespaces . Bucket names used with Amazon S3 Transfer Acceleration can't contain periods ( . ). For more information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration .
Important Bucket names must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (commercial Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US)). A bucket name can't be used by another AWS account in the same partition until the bucket is deleted. After you delete a bucket, be aware that another AWS account in the same partition can assign that bucket's name to a new bucket, which can then potentially receive requests intended for the deleted bucket. To avoid this, or if you want to continue using the bucket name, don't delete the bucket. We recommend that you empty and keep the bucket, and block all requests to the bucket if necessary. We recommend emptying buckets that are no longer active of all their objects to reduce your costs while keeping the buckets. When you create a general purpose bucket, you choose its name and the AWS Region to create it in. After a general purpose bucket is created, you can't change its name or Region. Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket. Note Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names that were up to 255 characters long and included uppercase letters and underscores. Beginning March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in all the other Regions.
Example general purpose bucket names The following examples show the characters that are allowed in general purpose bucket names: a-z, 0-9, and hyphen ( - ). The reserved prefix amzn-s3-demo- is used here for illustration only. Because it is a reserved prefix, you can't actually create bucket names that begin with amzn-s3-demo- . amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111 amzn-s3-demo-bucket The following example bucket names are valid but not recommended for uses other than static website hosting, because they contain periods ( . ): example.com www.example.com my.example.s3.bucket The following example bucket names are not valid: amzn_s3_demo_bucket (contains underscores) AmznS3DemoBucket (contains uppercase letters) amzn-s3-demo-bucket- (starts with the prefix amzn-s3-demo- and ends with a hyphen) example..com (contains two adjacent periods) 192.168.5.4 (formatted like an IP address) Best practices When naming your general purpose buckets, consider the following naming best practices. Choose a bucket naming scheme that is unlikely to cause naming conflicts If your application creates buckets automatically, consider choosing a naming scheme that is unlikely to cause naming conflicts. Make sure that your application logic chooses a different name when a bucket name is already taken. Add globally unique identifiers (GUIDs) to bucket names We recommend creating bucket names that can't be guessed. Don't write code that assumes the bucket name you chose is available unless you have already created that bucket.
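The naming rules and the valid/invalid examples above can be condensed into a small client-side check. The function below is our own convenience sketch, not an AWS API: it applies the length, character, prefix/suffix, IP-format, and Transfer Acceleration rules listed in this section, and S3 itself remains the final authority on acceptable names.

```python
import re

# Reserved prefixes and suffixes listed in the naming rules above.
FORBIDDEN_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
FORBIDDEN_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap", "--x-s3", "--table-s3")

def is_valid_bucket_name(name, transfer_acceleration=False):
    """Check a general purpose S3 bucket name against the rules above.

    Note: the documentation's own example names use the reserved
    amzn-s3-demo- prefix for illustration; this check rejects them,
    just as S3 would.
    """
    if not 3 <= len(name) <= 63:
        return False  # 3-63 characters
    if not re.fullmatch(r"[a-z0-9.-]+", name):
        return False  # only lowercase letters, digits, periods, hyphens
    if not (name[0].isalnum() and name[-1].isalnum()):
        return False  # must begin and end with a letter or number
    if ".." in name:
        return False  # no two adjacent periods
    if re.fullmatch(r"(\d{1,3}\.){3}\d{1,3}", name):
        return False  # must not be formatted like an IP address
    if name.startswith(FORBIDDEN_PREFIXES) or name.endswith(FORBIDDEN_SUFFIXES):
        return False  # reserved prefixes and suffixes
    if transfer_acceleration and "." in name:
        return False  # Transfer Acceleration buckets can't contain periods
    return True

print(is_valid_bucket_name("my-example-bucket"))  # True
print(is_valid_bucket_name("example..com"))       # False
```

Running the invalid examples from this section through the check (underscores, uppercase letters, a trailing hyphen, adjacent periods, an IP-formatted name) returns False for each of them.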
One way to create bucket names that can't be guessed is to append a globally unique identifier (GUID) to your bucket name, for example, amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111 . For more information, see Creating a bucket that uses a GUID in its name . Don't use periods ( . ) in bucket names For best compatibility, we recommend that you avoid using periods ( . ) in bucket names, except for buckets that are used only for static website hosting. The security certificates used for virtual hosting of buckets are not compatible with buckets whose names contain periods. This limitation doesn't affect buckets used for static website hosting, because that kind of hosting is available only over HTTP. To learn more about virtual-hosted-style addressing, see Virtual hosting of general purpose buckets . For more information about static website hosting, see Hosting a static website using Amazon S3 . Choose a relevant name When naming a bucket, we recommend choosing a name that is relevant to you or your business. Avoid using names associated with others. For example, avoid using AWS or Amazon in your bucket name. Don't delete buckets in order to reuse their names If a bucket is empty, you can delete it. After a bucket is deleted, its name becomes available for reuse. However, there is no guarantee that you will be able to reuse the name immediately, or even at a later time.
After you delete a bucket, some time might pass before you can reuse its name. In addition, another AWS account might create a bucket with the same name before you reuse it. After you delete a general purpose bucket, be aware that another AWS account in the same partition can assign that bucket's name to a new bucket, which can then potentially receive requests intended for the deleted general purpose bucket. To avoid this, or if you want to continue using the bucket name, don't delete the bucket. We recommend that you empty and keep the bucket, and block all requests to the bucket if necessary. Creating a bucket that uses a GUID in its name The following examples show how to create a general purpose bucket that includes a GUID at the end of its name. The following AWS CLI example creates a general purpose bucket in the US West (N. California) Region ( us-west-1 ) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the user input placeholders with your own information. aws s3api create-bucket \ --bucket amzn-s3-demo-bucket1-$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \ --region us-west-1 \ --create-bucket-configuration LocationConstraint=us-west-1 The following example shows how to create a bucket that includes a GUID at the end of its name in the US East (N. Virginia) Region ( us-east-1 ) using the AWS SDK for Java. To use this example, replace the user input placeholders with your own information. For more information about using the other AWS SDKs, see Tools to Build on AWS .
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import java.util.UUID;

public class CreateBucketWithUUID {
    public static void main(String[] args) {
        // Build an S3 client in us-east-1 (replace with your Region).
        final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build();
        // Append a hyphen-free GUID to the base bucket name.
        String bucketName = "amzn-s3-demo-bucket" + UUID.randomUUID().toString().replace("-", "");
        CreateBucketRequest createRequest = new CreateBucketRequest(bucketName);
        System.out.println(bucketName);
        s3.createBucket(createRequest);
    }
}
| 2026-01-13T09:30:35 |
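For comparison with the CLI (`uuidgen | tr …`) and Java (`UUID.randomUUID()`) examples above, here is the same GUID-suffix idea in Python. Only the name construction is shown; the actual bucket-creation call (via an AWS SDK or the CLI) is omitted so the sketch stays self-contained, and the base name `my-site-assets` is a placeholder.

```python
import uuid

def guid_bucket_name(base="my-site-assets"):
    """Append a lowercase, hyphen-free GUID to a base bucket name,
    mirroring the CLI and Java examples above. The base name is a
    placeholder; uuid4().hex is already 32 lowercase hex characters."""
    return f"{base}-{uuid.uuid4().hex}"

name = guid_bucket_name()
print(name)
```

A 32-character hex suffix leaves up to 30 characters for the base name before the 63-character bucket-name limit is reached, so keep the base short.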
https://docs.aws.amazon.com/de_de/cli/latest/userguide/getting-started-install.html | Installing or updating to the latest version of the AWS CLI - AWS Command Line Interface Documentation AWS Command Line Interface User Guide for Version 2 Instructions for installing and updating the AWS CLI Troubleshooting errors when installing and uninstalling the AWS CLI Next steps Installing or updating to the latest version of the AWS CLI This topic describes how to install or update the latest version of the AWS Command Line Interface (AWS CLI) on supported operating systems. For information about the latest releases of the AWS CLI, see the AWS CLI version 2 changelog on GitHub. To install an earlier version of the AWS CLI, see Installing earlier versions of AWS CLI version 2 . For uninstall instructions, see Uninstalling the AWS CLI version 2 . Important AWS CLI versions 1 and 2 use the same aws command name. If you previously installed AWS CLI version 1, see the Migration guide for AWS CLI version 2 . Topics Instructions for installing and updating the AWS CLI Troubleshooting errors when installing and uninstalling the AWS CLI Next steps Instructions for installing and updating the AWS CLI Find the installation instructions in the section for your operating system. Prerequisites for installing and updating You must be able to extract or "unzip" the downloaded package.
If your operating system does not have a built-in unzip command, use an equivalent. The AWS CLI uses glibc, groff, and less. These are included by default in most major Linux distributions. We support the AWS CLI on 64-bit versions of recent distributions of CentOS, Fedora, Ubuntu, Amazon Linux 1, Amazon Linux 2, Amazon Linux 2023, and Linux ARM. Because AWS does not maintain third-party repositories other than snap, we cannot guarantee that they contain the latest version of the AWS CLI. Install or update the AWS CLI. Warning: When updating on Amazon Linux for the first time, to install the latest version of the AWS CLI you must first uninstall the preinstalled yum version with the following command: $ sudo yum remove awscli After the yum installation of the AWS CLI is removed, follow the Linux installation instructions below. You can install the AWS CLI using either of the following methods: The command line installer is a good option for version control, because you can specify the version to install. This option does not update automatically; you must download a new installer for each update to overwrite the earlier version. The officially supported snap package is a good option for always having the latest version of the AWS CLI, because snap packages update automatically. There is no built-in support for selecting minor versions of the AWS CLI, so this is not an optimal installation method if your team needs to pin versions. Command line installer - Linux x86 (64-bit). To update your current installation of the AWS CLI, download a new installer for each update so that earlier versions are overwritten. Follow these steps from the command line to install the AWS CLI on Linux.
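Before starting, the prerequisites above (an unzip tool, plus groff and less used by the AWS CLI) can be checked from the shell. This is a small sketch, not part of the official instructions; the check_cmd helper name is invented here for illustration:

```shell
#!/bin/sh
# Hypothetical helper: report whether a command needed by the
# AWS CLI install (or the CLI itself) is present on this system.
check_cmd() {
    if command -v "$1" >/dev/null 2>&1; then
        echo "ok: $1"
    else
        echo "missing: $1"
    fi
}

# curl and unzip are used while installing; groff and less at runtime.
for tool in curl unzip groff less; do
    check_cmd "$tool"
done
```

Any tool reported as missing can be installed with your distribution's package manager before continuing.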
Below are quick installation steps in a single copy-and-paste block for an easy install. For more detail, see the steps that follow. Note: (Optional) The following command block downloads and installs the AWS CLI without first verifying the integrity of your download. To verify the integrity of your download, follow the verification steps below. To install the AWS CLI, run the following commands. $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install To update your current installation of the AWS CLI, add your existing symlink and installer information to construct the install command with the --bin-dir, --install-dir, and --update parameters. The following command block uses the example symlink /usr/local/bin and the example installer location /usr/local/aws-cli. $ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update Guided install steps. Download the installation file in one of the following ways: With the curl command - The -o option specifies the name of the file that the downloaded package is written to. With the options in the following example command, the downloaded file is written to the current directory with the local name awscliv2.zip.
$ curl "https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip" -o "awscliv2.zip" Download via URL - Use the following URL to download the installer with your browser: https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip (Optional) Verifying the integrity of your downloaded zip file. If you chose to manually download the AWS CLI installer package .zip in the steps above, you can use the following steps to verify the signatures with the GnuPG tool. The AWS CLI installer package .zip files are cryptographically signed with PGP signatures. If the files are damaged or altered, this verification fails, and you should not continue with installation. Download and install the gpg command using your package manager. For more information about GnuPG, see the GnuPG website. To create the public key file, create a text file and paste in the following text.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4CGwMF
CwkIBwIGFQoJCAsCBBYCAwECHgECF4AWIQT7Xbd/1cEYuAURraimMQrMRnJHXAUC
aGveYQUJDMpiLAAKCRCmMQrMRnJHXKBYD/9Ab0qQdGiO5hObchG8xh8Rpb4Mjyf6
0JrVo6m8GNjNj6BHkSc8fuTQJ/FaEhaQxj3pjZ3GXPrXjIIVChmICLlFuRXYzrXc
Pw0lniybypsZEVai5kO0tCNBCCFuMN9RsmmRG8mf7lC4FSTbUDmxG/QlYK+0IV/l
uJkzxWa+rySkdpm0JdqumjegNRgObdXHAQDWlubWQHWyZyIQ2B4U7AxqSpcdJp6I
S4Zds4wVLd1WE5pquYQ8vS2cNlDm4QNg8wTj58e3lKN47hXHMIb6CHxRnb947oJa
pg189LLPR5koh+EorNkA1wu5mAJtJvy5YMsppy2y/kIjp3lyY6AmPT1posgGk70Z
CmToEZ5rbd7ARExtlh76A0cabMDFlEHDIK8RNUOSRr7L64+KxOUegKBfQHb9dADY
qqiKqpCbKgvtWlds909Ms74JBgr2KwZCSY1HaOxnIr4CY43QRqAq5YHOay/mU+6w
hhmdF18vpyK0vfkvvGresWtSXbag7Hkt3XjaEw76BzxQH21EBDqU8WJVjHgU6ru+
DJTs+SxgJbaT3hb/vyjlw0lK+hFfhWKRwgOXH8vqducF95NRSUxtS4fpqxWVaw3Q
V2OWSjbne99A5EPEySzryFTKbMGwaTlAwMCwYevt4YT6eb7NmFhTx0Fis4TalUs+
j+c7Kg92pDx2uQ==
=OBAt
-----END PGP PUBLIC KEY BLOCK-----
For reference, the following are the details of the public key.
Key ID: A6310ACC4672475C Type: RSA Size: 4096/4096 Created: 2019-09-18 Expires: 2026-07-07 User ID: AWS CLI Team <aws-cli@amazon.com> Key fingerprint: FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C Import the AWS CLI public key with the following command, replacing public-key-file-name with the file name of the public key you created. $ gpg --import public-key-file-name gpg: /home/username/.gnupg/trustdb.gpg: trustdb created gpg: key A6310ACC4672475C: public key "AWS CLI Team <aws-cli@amazon.com>" imported gpg: Total number processed: 1 gpg: imported: 1 Download the AWS CLI signature file for the package you downloaded. It has the same path and name as the .zip file it corresponds to, but with the extension .sig. In the following examples, we save it in the current directory as a file named awscliv2.sig. For the latest version of the AWS CLI, use the following command block. $ curl -o awscliv2.sig https://awscli.amazonaws.com/awscli-exe-linux-x86_64.zip.sig Verify the signature, passing both the downloaded .sig and .zip file names as parameters to the gpg command. $ gpg --verify awscliv2.sig awscliv2.zip The output should look similar to the following: gpg: Signature made Mon Nov 4 19:00:01 2019 PST gpg: using RSA key FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C gpg: Good signature from "AWS CLI Team <aws-cli@amazon.com>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner. Primary key fingerprint: FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C Important: The warning in the output is expected and does not indicate a problem. It occurs because there is no chain of trust between your personal PGP key (if you have one) and the AWS CLI PGP key.
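Because a failed verification means you should not proceed with installation, script automation should stop on a non-zero exit status from gpg --verify. A minimal sketch; verify_or_abort is a hypothetical helper name, and the file names come from the steps above:

```shell
#!/bin/sh
# Hypothetical helper: run the given command and refuse to
# continue when it exits non-zero (e.g. a failed gpg --verify).
verify_or_abort() {
    if "$@"; then
        echo "signature OK"
    else
        echo "verification failed; do not install" >&2
        return 1
    fi
}

# Usage with the files downloaded in the steps above:
# verify_or_abort gpg --verify awscliv2.sig awscliv2.zip && sudo ./aws/install
```

Chaining the install step with && ensures it only runs after the signature check succeeds.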
For more information, see Web of trust. Unzip the installer. If your Linux distribution does not have a built-in unzip command, use an equivalent to unzip it. The following example command unzips the package and creates a directory named aws in the current directory. $ unzip awscliv2.zip Note: When updating from a previous version, the unzip command prompts you to overwrite existing files. To skip these prompts, such as with script automation, use the -u update flag for unzip. This flag automatically updates existing files and creates new ones as needed. $ unzip -u awscliv2.zip Run the installer. The install command uses a file named install in the newly unzipped aws directory. By default, the files are all installed to /usr/local/aws-cli, and a symbolic link is created in /usr/local/bin. The command includes sudo to grant write permissions to those directories. $ sudo ./aws/install You can install without sudo if you specify directories that you already have write permissions to. Use the following instructions for the install command to specify the installation location: Ensure that the paths you provide to the -i and -b parameters contain no volume or directory names with spaces or other whitespace characters. If there is a space, the installation fails. --install-dir or -i - This option specifies the directory to copy all of the files to. The default value is /usr/local/aws-cli. --bin-dir or -b - This option specifies that the main aws program in the install directory is symbolically linked to the file aws in the specified path. You must have write permissions to the specified directory.
Creating a symlink to a directory that is already in your path eliminates the need to add the install directory to the user's $PATH variable. The default value is /usr/local/bin. $ ./aws/install -i /usr/local/aws-cli -b /usr/local/bin Note: To update your current installation of the AWS CLI, add your existing symlink and installer information to construct the install command with the --update parameter. $ sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update To locate the existing symlink and installation directory, use the following steps: Use the which command to find your symlink. This gives you the path to use with the --bin-dir parameter. $ which aws /usr/local/bin/aws Use the ls command to find the directory that your symlink points to. This gives you the path to use with the --install-dir parameter. $ ls -l /usr/local/bin/aws lrwxrwxrwx 1 ec2-user ec2-user 49 Oct 22 09:49 /usr/local/bin/aws -> /usr/local/aws-cli/v2/current/bin/aws Confirm the installation with the following command. $ aws --version aws-cli/2.27.41 Python/3.11.6 Linux/5.10.205-195.807.amzn2.x86_64 If the aws command cannot be found, you might need to restart your terminal or follow the troubleshooting steps in Troubleshooting errors for the AWS CLI. Command line - Linux ARM. To update your current installation of the AWS CLI, download a new installer for each update so that earlier versions are overwritten. Follow these steps from the command line to install the AWS CLI on Linux. Below are quick installation steps in a single copy-and-paste block for an easy install.
For more detail, see the steps that follow. Note: (Optional) The following command block downloads and installs the AWS CLI without first verifying the integrity of your download. To verify the integrity of your download, follow the verification steps below. To install the AWS CLI, run the following commands. $ curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install To update your current installation of the AWS CLI, add your existing symlink and installer information to construct the install command with the --bin-dir, --install-dir, and --update parameters. The following command block uses the example symlink /usr/local/bin and the example installer location /usr/local/aws-cli. $ curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip" unzip awscliv2.zip sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update Guided install steps. Download the installation file in one of the following ways: With the curl command - The -o option specifies the name of the file that the downloaded package is written to. With the options in the following example command, the downloaded file is written to the current directory with the local name awscliv2.zip. $ curl "https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip" -o "awscliv2.zip" Download via URL - Use the following URL to download the installer with your browser: https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip (Optional) Verifying the integrity of your downloaded zip file. If you chose to manually download the AWS CLI installer package .zip in the steps above, you can use the following steps to verify the signatures with the GnuPG tool.
The AWS CLI installer package .zip files are cryptographically signed with PGP signatures. If the files are damaged or altered, this verification fails, and you should not continue with installation. Download and install the gpg command using your package manager. For more information about GnuPG, see the GnuPG website. To create the public key file, create a text file and paste in the following text.
-----BEGIN PGP PUBLIC KEY BLOCK-----
mQINBF2Cr7UBEADJZHcgusOJl7ENSyumXh85z0TRV0xJorM2B/JL0kHOyigQluUG
ZMLhENaG0bYatdrKP+3H91lvK050pXwnO/R7fB/FSTouki4ciIx5OuLlnJZIxSzx
PqGl0mkxImLNbGWoi6Lto0LYxqHN2iQtzlwTVmq9733zd3XfcXrZ3+LblHAgEt5G
TfNxEKJ8soPLyWmwDH6HWCnjZ/aIQRBTIQ05uVeEoYxSh6wOai7ss/KveoSNBbYz
gbdzoqI2Y8cgH2nbfgp3DSasaLZEdCSsIsK1u05CinE7k2qZ7KgKAUIcT/cR/grk
C6VwsnDU0OUCideXcQ8WeHutqvgZH1JgKDbznoIzeQHJD238GEu+eKhRHcz8/jeG
94zkcgJOz3KbZGYMiTh277Fvj9zzvZsbMBCedV1BTg3TqgvdX4bdkhf5cH+7NtWO
lrFj6UwAsGukBTAOxC0l/dnSmZhJ7Z1KmEWilro/gOrjtOxqRQutlIqG22TaqoPG
fYVN+en3Zwbt97kcgZDwqbuykNt64oZWc4XKCa3mprEGC3IbJTBFqglXmZ7l9ywG
EEUJYOlb2XrSuPWml39beWdKM8kzr1OjnlOm6+lpTRCBfo0wa9F8YZRhHPAkwKkX
XDeOGpWRj4ohOx0d2GWkyV5xyN14p2tQOCdOODmz80yUTgRpPVQUtOEhXQARAQAB
tCFBV1MgQ0xJIFRlYW0gPGF3cy1jbGlAYW1hem9uLmNvbT6JAlQEEwEIAD4CGwMF
CwkIBwIGFQoJCAsCBBYCAwECHgECF4AWIQT7Xbd/1cEYuAURraimMQrMRnJHXAUC
aGveYQUJDMpiLAAKCRCmMQrMRnJHXKBYD/9Ab0qQdGiO5hObchG8xh8Rpb4Mjyf6
0JrVo6m8GNjNj6BHkSc8fuTQJ/FaEhaQxj3pjZ3GXPrXjIIVChmICLlFuRXYzrXc
Pw0lniybypsZEVai5kO0tCNBCCFuMN9RsmmRG8mf7lC4FSTbUDmxG/QlYK+0IV/l
uJkzxWa+rySkdpm0JdqumjegNRgObdXHAQDWlubWQHWyZyIQ2B4U7AxqSpcdJp6I
S4Zds4wVLd1WE5pquYQ8vS2cNlDm4QNg8wTj58e3lKN47hXHMIb6CHxRnb947oJa
pg189LLPR5koh+EorNkA1wu5mAJtJvy5YMsppy2y/kIjp3lyY6AmPT1posgGk70Z
CmToEZ5rbd7ARExtlh76A0cabMDFlEHDIK8RNUOSRr7L64+KxOUegKBfQHb9dADY
qqiKqpCbKgvtWlds909Ms74JBgr2KwZCSY1HaOxnIr4CY43QRqAq5YHOay/mU+6w
hhmdF18vpyK0vfkvvGresWtSXbag7Hkt3XjaEw76BzxQH21EBDqU8WJVjHgU6ru+
DJTs+SxgJbaT3hb/vyjlw0lK+hFfhWKRwgOXH8vqducF95NRSUxtS4fpqxWVaw3Q
V2OWSjbne99A5EPEySzryFTKbMGwaTlAwMCwYevt4YT6eb7NmFhTx0Fis4TalUs+
j+c7Kg92pDx2uQ==
=OBAt
-----END PGP PUBLIC KEY BLOCK-----
For reference, the following are the details of the public key. Key ID: A6310ACC4672475C Type: RSA Size: 4096/4096 Created: 2019-09-18 Expires: 2026-07-07 User ID: AWS CLI Team <aws-cli@amazon.com> Key fingerprint: FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C Import the AWS CLI public key with the following command, replacing public-key-file-name with the file name of the public key you created. $ gpg --import public-key-file-name gpg: /home/username/.gnupg/trustdb.gpg: trustdb created gpg: key A6310ACC4672475C: public key "AWS CLI Team <aws-cli@amazon.com>" imported gpg: Total number processed: 1 gpg: imported: 1 Download the AWS CLI signature file for the package you downloaded. It has the same path and name as the .zip file it corresponds to, but with the extension .sig. In the following examples, we save it in the current directory as a file named awscliv2.sig. For the latest version of the AWS CLI, use the following command block. $ curl -o awscliv2.sig https://awscli.amazonaws.com/awscli-exe-linux-aarch64.zip.sig Verify the signature, passing both the downloaded .sig and .zip file names as parameters to the gpg command. $ gpg --verify awscliv2.sig awscliv2.zip The output should look similar to the following: gpg: Signature made Mon Nov 4 19:00:01 2019 PST gpg: using RSA key FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C gpg: Good signature from "AWS CLI Team <aws-cli@amazon.com>" [unknown] gpg: WARNING: This key is not certified with a trusted signature! gpg: There is no indication that the signature belongs to the owner.
Primary key fingerprint: FB5D B77F D5C1 18B8 0511 ADA8 A631 0ACC 4672 475C Important: The warning in the output is expected and does not indicate a problem. It occurs because there is no chain of trust between your personal PGP key (if you have one) and the AWS CLI PGP key. For more information, see Web of trust. Unzip the installer. If your Linux distribution does not have a built-in unzip command, use an equivalent to unzip it. The following example command unzips the package and creates a directory named aws in the current directory. $ unzip awscliv2.zip Note: When updating from a previous version, the unzip command prompts you to overwrite existing files. To skip these prompts, such as with script automation, use the -u update flag for unzip. This flag automatically updates existing files and creates new ones as needed. $ unzip -u awscliv2.zip Run the installer. The install command uses a file named install in the newly unzipped aws directory. By default, the files are all installed to /usr/local/aws-cli, and a symbolic link is created in /usr/local/bin. The command includes sudo to grant write permissions to those directories. $ sudo ./aws/install You can install without sudo if you specify directories that you already have write permissions to. Use the following instructions for the install command to specify the installation location: Ensure that the paths you provide to the -i and -b parameters contain no volume or directory names with spaces or other whitespace characters. If there is a space, the installation fails.
--install-dir or -i - This option specifies the directory to copy all of the files to. The default value is /usr/local/aws-cli. --bin-dir or -b - This option specifies that the main aws program in the install directory is symbolically linked to the file aws in the specified path. You must have write permissions to the specified directory. Creating a symlink to a directory that is already in your path eliminates the need to add the install directory to the user's $PATH variable. The default value is /usr/local/bin. $ ./aws/install -i /usr/local/aws-cli -b /usr/local/bin Note: To update your current installation of the AWS CLI, add your existing symlink and installer information to construct the install command with the --update parameter. $ sudo ./aws/install --bin-dir /usr/local/bin --install-dir /usr/local/aws-cli --update To locate the existing symlink and installation directory, use the following steps: Use the which command to find your symlink. This gives you the path to use with the --bin-dir parameter. $ which aws /usr/local/bin/aws Use the ls command to find the directory that your symlink points to. This gives you the path to use with the --install-dir parameter. $ ls -l /usr/local/bin/aws lrwxrwxrwx 1 ec2-user ec2-user 49 Oct 22 09:49 /usr/local/bin/aws -> /usr/local/aws-cli/v2/current/bin/aws Confirm the installation with the following command. $ aws --version aws-cli/2.27.41 Python/3.11.6 Linux/5.10.205-195.807.amzn2.x86_64 If the aws command cannot be found, you might need to restart your terminal or follow the troubleshooting steps in Troubleshooting errors for the AWS CLI. Snap package. We offer an official AWS-supported version of the AWS CLI on snap.
If you want the latest version of the AWS CLI always installed on your system, a snap package is the best choice, because it updates automatically. There is no built-in support for selecting minor versions of the AWS CLI, so this is not an optimal installation method if your team needs to pin versions. If you want to install a specific minor version of the AWS CLI, we recommend using the command line installer. If snap is not already installed on your Linux platform, install it. For snap installation information, see Installing the daemon in the Snap documentation. You might need to restart your system for your PATH variables to update correctly. If you have issues with the installation, follow the steps in Troubleshoot common issues in the Snap documentation. Run the following command to verify that snap is installed correctly. $ snap version Run the following snap install command for the AWS CLI: $ snap install aws-cli --classic Depending on your permissions, you might need to add sudo to the command. $ sudo snap install aws-cli --classic Note: For a view of the snap repository for the AWS CLI, including additional snap instructions, see the aws-cli page on the Canonical Snapcraft website. Verify that the AWS CLI installed correctly. $ aws --version aws-cli/2.27.41 Python/3.11.6 Linux/5.10.205-195.807.amzn2.x86_64 If you receive an error message, see Troubleshooting errors for the AWS CLI. Prerequisites for installing and updating (macOS). We support the AWS CLI on macOS versions 11 and later. For more information, see macOS support policy updates for the AWS CLI version 2 in the AWS Developer Tools Blog.
Because AWS does not maintain third-party repositories, we cannot guarantee that they contain the latest version of the AWS CLI. Support matrix for macOS versions: AWS CLI version 2.21.0 to current supports macOS 11; versions 2.17.0 to 2.20.0 support macOS 10.15; versions 2.0.0 to 2.16.12 support macOS 10.14 and earlier. Install or update the AWS CLI. When updating to the latest version, use the same installation method that you used for your current version. You can install the AWS CLI in the following ways. GUI installer. The following steps show how to install the latest version of the AWS CLI using the standard macOS user interface and your browser. In your browser, download the macOS pkg file: https://awscli.amazonaws.com/AWSCLIV2.pkg Run the downloaded file and follow the on-screen instructions. You can install the AWS CLI: For all users on the computer (requires sudo). You can install to any folder, or choose the recommended default folder of /usr/local/aws-cli. The installer automatically creates a symlink at /usr/local/bin/aws that links to the main program in the installation folder you chose. For only the current user (does not require sudo). You can install to any folder to which you have write permission. Because of standard user permissions, after the installer finishes you must manually create a symlink file in your $PATH that points to the aws and aws_completer programs by using the following commands at the command prompt.
The default location for a symlink is /usr/local/bin/: $ ln -s /folder/installed/aws-cli/aws /usr/local/bin/aws $ ln -s /folder/installed/aws-cli/aws_completer /usr/local/bin/aws_completer If you don't have write permissions to the folder, you might need to use sudo in your command. The following example uses sudo with the default symlink location of /usr/local/bin/: $ sudo ln -s /folder/installed/aws-cli/aws /usr/local/bin/aws $ sudo ln -s /folder/installed/aws-cli/aws_completer /usr/local/bin/aws_completer Note: You can view debug logs for the installation by pressing Ctrl+L anywhere in the installer. This opens a log pane that lets you filter and save the log. The log file is also automatically saved to /var/log/install.log. Use the following commands to verify that the shell can find and run the aws command in your $PATH. $ which aws /usr/local/bin/aws $ aws --version aws-cli/2.27.41 Python/3.11.6 Darwin/23.3.0 If the aws command cannot be found, you might need to restart your terminal or follow the troubleshooting steps in Troubleshooting errors for the AWS CLI. Command line installer - All users. If you have sudo permissions, you can install the AWS CLI for all users on the computer. We provide the steps in one easy-to-copy-and-paste group. See the descriptions of each line in the following steps. $ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" $ sudo installer -pkg AWSCLIV2.pkg -target / Guided install steps. Download the file using the curl command. The -o option specifies the file name that the downloaded package is written to.
In the previous example, the file is written to AWSCLIV2.pkg in the current directory. $ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" Run the standard macOS installer program, specifying the downloaded .pkg file as the source. Use the -pkg parameter to specify the name of the package to install, and the -target / parameter for the drive to install the package to. The files are installed to /usr/local/aws-cli, and a symlink is automatically created in /usr/local/bin. You must include sudo on the command to grant write permissions to those folders. $ sudo installer -pkg ./AWSCLIV2.pkg -target / After installation is complete, debug logs are written to /var/log/install.log. Use the following commands to verify that the shell can find and run the aws command in your $PATH. $ which aws /usr/local/bin/aws $ aws --version aws-cli/2.27.41 Python/3.11.6 Darwin/23.3.0 If the aws command cannot be found, you might need to restart your terminal or follow the troubleshooting steps in Troubleshooting errors for the AWS CLI. Command line - Current user. To specify which folder the AWS CLI is installed to, you must create an XML file; any file name works. The file is an XML-formatted file that looks like the following example. Leave all values as shown, except replace the path /Users/myusername in line 9 with the path to the folder where you want the AWS CLI installed. The folder must already exist, or the command fails. The following XML example, named choices.xml, tells the installer to install the AWS CLI in the folder /Users/myusername, where it creates a folder named aws-cli.
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
  <array>
    <dict>
      <key>choiceAttribute</key>
      <string>customLocation</string>
      <key>attributeSetting</key>
      <string>/Users/myusername</string>
      <key>choiceIdentifier</key>
      <string>default</string>
    </dict>
  </array>
</plist>
Download the pkg installer using the curl command. The -o option specifies the file name that the downloaded package is written to. In the previous example, the file is written to AWSCLIV2.pkg in the current directory. $ curl "https://awscli.amazonaws.com/AWSCLIV2.pkg" -o "AWSCLIV2.pkg" Run the standard macOS installer program with the following options: Specify the name of the package to install with the -pkg parameter. Specify a current-user-only installation by setting the -target parameter to CurrentUserHomeDirectory. Specify the path (relative to the current folder) and name of the XML file that you created with the -applyChoiceChangesXML parameter. The following example installs the AWS CLI in the folder /Users/myusername/aws-cli. $ installer -pkg AWSCLIV2.pkg \ -target CurrentUserHomeDirectory \ -applyChoiceChangesXML choices.xml Because standard user permissions typically don't allow writing to folders in the $PATH, the installer in this mode doesn't try to add the symlinks to the aws and aws_completer programs. For the AWS CLI to run correctly, you must manually create the symlinks after the installer finishes. If your $PATH includes a folder you can write to, you can run the following command without sudo if you specify that folder as the target's path.
If you don't have a writable folder in your $PATH, you must use sudo for permissions to write to the specified target folder. The default location for a symlink is /usr/local/bin/. Replace folder/installed with the path to your AWS CLI installation. $ sudo ln -s /folder/installed/aws-cli/aws /usr/local/bin/aws $ sudo ln -s /folder/installed/aws-cli/aws_completer /usr/local/bin/aws_completer After installation is complete, debug logs are written to /var/log/install.log. Use the following commands to verify that the shell can find and run the aws command in your $PATH. $ which aws /usr/local/bin/aws $ aws --version aws-cli/2.27.41 Python/3.11.6 Darwin/23.3.0 If the aws command cannot be found, you might need to restart your terminal or follow the troubleshooting steps in Troubleshooting errors for the AWS CLI. Prerequisites for installing and updating (Windows). We support the AWS CLI on Microsoft-supported 64-bit versions of Windows. You need administrator rights to install software. Install or update the AWS CLI. To update your current installation of the AWS CLI on Windows, download a new installer for each update to overwrite previous versions. The AWS CLI is updated regularly. To see when the latest version was released, see the AWS CLI version 2 changelog on GitHub. Download and run the AWS CLI MSI installer for Windows (64-bit): https://awscli.amazonaws.com/AWSCLIV2.msi Alternatively, you can run the msiexec command to run the MSI installer. C:\> msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi For information about the various parameters that can be used with msiexec, see msiexec on the Microsoft Docs website.
For example, you can use the /qn flag for a silent installation.

C:\> msiexec.exe /i https://awscli.amazonaws.com/AWSCLIV2.msi /qn

To confirm the installation, open the Start menu, search for cmd to open a command prompt window, and at the command prompt use the aws --version command.

C:\> aws --version
aws-cli/2.27.41 Python/3.11.6 Windows/10 exe/AMD64 prompt/off

If Windows is unable to find the program, you might need to close and reopen the command prompt window to refresh the path, or follow the troubleshooting steps in Troubleshoot errors for the AWS CLI.

Troubleshoot errors when installing and uninstalling the AWS CLI

If you encounter errors after installing or uninstalling the AWS CLI, see Troubleshoot errors for the AWS CLI for troubleshooting steps. The most relevant troubleshooting steps are Command not found errors, The "aws --version" command returns a different version than you installed, and The "aws --version" command returns a version after uninstalling the AWS CLI.

Next steps

After you successfully install the AWS CLI, you can safely delete the downloaded installer files. After completing the steps in Prerequisites to use the AWS CLI version 2 and installing the AWS CLI, you should perform the steps in Set up the AWS CLI.
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/fr_fr/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html | Lambda@Edge example functions - Amazon CloudFront (Amazon CloudFront Developer Guide)

Translations on this page are provided by machine translation tools. In the event of a conflict between the content of a translation and the original English version, the English version prevails.

Lambda@Edge example functions

See the following examples for using Lambda functions with Amazon CloudFront.

Note
If you choose the Node.js 18 or later runtime for your Lambda@Edge function, an index.mjs file is created automatically. To use the following code examples, rename the index.mjs file to index.js instead.

Topics
General examples
Generating responses - examples
Query strings - examples
Personalizing content by country or device type headers - examples
Content-based dynamic origin selection - examples
Updating error statuses - examples
Accessing the request body - examples

General examples

The following examples show common ways to use Lambda@Edge in CloudFront.

Topics
Example: A/B testing
Example: Overriding a response header

Example: A/B testing

You can use the following example to test two different versions of an image without creating redirects or changing the URL. This example reads the cookies in the viewer request and modifies the request URL accordingly.
If the viewer doesn't send a cookie with one of the expected values, the example randomly assigns the viewer to one of the URLs.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    if (request.uri !== '/experiment-pixel.jpg') {
        // do not process if this is not an A-B test request
        callback(null, request);
        return;
    }

    const cookieExperimentA = 'X-Experiment-Name=A';
    const cookieExperimentB = 'X-Experiment-Name=B';
    const pathExperimentA = '/experiment-group/control-pixel.jpg';
    const pathExperimentB = '/experiment-group/treatment-pixel.jpg';

    /*
     * Lambda at the Edge headers are array objects.
     *
     * Client may send multiple Cookie headers, i.e.:
     * > GET /viewerRes/test HTTP/1.1
     * > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
     * > Cookie: First=1; Second=2
     * > Cookie: ClientCode=abc
     * > Host: example.com
     *
     * You can access the first Cookie header at headers["cookie"][0].value
     * and the second at headers["cookie"][1].value.
     *
     * Header values are not parsed. In the example above,
     * headers["cookie"][0].value is equal to "First=1; Second=2"
     */
    let experimentUri;
    if (headers.cookie) {
        for (let i = 0; i < headers.cookie.length; i++) {
            if (headers.cookie[i].value.indexOf(cookieExperimentA) >= 0) {
                console.log('Experiment A cookie found');
                experimentUri = pathExperimentA;
                break;
            } else if (headers.cookie[i].value.indexOf(cookieExperimentB) >= 0) {
                console.log('Experiment B cookie found');
                experimentUri = pathExperimentB;
                break;
            }
        }
    }

    if (!experimentUri) {
        console.log('Experiment cookie has not been found.
Throwing dice...');
        if (Math.random() < 0.75) {
            experimentUri = pathExperimentA;
        } else {
            experimentUri = pathExperimentB;
        }
    }

    request.uri = experimentUri;
    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

import json
import random

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    if request['uri'] != '/experiment-pixel.jpg':
        # Not an A/B Test
        return request

    cookieExperimentA, cookieExperimentB = 'X-Experiment-Name=A', 'X-Experiment-Name=B'
    pathExperimentA, pathExperimentB = '/experiment-group/control-pixel.jpg', '/experiment-group/treatment-pixel.jpg'

    '''
    Lambda at the Edge headers are array objects.

    Client may send multiple cookie headers. For example:
    > GET /viewerRes/test HTTP/1.1
    > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
    > Cookie: First=1; Second=2
    > Cookie: ClientCode=abc
    > Host: example.com

    You can access the first Cookie header at headers["cookie"][0].value
    and the second at headers["cookie"][1].value.

    Header values are not parsed. In the example above,
    headers["cookie"][0].value is equal to "First=1; Second=2"
    '''
    experimentUri = ""

    for cookie in headers.get('cookie', []):
        if cookieExperimentA in cookie['value']:
            print("Experiment A cookie found")
            experimentUri = pathExperimentA
            break
        elif cookieExperimentB in cookie['value']:
            print("Experiment B cookie found")
            experimentUri = pathExperimentB
            break

    if not experimentUri:
        print("Experiment cookie has not been found. Throwing dice...")
        if random.random() < 0.75:
            experimentUri = pathExperimentA
        else:
            experimentUri = pathExperimentB

    request['uri'] = experimentUri
    print(f"Request uri set to {experimentUri}")
    return request

Example: Overriding a response header

The following example shows how to change the value of a response header based on the value of another header.
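The core of the technique is copying one header entry to another key in CloudFront's header format, where each header is a list of {key, value} dicts keyed by the lowercase header name. A quick standalone illustration in plain Python (the timestamp value is made up for the demo):

```python
# CloudFront-style headers: dict of lowercase header name -> list of {key, value}
headers = {
    'x-amz-meta-last-modified': [
        {'key': 'X-Amz-Meta-Last-Modified', 'value': '2016-10-01T13:56:22Z'}
    ]
}

src, dst = 'X-Amz-Meta-Last-Modified', 'Last-Modified'

# Copy the source header's value under the destination name, if present.
if headers.get(src.lower()):
    headers[dst.lower()] = [
        {'key': dst, 'value': headers[src.lower()][0]['value']}
    ]

print(headers['last-modified'][0]['value'])  # 2016-10-01T13:56:22Z
```

The full Node.js and Python handlers below do exactly this inside a Lambda@Edge origin-response trigger.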
Node.js

export const handler = async (event) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    const headerNameSrc = 'X-Amz-Meta-Last-Modified';
    const headerNameDst = 'Last-Modified';

    if (headers[headerNameSrc.toLowerCase()]) {
        headers[headerNameDst.toLowerCase()] = [{
            key: headerNameDst,
            value: headers[headerNameSrc.toLowerCase()][0].value,
        }];
        console.log(`Response header "${headerNameDst}" was set to ` +
                    `"${headers[headerNameDst.toLowerCase()][0].value}"`);
    }

    return response;
};

Python

import json

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    headers = response['headers']

    header_name_src = 'X-Amz-Meta-Last-Modified'
    header_name_dst = 'Last-Modified'

    if headers.get(header_name_src.lower()):
        headers[header_name_dst.lower()] = [{
            'key': header_name_dst,
            'value': headers[header_name_src.lower()][0]['value']
        }]
        print(f'Response header "{header_name_dst}" was set to '
              f'"{headers[header_name_dst.lower()][0]["value"]}"')

    return response

Generating responses - examples

The following examples show how to use Lambda@Edge to generate responses.

Topics
Example: Serving static content (generated response)
Example: Generating an HTTP redirect (generated response)

Example: Serving static content (generated response)

The following example shows how to use a Lambda function to serve static website content, which reduces the load on the origin server and reduces overall latency.

Note
You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers. You can also replace or remove the body of the HTTP response in origin response events.
For more information, see Updating HTTP responses in origin response triggers.

Node.js

'use strict';

const content = `
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
`;

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP OK response using 200 status code with HTML body.
     */
    const response = {
        status: '200',
        statusDescription: 'OK',
        headers: {
            'cache-control': [{
                key: 'Cache-Control',
                value: 'max-age=100'
            }],
            'content-type': [{
                key: 'Content-Type',
                value: 'text/html'
            }]
        },
        body: content,
    };
    callback(null, response);
};

Python

import json

CONTENT = """
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
"""

def lambda_handler(event, context):
    # Generate HTTP OK response using 200 status code with HTML body.
    response = {
        'status': '200',
        'statusDescription': 'OK',
        'headers': {
            'cache-control': [{
                'key': 'Cache-Control',
                'value': 'max-age=100'
            }],
            'content-type': [{
                'key': 'Content-Type',
                'value': 'text/html'
            }]
        },
        'body': CONTENT
    }
    return response

Example: Generating an HTTP redirect (generated response)

The following example shows how to generate an HTTP redirect.

Note
You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP redirect response with 302 status code and Location header.
     */
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html',
            }],
        },
    };
    callback(null, response);
};

Python

def lambda_handler(event, context):
    # Generate HTTP redirect response with 302 status code and Location header.
    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html'
            }]
        }
    }
    return response

Query strings - examples

The following examples show how to use Lambda@Edge with query strings.

Topics
Example: Adding a header based on a query string parameter
Example: Normalizing query string parameters to improve the cache hit ratio
Example: Redirecting unauthenticated users to a sign-in page

Example: Adding a header based on a query string parameter

The following example shows how to get the key-value pair of a query string parameter, and then add a header based on those values.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When a request contains a query string key-value pair but the origin server
     * expects the value in a header, you can use this Lambda function to
     * convert the key-value pair to a header. Here's what the function does:
     * 1. Parses the query string and gets the key-value pair.
     * 2. Adds a header to the request using the key-value pair that the function got in step 1.
     */

    /* Parse request querystring to get javascript object */
    const params = querystring.parse(request.querystring);

    /* Move auth param from querystring to headers */
    const headerName = 'Auth-Header';
    request.headers[headerName.toLowerCase()] = [{ key: headerName, value: params.auth }];
    delete params.auth;

    /* Update request querystring */
    request.querystring = querystring.stringify(params);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When a request contains a query string key-value pair but the origin server
    expects the value in a header, you can use this Lambda function to
    convert the key-value pair to a header. Here's what the function does:
    1. Parses the query string and gets the key-value pair.
    2. Adds a header to the request using the key-value pair that the function got in step 1.
    '''
    # Parse request querystring to get dictionary/json
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}

    # Move auth param from querystring to headers
    headerName = 'Auth-Header'
    request['headers'][headerName.lower()] = [{'key': headerName, 'value': params['auth']}]
    del params['auth']

    # Update request querystring
    request['querystring'] = urlencode(params)

    return request

Example: Normalizing query string parameters to improve the cache hit ratio

The following example shows how to improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin:

Alphabetize key-value pairs by the name of the parameter.
Change the case of key-value pairs to lowercase.

For more information, see Caching content based on query string parameters.
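The effect of this normalization can be seen in isolation with Python's urllib, outside of any Lambda handler (the query strings are made-up samples):

```python
from urllib.parse import parse_qs, urlencode

def normalize(querystring):
    # Lowercase the whole query string, then re-emit the parameters
    # sorted alphabetically by name.
    params = {k: v[0] for k, v in parse_qs(querystring.lower()).items()}
    return urlencode(sorted(params.items()))

# Differently ordered and cased variants collapse to a single cache key.
print(normalize('Size=LG&Color=Red'))  # color=red&size=lg
print(normalize('color=red&size=lg'))  # color=red&size=lg
```

The handlers below apply the same transformation to `request.querystring` inside the trigger.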
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When you configure a distribution to forward query strings to the origin and
     * to cache based on an allowlist of query string parameters, we recommend
     * the following to improve the cache-hit ratio:
     * - Always list parameters in the same order.
     * - Use the same case for parameter names and values.
     *
     * This function normalizes query strings so that parameter names and values
     * are lowercase and parameter names are in alphabetical order.
     *
     * For more information, see:
     * https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
     */
    console.log('Query String: ', request.querystring);

    /* Parse request query string to get javascript object */
    const params = querystring.parse(request.querystring.toLowerCase());
    const sortedParams = {};

    /* Sort param keys */
    Object.keys(params).sort().forEach(key => {
        sortedParams[key] = params[key];
    });

    /* Update request querystring with normalized */
    request.querystring = querystring.stringify(sortedParams);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When you configure a distribution to forward query strings to the origin
    and to cache based on an allowlist of query string parameters, we recommend
    the following to improve the cache-hit ratio:
    - Always list parameters in the same order.
    - Use the same case for parameter names and values.

    This function normalizes query strings so that parameter names and values
    are lowercase and parameter names are in alphabetical order.
    For more information, see:
    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
    '''
    print("Query string: ", request["querystring"])

    # Parse request query string to get js object
    params = {k: v[0] for k, v in parse_qs(request['querystring'].lower()).items()}

    # Sort param keys
    sortedParams = sorted(params.items(), key=lambda x: x[0])

    # Update request querystring with normalized
    request['querystring'] = urlencode(sortedParams)

    return request

Example: Redirecting unauthenticated users to a sign-in page

The following example shows how to redirect users to a sign-in page if they haven't entered their credentials.

Node.js

'use strict';

function parseCookies(headers) {
    const parsedCookie = {};
    if (headers.cookie) {
        headers.cookie[0].value.split(';').forEach((cookie) => {
            if (cookie) {
                const parts = cookie.split('=');
                parsedCookie[parts[0].trim()] = parts[1].trim();
            }
        });
    }
    return parsedCookie;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /* Check for session-id in request cookie in viewer-request event,
     * if session-id is absent, redirect the user to sign in page with original
     * request sent as redirect_url in query params.
     */

    /* Check for session-id in cookie, if present then proceed with request */
    const parsedCookies = parseCookies(headers);
    if (parsedCookies && parsedCookies['session-id']) {
        callback(null, request);
        return;
    }

    /* URI encode the original request to be sent as redirect_url in query params */
    const encodedRedirectUrl = encodeURIComponent(`https://${headers.host[0].value}${request.uri}?${request.querystring}`);
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: `https://www.example.com/signin?redirect_url=${encodedRedirectUrl}`,
            }],
        },
    };
    callback(null, response);
};

Python

import urllib.parse

def parseCookies(headers):
    parsedCookie = {}
    if headers.get('cookie'):
        for cookie in headers['cookie'][0]['value'].split(';'):
            if cookie:
                parts = cookie.split('=')
                parsedCookie[parts[0].strip()] = parts[1].strip()
    return parsedCookie

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Check for session-id in request cookie in viewer-request event,
    if session-id is absent, redirect the user to sign in page with original
    request sent as redirect_url in query params.
    '''
    # Check for session-id in cookie, if present, then proceed with request
    parsedCookies = parseCookies(headers)
    if parsedCookies and parsedCookies['session-id']:
        return request

    # URI encode the original request to be sent as redirect_url in query params
    redirectUrl = "https://%s%s?%s" % (headers['host'][0]['value'], request['uri'], request['querystring'])
    encodedRedirectUrl = urllib.parse.quote_plus(redirectUrl.encode('utf-8'))
    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://www.example.com/signin?redirect_url=%s' % encodedRedirectUrl
            }]
        }
    }
    return response

Personalizing content by country or device type headers - examples

The following examples show how you can use Lambda@Edge to customize behavior based on the location or the type of device that the viewer is using.

Topics
Example: Redirecting viewer requests to a country-specific URL
Example: Serving different versions of an object based on the device

Example: Redirecting viewer requests to a country-specific URL

The following example shows how to generate an HTTP redirect response with a country-specific URL and return the response to the viewer. This is useful when you want to provide country-specific responses. For example:

If you have country-specific subdomains, such as us.example.com and tw.example.com, you can generate a redirect response when a viewer requests example.com.

If you're streaming video but you don't have the rights to stream the content in a specific country, you can redirect viewers in that country to a page that explains why they can't watch the video.

Note the following:

You must configure your distribution to cache based on the CloudFront-Viewer-Country header.
For more information, see Caching content based on selected request headers.

CloudFront adds the CloudFront-Viewer-Country header after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Based on the value of the CloudFront-Viewer-Country header, generate an
     * HTTP status code 302 (Redirect) response, and return a country-specific
     * URL in the Location header.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    let url = 'https://example.com/';
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'TW') {
            url = 'https://tw.example.com/';
        } else if (countryCode === 'US') {
            url = 'https://us.example.com/';
        }
    }

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: url,
            }],
        },
    };
    callback(null, response);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Based on the value of the CloudFront-Viewer-Country header, generate an
    HTTP status code 302 (Redirect) response, and return a country-specific
    URL in the Location header.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header.
             For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': url
            }]
        }
    }
    return response

Example: Serving different versions of an object based on the device

The following example shows how to serve different versions of an object based on the type of device that the user is using, for example, a mobile device or a tablet. Note the following:

You must configure your distribution to cache based on the CloudFront-Is-*-Viewer headers. For more information, see Caching content based on selected request headers.

CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2.
          CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer']
        && headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer']
               && headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer']
               && headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer']
               && headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Serve different versions of an object based on the device type.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Is-*-Viewer headers. For more information, see
             the following documentation:
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
             https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
          2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''
    desktopPath = '/desktop'
    mobilePath = '/mobile'
    tabletPath = '/tablet'
    smarttvPath = '/smarttv'

    if 'cloudfront-is-desktop-viewer' in headers and headers['cloudfront-is-desktop-viewer'][0]['value'] == 'true':
        request['uri'] = desktopPath + request['uri']
    elif 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = mobilePath + request['uri']
    elif 'cloudfront-is-tablet-viewer' in headers and headers['cloudfront-is-tablet-viewer'][0]['value'] == 'true':
        request['uri'] = tabletPath + request['uri']
    elif 'cloudfront-is-smarttv-viewer' in headers and headers['cloudfront-is-smarttv-viewer'][0]['value'] == 'true':
        request['uri'] = smarttvPath + request['uri']

    print("Request uri set to %s" % request['uri'])
    return request

Content-based dynamic origin selection - examples

The following examples show how you can use Lambda@Edge to route to different origins based on information in the request.
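The examples in this section all follow one pattern: an origin-request handler inspects the request, then rewrites `request['origin']` (and the `host` header, which must match the new origin's domain name) before CloudFront contacts the origin. A minimal Python sketch of that pattern, with a hypothetical bucket domain name and a routing rule of our own invention:

```python
def route_origin(request):
    # Toy origin-request logic: send /static/* to an S3 origin.
    # The bucket domain name is hypothetical; a real handler extracts the
    # request from the CloudFront event and returns it from lambda_handler.
    if request['uri'].startswith('/static/'):
        s3_domain = 'example-static-assets.s3.amazonaws.com'
        request['origin'] = {
            's3': {
                'domainName': s3_domain,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        # The host header must be updated to match the new origin.
        request['headers']['host'] = [{'key': 'host', 'value': s3_domain}]
    return request

request = {'uri': '/static/logo.png', 'headers': {}, 'querystring': ''}
routed = route_origin(request)
print(routed['origin']['s3']['domainName'])  # example-static-assets.s3.amazonaws.com
```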
Topics
Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin
Example: Using an origin request trigger to change the Amazon S3 origin Region
Example: Using an origin request trigger to change from an Amazon S3 origin to a custom origin
Example: Using an origin request trigger to gradually transfer traffic from one Amazon S3 bucket to another
Example: Using an origin request trigger to change the origin domain name based on the country header

Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin

This function demonstrates how an origin request trigger can be used to change from a custom origin to an Amazon S3 origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if S3 origin should be used, and
     * if true, sets S3 origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useS3Origin']) {
        if (params['useS3Origin'] === 'true') {
            const s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com';

            /* Set S3 origin fields */
            request.origin = {
                s3: {
                    domainName: s3DomainName,
                    region: '',
                    authMethod: 'origin-access-identity',
                    path: '',
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: s3DomainName }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    Reads query string to check if S3 origin should be used, and
    if true, sets S3 origin properties
    '''
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}

    if params.get('useS3Origin') == 'true':
        s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com'

        # Set S3 origin fields
        request['origin'] = {
            's3': {
                'domainName': s3DomainName,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': s3DomainName}]

    return request

Example: Using an origin request trigger to change the Amazon S3 origin Region

This function demonstrates how an origin request trigger can be used to change the Amazon S3 origin from which the content is fetched, based on request properties.

In this example, we use the value of the CloudFront-Viewer-Country header to update the S3 bucket domain name to a bucket in a Region that is closer to the viewer. This can be useful in several ways:

It reduces latencies when the Region specified is nearer to the viewer's country.

It provides data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.
To use this example, you must do the following:

Configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.

Create a trigger for this function in the origin request event. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event, so to use this example you must make sure that the function executes for an origin request.

Note
The following example code uses the same origin access identity (OAI) for all of the S3 buckets that you use as an origin. For more information, see Origin access identity.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * This blueprint demonstrates how an origin-request trigger can be used to
     * change the origin from which the content is fetched, based on request properties.
     * In this example, we use the value of the CloudFront-Viewer-Country header
     * to update the S3 bucket domain name to a bucket in a Region that is closer to
     * the viewer.
     *
     * This can be useful in several ways:
     *      1) Reduces latencies when the Region specified is nearer to the viewer's
     *         country.
     *      2) Provides data sovereignty by making sure that data is served from an
     *         origin that's in the same country that the request came from.
     *
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];

        /**
         * If the viewer's country is not in the list you specify, the request
         * goes to the default S3 bucket you've configured.
         */
        if (region) {
            /**
             * If you've set up OAI, the bucket policy in the destination bucket
             * should allow the OAI GetObject operation, as configured by default
             * for an S3 origin with OAI. Another requirement with OAI is to provide
             * the Region so it can be used for the SIGV4 signature. Otherwise, the
             * Region is not required.
             */
            request.origin.s3.region = region;
            const domainName = `amzn-s3-demo-bucket-in-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    This blueprint demonstrates how an origin-request trigger can be used to
    change the origin from which the content is fetched, based on request properties.
    In this example, we use the value of the CloudFront-Viewer-Country header
    to update the S3 bucket domain name to a bucket in a Region that is closer to
    the viewer.

    This can be useful in several ways:
        1) Reduces latencies when the Region specified is nearer to the viewer's
           country.
        2) Provides data sovereignty by making sure that data is served from an
           origin that's in the same country that the request came from.

    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event.
             To use this example, you must create a trigger for the
             origin request event.
    '''
    countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    }

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        region = countryToRegion.get(countryCode)
        # If the viewer's country is not in the list you specify, the request
        # goes to the default S3 bucket you've configured
        if region:
            '''
            If you've set up OAI, the bucket policy in the destination bucket
            should allow the OAI GetObject operation, as configured by default
            for an S3 origin with OAI. Another requirement with OAI is to provide
            the Region so it can be used for the SIGV4 signature. Otherwise, the
            Region is not required.
            '''
            request['origin']['s3']['region'] = region
            domainName = 'amzn-s3-demo-bucket-in-{0}.s3.{0}.amazonaws.com'.format(region)
            request['origin']['s3']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]
    return request

Example: Using an origin-request trigger to change from an Amazon S3 origin to a custom origin

This function demonstrates how an origin-request trigger can be used to change the custom origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if custom origin should be used, and
     * if true, sets custom origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useCustomOrigin']) {
        if (params['useCustomOrigin'] === 'true') {

            /* Set custom origin fields */
            request.origin = {
                custom: {
                    domainName: 'www.example.com',
                    port: 443,
                    protocol: 'https',
                    path: '',
                    sslProtocols: ['TLSv1', 'TLSv1.1'],
                    readTimeout: 5,
                    keepaliveTimeout: 5,
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: 'www.example.com' }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Reads query string to check if custom origin should be used, and
    # if true, sets custom origin properties
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useCustomOrigin') == 'true':
        # Set custom origin fields
        request['origin'] = {
            'custom': {
                'domainName': 'www.example.com',
                'port': 443,
                'protocol': 'https',
                'path': '',
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'readTimeout': 5,
                'keepaliveTimeout': 5,
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': 'www.example.com'}]
    return request

Example: Using an origin-request trigger to gradually transfer traffic from one Amazon S3 bucket to another

This function demonstrates how you can gradually transfer traffic from one Amazon S3 bucket to another, in a controlled way.

Node.js

'use strict';

function getRandomInt(min, max) {
    /* Random number is inclusive of min and max */
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const BLUE_TRAFFIC_PERCENTAGE = 80;

    /**
     * This Lambda function demonstrates how to gradually transfer traffic from
     * one S3 bucket to another in a controlled way.
     * We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
     * 1 to 100.
     * If the generated randomNumber is less than or equal to BLUE_TRAFFIC_PERCENTAGE,
     * traffic is re-directed to blue-bucket. If not, the default bucket that we've
     * configured is used.
     */
    const randomNumber = getRandomInt(1, 100);

    if (randomNumber <= BLUE_TRAFFIC_PERCENTAGE) {
        const domainName = 'blue-bucket.s3.amazonaws.com';
        request.origin.s3.domainName = domainName;
        request.headers['host'] = [{ key: 'host', value: domainName }];
    }

    callback(null, request);
};

Python

import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max
    return math.floor(random.random() * (max - min + 1)) + min

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    BLUE_TRAFFIC_PERCENTAGE = 80
    '''
    This Lambda function demonstrates how to gradually transfer traffic from
    one S3 bucket to another in a controlled way.
    We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
    1 to 100.
    If the generated randomNumber is less than or equal to BLUE_TRAFFIC_PERCENTAGE,
    traffic is re-directed to blue-bucket. If not, the default bucket that we've
    configured is used.
    '''
    randomNumber = getRandomInt(1, 100)
    if randomNumber <= BLUE_TRAFFIC_PERCENTAGE:
        domainName = 'blue-bucket.s3.amazonaws.com'
        request['origin']['s3']['domainName'] = domainName
        request['headers']['host'] = [{'key': 'host', 'value': domainName}]
    return request

Example: Using an origin-request trigger to change the origin domain name based on the country header

This function demonstrates how you can change the origin domain name based on the CloudFront-Viewer-Country header, so that content is served from an origin closer to the viewer's country.
Implementing this functionality for your distribution can have the following benefits:

Reduced latency when the specified Region is closer to the viewer's country

Data sovereignty, by making sure that data is served from an origin that is in the same country that the request came from

Note that to enable this functionality, you must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'GB' || countryCode === 'DE' || countryCode === 'IE') {
            const domainName = 'eu.example.com';
            request.origin.custom.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'GB' or countryCode == 'DE' or countryCode == 'IE':
            domainName = 'eu.example.com'
            request['origin']['custom']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]
    return request

Updating error statuses: examples

The following examples provide guidance for how you can use Lambda@Edge to change the error status that is returned to viewers.
Topics

Example: Using an origin-response trigger to update the error status code to 200

Example: Using an origin-response trigger to update the error status code to 302

Example: Using an origin-response trigger to update the error status code to 200

This function demonstrates how you can update the response status to 200 and generate static body content to return to the viewer in the following scenario:

The function is triggered in an origin response.

The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;

    /**
     * This function updates the response status to 200 and generates static
     * body content to return to the viewer in the following scenario:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        response.status = 200;
        response.statusDescription = 'OK';
        response.body = 'Body generation example';
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    '''
    This function updates the response status to 200 and generates static
    body content to return to the viewer in the following scenario:
    1. The function is triggered in an origin response
    2.
    The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        response['status'] = 200
        response['statusDescription'] = 'OK'
        response['body'] = 'Body generation example'
    return response

Example: Using an origin-response trigger to update the error status code to 302

This function demonstrates how you can update the HTTP status code to 302 to redirect to another path (cache behavior) that has a different origin configured. Note the following:

The function is triggered in an origin response.

The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const request = event.Records[0].cf.request;

    /**
     * This function updates the HTTP status code in the response to 302, to redirect to another
     * path (cache behavior) that has a different origin configured. Note the following:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        const redirect_path = `/plan-b/path?${request.querystring}`;

        response.status = 302;
        response.statusDescription = 'Found';

        /* Drop the body, as it is not required for redirects */
        response.body = '';
        response.headers['location'] = [{ key: 'Location', value: redirect_path }];
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    request = event['Records'][0]['cf']['request']
    '''
    This function updates the HTTP status code in the response to 302, to redirect to another
    path (cache behavior) that has a different origin configured. Note the following:
    1. The function is triggered in an origin response
    2.
    The response status from the origin server is an error status code (4xx or 5xx)
    '''
    if int(response['status']) >= 400 and int(response['status']) <= 599:
        redirect_path = '/plan-b/path?%s' % request['querystring']

        response['status'] = 302
        response['statusDescription'] = 'Found'

        # Drop the body as it is not required for redirects
        response['body'] = ''
        response['headers']['location'] = [{'key': 'Location', 'value': redirect_path}]
    return response

Accessing the request body: examples

The following examples demonstrate how you can use Lambda@Edge to work with POST requests.

Note

To use these examples, you must enable the include body option in the distribution's Lambda function association. It is not enabled by default. To enable this setting in the CloudFront console, select the check box for Include Body in the Lambda Function Association. To enable this setting in the CloudFront API or with AWS CloudFormation, set the IncludeBody field to true in LambdaFunctionAssociation.

Topics

Example: Using a request trigger to read an HTML form

Example: Using a request trigger to modify an HTML form

Example: Using a request trigger to read an HTML form

This function demonstrates how you can process the body of a POST request generated by an HTML form (web form), such as a "contact us" form. For example, you might have an HTML form like the following:

<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>

For the example function that follows, the function must be triggered in a CloudFront viewer request or origin request.
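The IncludeBody setting mentioned in the note above can also be expressed in an AWS CloudFormation template. The following fragment is a sketch of a cache behavior's LambdaFunctionAssociations property; the function name and ARN are placeholders, not part of the examples in this guide:

```yaml
# Fragment of a cache behavior in an AWS::CloudFront::Distribution resource.
# Sketch only -- the Lambda function ARN below is a placeholder and must point
# to a published function version (not $LATEST) in us-east-1.
LambdaFunctionAssociations:
  - EventType: viewer-request
    IncludeBody: true   # exposes the request body to the function; off by default
    LambdaFunctionARN: arn:aws:lambda:us-east-1:111122223333:function:read-form:1
```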
Node.js

'use strict';

const querystring = require('querystring');

/**
 * This function demonstrates how you can read the body of a POST request
 * generated by an HTML form (web form). The function is triggered in a
 * CloudFront viewer request or origin request event type.
 */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.method === 'POST') {
        /* HTTP body is always passed as base64-encoded string. Decode it. */
        const body = Buffer.from(request.body.data, 'base64').toString();

        /* HTML forms send the data in query string format. Parse it. */
        const params = querystring.parse(body);

        /* For demonstration purposes, we only log the form fields here.
         * You can put your custom logic here. For example, you can store the
         * fields in a database, such as Amazon DynamoDB, and generate a response
         * right from your Lambda@Edge function.
         */
        for (let param in params) {
            console.log(`For "${param}" user submitted "${params[param]}".\n`);
        }
    }
    return callback(null, request);
};

Python

import base64
from urllib.parse import parse_qs

'''
Say there is a POST request body generated by an HTML form such as:
<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>
'''

'''
This function demonstrates how you can read the body of a POST request
generated by an HTML form (web form). The function is triggered in a
CloudFront viewer request or origin request event type.
'''
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['method'] == 'POST':
        # HTTP body is always passed as base64-encoded string. Decode it
        body = base64.b64decode(request['body']['data'])

        # HTML forms send the data in query string format. Parse it
        params = {k: v[0] for k, v in parse_qs(body).items()}

        '''
        For demonstration purposes, we only log the form fields here.
        You can put your custom logic here. For example, you can store the
        fields in a database, such as Amazon DynamoDB, and generate a response
        right from your Lambda@Edge function.
        '''
        for key, value in params.items():
            print("For %s user submitted %s" % (key, value))
    return request

Example: Using a request trigger to modify an HTML form

This function demonstrates how you can modify the body of a POST request generated by an HTML form (web form). The function is triggered in a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    if (request.method === 'POST') {
        /* Request body is being replaced. To do this, update the following
         * three fields:
         *    1) body.action to 'replace'
         *    2) body.encoding to the encoding of the new data.
         *
         *       Set to one of the following values:
         *
         *           text - denotes that the generated body is in text format.
         *               Lambda@Edge will propagate this as is.
         *           base64 - denotes that the generated body is base64 encoded.
         *               Lambda@Edge will base64 decode the data before sending
         *               it to the origin.
         *    3) body.data to the new body.
         */
        request.body.action = 'replace';
        request.body.encoding = 'text';
        request.body.data = getUpdatedBody(request);
    }
    callback(null, request);
};

function getUpdatedBody(request) {
    /* HTTP body is always passed as base64-encoded string. Decode it. */
    const body = Buffer.from(request.body.data, 'base64').toString();

    /* HTML forms send data in query string format. Parse it. */
    const params = querystring.parse(body);

    /* For demonstration purposes, we're adding one more param.
     *
     * You can put your custom logic here. For example, you can t
https://docs.aws.amazon.com/ja_jp/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html

Getting started with CloudFront - Amazon CloudFront

Amazon CloudFront Developer Guide

Getting started with CloudFront

The topics in this section explain how to get started delivering your content with Amazon CloudFront.

The Set up your AWS account topic explains the prerequisites for the following tutorials, such as creating an AWS account and creating a user with administrative access.

The basic distribution tutorial shows you how to set up origin access control (OAC) to send authenticated requests to an Amazon S3 origin.

The secure static website tutorial shows you how to create a secure static website for your domain name with an Amazon S3 origin. The tutorial uses an Amazon CloudFront (CloudFront) template for configuration and deployment.

Topics

Set up your AWS account

Get started with a CloudFront standard distribution

Get started with a standard distribution (AWS CLI)

Get started with a secure static website
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-general-examples

Lambda@Edge example functions - Amazon CloudFront

Amazon CloudFront Developer Guide

Lambda@Edge example functions

See the following examples of using Lambda functions with Amazon CloudFront.

Note

If you choose the Node.js 18 or later runtime for your Lambda@Edge function, an index.mjs file is created automatically. To use the following code examples, rename the index.mjs file to index.js.

Topics

General examples

Generating responses: examples

Working with query strings: examples

Personalizing content by country or device type headers: examples

Content-based dynamic origin selection: examples

Updating error statuses: examples

Accessing the request body: examples

General examples

The following examples show common ways of using Lambda@Edge in CloudFront.

Topics

Example: A/B testing

Example: Overriding a response header

Example: A/B testing

You can use the following example to test two different versions of an image without creating redirects or changing the URL. This example reads the cookies in the viewer request and modifies the request URL accordingly. If the viewer doesn't send a cookie with one of the expected values, the example randomly assigns the viewer to one of the URLs.
Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    if (request.uri !== '/experiment-pixel.jpg') {
        // do not process if this is not an A-B test request
        callback(null, request);
        return;
    }

    const cookieExperimentA = 'X-Experiment-Name=A';
    const cookieExperimentB = 'X-Experiment-Name=B';
    const pathExperimentA = '/experiment-group/control-pixel.jpg';
    const pathExperimentB = '/experiment-group/treatment-pixel.jpg';

    /*
     * Lambda at the Edge headers are array objects.
     *
     * Client may send multiple Cookie headers, i.e.:
     * > GET /viewerRes/test HTTP/1.1
     * > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
     * > Cookie: First=1; Second=2
     * > Cookie: ClientCode=abc
     * > Host: example.com
     *
     * You can access the first Cookie header at headers["cookie"][0].value
     * and the second at headers["cookie"][1].value.
     *
     * Header values are not parsed. In the example above,
     * headers["cookie"][0].value is equal to "First=1; Second=2"
     */
    let experimentUri;
    if (headers.cookie) {
        for (let i = 0; i < headers.cookie.length; i++) {
            if (headers.cookie[i].value.indexOf(cookieExperimentA) >= 0) {
                console.log('Experiment A cookie found');
                experimentUri = pathExperimentA;
                break;
            } else if (headers.cookie[i].value.indexOf(cookieExperimentB) >= 0) {
                console.log('Experiment B cookie found');
                experimentUri = pathExperimentB;
                break;
            }
        }
    }

    if (!experimentUri) {
        console.log('Experiment cookie has not been found.
Throwing dice...');
        if (Math.random() < 0.75) {
            experimentUri = pathExperimentA;
        } else {
            experimentUri = pathExperimentB;
        }
    }

    request.uri = experimentUri;
    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

import json
import random

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']

    if request['uri'] != '/experiment-pixel.jpg':
        # Not an A/B Test
        return request

    cookieExperimentA, cookieExperimentB = 'X-Experiment-Name=A', 'X-Experiment-Name=B'
    pathExperimentA, pathExperimentB = '/experiment-group/control-pixel.jpg', '/experiment-group/treatment-pixel.jpg'

    '''
    Lambda at the Edge headers are array objects.

    Client may send multiple cookie headers. For example:
    > GET /viewerRes/test HTTP/1.1
    > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3
    > Cookie: First=1; Second=2
    > Cookie: ClientCode=abc
    > Host: example.com

    You can access the first Cookie header at headers["cookie"][0].value
    and the second at headers["cookie"][1].value.

    Header values are not parsed. In the example above,
    headers["cookie"][0].value is equal to "First=1; Second=2"
    '''
    experimentUri = ""

    for cookie in headers.get('cookie', []):
        if cookieExperimentA in cookie['value']:
            print("Experiment A cookie found")
            experimentUri = pathExperimentA
            break
        elif cookieExperimentB in cookie['value']:
            print("Experiment B cookie found")
            experimentUri = pathExperimentB
            break

    if not experimentUri:
        print("Experiment cookie has not been found. Throwing dice...")
        if random.random() < 0.75:
            experimentUri = pathExperimentA
        else:
            experimentUri = pathExperimentB

    request['uri'] = experimentUri
    print(f"Request uri set to {experimentUri}")
    return request

Example: Overriding a response header

The following example shows how to change the value of a response header based on the value of another header.
Node.js

export const handler = async (event) => {
    const response = event.Records[0].cf.response;
    const headers = response.headers;

    const headerNameSrc = 'X-Amz-Meta-Last-Modified';
    const headerNameDst = 'Last-Modified';

    if (headers[headerNameSrc.toLowerCase()]) {
        headers[headerNameDst.toLowerCase()] = [{
            key: headerNameDst,
            value: headers[headerNameSrc.toLowerCase()][0].value,
        }];
        console.log(`Response header "${headerNameDst}" was set to ` +
                    `"${headers[headerNameDst.toLowerCase()][0].value}"`);
    }

    return response;
};

Python

import json

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    headers = response['headers']

    header_name_src = 'X-Amz-Meta-Last-Modified'
    header_name_dst = 'Last-Modified'

    if headers.get(header_name_src.lower()):
        headers[header_name_dst.lower()] = [{
            'key': header_name_dst,
            'value': headers[header_name_src.lower()][0]['value']
        }]
        print(f'Response header "{header_name_dst}" was set to '
              f'"{headers[header_name_dst.lower()][0]["value"]}"')

    return response

Generating responses: examples

The following examples show how you can use Lambda@Edge to generate responses.

Topics

Example: Serving static content (generated response)

Example: Generating an HTTP redirect (generated response)

Example: Serving static content (generated response)

The following example shows how to use a Lambda function to serve static website content, which reduces the load on the origin server and reduces overall latency.

Note

You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers. You can also replace or remove the body of the HTTP response in origin response events. For more information, see Updating HTTP responses in origin response triggers.
Node.js

'use strict';

const content = `
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
`;

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP OK response using 200 status code with HTML body.
     */
    const response = {
        status: '200',
        statusDescription: 'OK',
        headers: {
            'cache-control': [{
                key: 'Cache-Control',
                value: 'max-age=100'
            }],
            'content-type': [{
                key: 'Content-Type',
                value: 'text/html'
            }]
        },
        body: content,
    };
    callback(null, response);
};

Python

import json

CONTENT = """
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>Simple Lambda@Edge Static Content Response</title>
  </head>
  <body>
    <p>Hello from Lambda@Edge!</p>
  </body>
</html>
"""

def lambda_handler(event, context):
    # Generate HTTP OK response using 200 status code with HTML body.
    response = {
        'status': '200',
        'statusDescription': 'OK',
        'headers': {
            'cache-control': [{
                'key': 'Cache-Control',
                'value': 'max-age=100'
            }],
            'content-type': [{
                'key': 'Content-Type',
                'value': 'text/html'
            }]
        },
        'body': CONTENT
    }
    return response

Example: Generating an HTTP redirect (generated response)

The following example shows how to generate an HTTP redirect.

Note

You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    /*
     * Generate HTTP redirect response with 302 status code and Location header.
     */
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html',
            }],
        },
    };

    callback(null, response);
};

Python

def lambda_handler(event, context):
    # Generate HTTP redirect response with 302 status code and Location header.
    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html'
            }]
        }
    }
    return response

Working with query strings: examples

The following examples show ways that you can use Lambda@Edge with query strings.

Topics

Example: Adding a header based on a query string parameter

Example: Normalizing query string parameters to improve the cache hit ratio

Example: Redirecting unauthenticated users to a sign-in page

Example: Adding a header based on a query string parameter

The following example shows how to get the key-value pair of a query string parameter, and then add a header based on those values.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When a request contains a query string key-value pair but the origin server
     * expects the value in a header, you can use this Lambda function to
     * convert the key-value pair to a header. Here's what the function does:
     * 1. Parses the query string and gets the key-value pair.
     * 2. Adds a header to the request using the key-value pair that the function got in step 1.
     */

    /* Parse request querystring to get javascript object */
    const params = querystring.parse(request.querystring);

    /* Move auth param from querystring to headers */
    const headerName = 'Auth-Header';
    request.headers[headerName.toLowerCase()] = [{ key: headerName, value: params.auth }];
    delete params.auth;

    /* Update request querystring */
    request.querystring = querystring.stringify(params);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When a request contains a query string key-value pair but the origin server
    expects the value in a header, you can use this Lambda function to
    convert the key-value pair to a header. Here's what the function does:
    1. Parses the query string and gets the key-value pair.
    2. Adds a header to the request using the key-value pair that the function got in step 1.
    '''
    # Parse request querystring to get dictionary/json
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}

    # Move auth param from querystring to headers
    headerName = 'Auth-Header'
    request['headers'][headerName.lower()] = [{'key': headerName, 'value': params['auth']}]
    del params['auth']

    # Update request querystring
    request['querystring'] = urlencode(params)

    return request

Example: Normalizing query string parameters to improve the cache hit ratio

The following example shows how to improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin:

Alphabetize key-value pairs by the name of the parameter.

Change the case of key-value pairs to lowercase.

For more information, see Caching content based on query string parameters.
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When you configure a distribution to forward query strings to the origin and
     * to cache based on an allowlist of query string parameters, we recommend
     * the following to improve the cache-hit ratio:
     * - Always list parameters in the same order.
     * - Use the same case for parameter names and values.
     *
     * This function normalizes query strings so that parameter names and values
     * are lowercase and parameter names are in alphabetical order.
     *
     * For more information, see:
     * https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
     */

    console.log('Query String: ', request.querystring);

    /* Parse request query string to get javascript object */
    const params = querystring.parse(request.querystring.toLowerCase());
    const sortedParams = {};

    /* Sort param keys */
    Object.keys(params).sort().forEach(key => {
        sortedParams[key] = params[key];
    });

    /* Update request querystring with normalized version */
    request.querystring = querystring.stringify(sortedParams);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When you configure a distribution to forward query strings to the origin and
    to cache based on an allowlist of query string parameters, we recommend
    the following to improve the cache-hit ratio:
    - Always list parameters in the same order.
    - Use the same case for parameter names and values.

    This function normalizes query strings so that parameter names and values
    are lowercase and parameter names are in alphabetical order.
    For more information, see:
    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
    '''

    print("Query string: ", request["querystring"])

    # Parse request query string to get a dictionary
    params = {k: v[0] for k, v in parse_qs(request['querystring'].lower()).items()}

    # Sort param keys
    sortedParams = sorted(params.items(), key=lambda x: x[0])

    # Update request querystring with normalized version
    request['querystring'] = urlencode(sortedParams)

    return request

Example: Redirecting unauthenticated users to a sign-in page

The following example shows how to redirect users to a sign-in page if they haven't entered their credentials.

Node.js

'use strict';

function parseCookies(headers) {
    const parsedCookie = {};
    if (headers.cookie) {
        headers.cookie[0].value.split(';').forEach((cookie) => {
            if (cookie) {
                const parts = cookie.split('=');
                parsedCookie[parts[0].trim()] = parts[1].trim();
            }
        });
    }
    return parsedCookie;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /* Check for session-id in request cookie in viewer-request event,
     * if session-id is absent, redirect the user to sign in page with original
     * request sent as redirect_url in query params.
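The lowercase-and-sort transform above can be checked locally without a CloudFront event. The sketch below is illustrative only (the helper name `normalize_querystring` is not part of the example); it applies the same normalization to a raw query string:

```python
from urllib.parse import parse_qs, urlencode

def normalize_querystring(qs):
    """Lowercase the query string and sort parameters alphabetically by name."""
    params = {k: v[0] for k, v in parse_qs(qs.lower()).items()}
    return urlencode(sorted(params.items(), key=lambda x: x[0]))

print(normalize_querystring('Size=LARGE&Color=Red'))  # color=red&size=large
```

Because the same input always produces the same normalized string, CloudFront caches one object per logical query rather than one per spelling.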
     */

    /* Check for session-id in cookie, if present then proceed with request */
    const parsedCookies = parseCookies(headers);
    if (parsedCookies && parsedCookies['session-id']) {
        callback(null, request);
        return;
    }

    /* URI encode the original request to be sent as redirect_url in query params */
    const encodedRedirectUrl = encodeURIComponent(`https://${headers.host[0].value}${request.uri}?${request.querystring}`);
    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: `https://www.example.com/signin?redirect_url=${encodedRedirectUrl}`,
            }],
        },
    };
    callback(null, response);
};

Python

import urllib.parse

def parseCookies(headers):
    parsedCookie = {}
    if headers.get('cookie'):
        for cookie in headers['cookie'][0]['value'].split(';'):
            if cookie:
                parts = cookie.split('=')
                parsedCookie[parts[0].strip()] = parts[1].strip()
    return parsedCookie

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Check for session-id in request cookie in viewer-request event,
    if session-id is absent, redirect the user to sign in page with original
    request sent as redirect_url in query params.
    '''

    # Check for session-id in cookie; if present, proceed with the request
    parsedCookies = parseCookies(headers)
    if parsedCookies.get('session-id'):
        return request

    # URI encode the original request to be sent as redirect_url in query params
    redirectUrl = "https://%s%s?%s" % (headers['host'][0]['value'], request['uri'], request['querystring'])
    encodedRedirectUrl = urllib.parse.quote_plus(redirectUrl.encode('utf-8'))
    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://www.example.com/signin?redirect_url=%s' % encodedRedirectUrl
            }]
        }
    }
    return response

Customizing content by country or device type headers: examples

The following examples show how you can use Lambda@Edge to customize behavior based on the location or the device type of the viewer.

Topics
Example: Redirecting viewer requests to a country-specific URL
Example: Serving different versions of an object based on the device

Example: Redirecting viewer requests to a country-specific URL

The following example shows how to generate an HTTP redirect response with a country-specific URL and return the response to the viewer. This is useful when you want to provide country-specific responses. For example:

- If you have country-specific subdomains, such as us.example.com and tw.example.com, you can generate a redirect response when a viewer requests example.com.
- If you're streaming video but you don't have rights to stream the content in a specific country, you can redirect users in that country to a page that explains why they can't view the video.

Note the following:

- You must configure your distribution to cache based on the CloudFront-Viewer-Country header.
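The cookie-parsing step can be sketched on its own. This illustrative helper (the name `parse_cookies` is not part of the example) mirrors the parseCookies logic above, but uses `partition` so that cookie values containing '=' are kept intact rather than truncated:

```python
def parse_cookies(cookie_value):
    """Split a Cookie header value into a dict (assumes name=value pairs)."""
    parsed = {}
    for cookie in cookie_value.split(';'):
        if cookie and '=' in cookie:
            name, _, value = cookie.partition('=')
            parsed[name.strip()] = value.strip()
    return parsed

print(parse_cookies('session-id=12abc; theme=dark'))
# {'session-id': '12abc', 'theme': 'dark'}
```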
  For more information, see Caching based on selected request headers.
- CloudFront adds the CloudFront-Viewer-Country header after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Based on the value of the CloudFront-Viewer-Country header, generate an
     * HTTP status code 302 (Redirect) response, and return a country-specific
     * URL in the Location header.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    let url = 'https://example.com/';
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'TW') {
            url = 'https://tw.example.com/';
        } else if (countryCode === 'US') {
            url = 'https://us.example.com/';
        }
    }

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: url,
            }],
        },
    };
    callback(null, response);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Based on the value of the CloudFront-Viewer-Country header, generate an
    HTTP status code 302 (Redirect) response, and return a country-specific
    URL in the Location header.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header.
             For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''

    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': url
            }]
        }
    }
    return response

Example: Serving different versions of an object based on the device

The following example shows how to serve different versions of an object based on the type of device that the user is using, for example, a mobile device or a tablet. Note the following:

- You must configure your distribution to cache based on the CloudFront-Is-*-Viewer headers. For more information, see Caching based on selected request headers.
- CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2.
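The country-to-URL selection in the handler above can be exercised locally without CloudFront. This sketch repeats the selection logic against a trimmed headers dictionary (the helper name `pick_redirect_url` is illustrative, not part of the CloudFront API):

```python
def pick_redirect_url(headers):
    """Choose a country-specific URL from the CloudFront-Viewer-Country header."""
    url = 'https://example.com/'
    viewer_country = headers.get('cloudfront-viewer-country')
    if viewer_country:
        code = viewer_country[0]['value']
        if code == 'TW':
            url = 'https://tw.example.com/'
        elif code == 'US':
            url = 'https://us.example.com/'
    return url

headers = {'cloudfront-viewer-country': [{'key': 'CloudFront-Viewer-Country', 'value': 'TW'}]}
print(pick_redirect_url(headers))  # https://tw.example.com/
print(pick_redirect_url({}))       # https://example.com/
```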
          CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer']
        && headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer']
               && headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer']
               && headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer']
               && headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);

    callback(null, request);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Serve different versions of an object based on the device type.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Is-*-Viewer headers. For more information, see
             the following documentation:
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
             https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
          2. CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
             request event. To use this example, you must create a trigger for the
             origin request event.
    '''

    desktopPath = '/desktop'
    mobilePath = '/mobile'
    tabletPath = '/tablet'
    smarttvPath = '/smarttv'

    if 'cloudfront-is-desktop-viewer' in headers and headers['cloudfront-is-desktop-viewer'][0]['value'] == 'true':
        request['uri'] = desktopPath + request['uri']
    elif 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = mobilePath + request['uri']
    elif 'cloudfront-is-tablet-viewer' in headers and headers['cloudfront-is-tablet-viewer'][0]['value'] == 'true':
        request['uri'] = tabletPath + request['uri']
    elif 'cloudfront-is-smarttv-viewer' in headers and headers['cloudfront-is-smarttv-viewer'][0]['value'] == 'true':
        request['uri'] = smarttvPath + request['uri']

    print("Request uri set to %s" % request['uri'])
    return request

Content-based dynamic origin selection: examples

The following examples show how you can use Lambda@Edge to route to different origins based on information in the request.

Topics
Example: Using an origin-request trigger to change from a custom origin to an Amazon S3 origin
Example: Using an origin-request trigger to change the Amazon S3 origin Region
Example: Using an origin-request trigger to change from an Amazon S3 origin to a custom origin
Example: Using an origin-request trigger to gradually transfer traffic from one Amazon S3 bucket to another
Example: Using an origin-request trigger to change the origin domain name based on the country header

Example: Using an origin-request trigger to change from a custom origin to an Amazon S3 origin

This function demonstrates how an origin-request trigger can be used to change from a custom origin to an Amazon S3 origin from which the content is fetched, based on request properties.
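The first-match device routing above can also be expressed as a small table-driven helper. This is an illustrative sketch (the name `device_path_prefix` is not part of the example), exercised with a trimmed headers dictionary:

```python
def device_path_prefix(headers):
    """Pick a URI prefix from the CloudFront-Is-*-Viewer headers (first match wins)."""
    prefixes = [
        ('cloudfront-is-desktop-viewer', '/desktop'),
        ('cloudfront-is-mobile-viewer', '/mobile'),
        ('cloudfront-is-tablet-viewer', '/tablet'),
        ('cloudfront-is-smarttv-viewer', '/smarttv'),
    ]
    for header, prefix in prefixes:
        if headers.get(header, [{}])[0].get('value') == 'true':
            return prefix
    return ''

headers = {'cloudfront-is-mobile-viewer': [{'key': 'CloudFront-Is-Mobile-Viewer', 'value': 'true'}]}
print(device_path_prefix(headers) + '/index.html')  # /mobile/index.html
```

Keeping the header-to-prefix mapping in one list makes it easier to add device types without duplicating the if/elif chain.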
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if S3 origin should be used, and
     * if true, sets S3 origin properties.
     */

    const params = querystring.parse(request.querystring);

    if (params['useS3Origin']) {
        if (params['useS3Origin'] === 'true') {
            const s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com';

            /* Set S3 origin fields */
            request.origin = {
                s3: {
                    domainName: s3DomainName,
                    region: '',
                    authMethod: 'origin-access-identity',
                    path: '',
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: s3DomainName }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    Reads query string to check if S3 origin should be used, and
    if true, sets S3 origin properties
    '''
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useS3Origin') == 'true':
        s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com'

        # Set S3 origin fields
        request['origin'] = {
            's3': {
                'domainName': s3DomainName,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': s3DomainName}]
    return request

Example: Using an origin-request trigger to change the Amazon S3 origin Region

This function demonstrates how an origin-request trigger can be used to change the Amazon S3 origin from which the content is fetched, based on request properties. In this example, we use the value of the CloudFront-Viewer-Country header to update the S3 bucket domain name to a bucket in a Region that is closer to the viewer. This can be useful in several ways:

- It reduces latencies when the Region specified is nearer to the viewer's country.
- It provides data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.

To use this example, you must do the following:

- Configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching based on selected request headers.
- Create a trigger for this function in the origin request event. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event, so to use this example, you must make sure that the function runs for an origin request.

Note
The following example code uses the same origin access identity (OAI) for all of the S3 buckets that you use for the origin. For more information, see Origin access identity.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * This blueprint demonstrates how an origin-request trigger can be used to
     * change the origin from which the content is fetched, based on request properties.
     * In this example, we use the value of the CloudFront-Viewer-Country header
     * to update the S3 bucket domain name to a bucket in a Region that is closer to
     * the viewer.
     *
     * This can be useful in several ways:
     *      1) Reduces latencies when the Region specified is nearer to the viewer's
     *         country.
     *      2) Provides data sovereignty by making sure that data is served from an
     *         origin that's in the same country that the request came from.
     *
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];

        /**
         * If the viewer's country is not in the list you specify, the request
         * goes to the default S3 bucket you've configured.
         */
        if (region) {
            /**
             * If you've set up OAI, the bucket policy in the destination bucket
             * should allow the OAI GetObject operation, as configured by default
             * for an S3 origin with OAI. Another requirement with OAI is to provide
             * the Region so it can be used for the SIGV4 signature. Otherwise, the
             * Region is not required.
             */
            request.origin.s3.region = region;
            const domainName = `amzn-s3-demo-bucket-in-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    This blueprint demonstrates how an origin-request trigger can be used to
    change the origin from which the content is fetched, based on request properties.
    In this example, we use the value of the CloudFront-Viewer-Country header
    to update the S3 bucket domain name to a bucket in a Region that is closer to
    the viewer.

    This can be useful in several ways:
        1) Reduces latencies when the Region specified is nearer to the viewer's
           country.
        2) Provides data sovereignty by making sure that data is served from an
           origin that's in the same country that the request came from.

    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
             request event.
             To use this example, you must create a trigger for the
             origin request event.
    '''

    countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    }

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        region = countryToRegion.get(countryCode)

        # If the viewer's country is not in the list you specify, the request
        # goes to the default S3 bucket you've configured
        if region:
            '''
            If you've set up OAI, the bucket policy in the destination bucket
            should allow the OAI GetObject operation, as configured by default
            for an S3 origin with OAI. Another requirement with OAI is to provide
            the Region so it can be used for the SIGV4 signature. Otherwise, the
            Region is not required.
            '''
            request['origin']['s3']['region'] = region
            domainName = 'amzn-s3-demo-bucket-in-{0}.s3.{0}.amazonaws.com'.format(region)
            request['origin']['s3']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin-request trigger to change from an Amazon S3 origin to a custom origin

This function demonstrates how an origin-request trigger can be used to change the custom origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if custom origin should be used, and
     * if true, sets custom origin properties.
     */

    const params = querystring.parse(request.querystring);

    if (params['useCustomOrigin']) {
        if (params['useCustomOrigin'] === 'true') {

            /* Set custom origin fields */
            request.origin = {
                custom: {
                    domainName: 'www.example.com',
                    port: 443,
                    protocol: 'https',
                    path: '',
                    sslProtocols: ['TLSv1', 'TLSv1.1'],
                    readTimeout: 5,
                    keepaliveTimeout: 5,
                    customHeaders: {}
                }
            };
            request.headers['host'] = [{ key: 'host', value: 'www.example.com' }];
        }
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Reads query string to check if custom origin should be used, and
    # if true, sets custom origin properties
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useCustomOrigin') == 'true':
        # Set custom origin fields
        request['origin'] = {
            'custom': {
                'domainName': 'www.example.com',
                'port': 443,
                'protocol': 'https',
                'path': '',
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'readTimeout': 5,
                'keepaliveTimeout': 5,
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': 'www.example.com'}]
    return request

Example: Using an origin-request trigger to gradually transfer traffic from one Amazon S3 bucket to another

This function demonstrates how you can gradually transfer traffic from one Amazon S3 bucket to another in a controlled way.

Node.js

'use strict';

function getRandomInt(min, max) {
    /* Random number is inclusive of min and max */
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const BLUE_TRAFFIC_PERCENTAGE = 80;

    /**
     * This Lambda function demonstrates how to gradually transfer traffic from
     * one S3 bucket to another in a controlled way.
     * We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
     * 1 to 100.
     * If the generated random number is less than or equal to BLUE_TRAFFIC_PERCENTAGE, traffic
     * is redirected to blue-bucket. If not, the default bucket that we've configured
     * is used.
     */

    const randomNumber = getRandomInt(1, 100);

    if (randomNumber <= BLUE_TRAFFIC_PERCENTAGE) {
        const domainName = 'blue-bucket.s3.amazonaws.com';
        request.origin.s3.domainName = domainName;
        request.headers['host'] = [{ key: 'host', value: domainName }];
    }

    callback(null, request);
};

Python

import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max
    return math.floor(random.random() * (max - min + 1)) + min

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    BLUE_TRAFFIC_PERCENTAGE = 80

    '''
    This Lambda function demonstrates how to gradually transfer traffic from
    one S3 bucket to another in a controlled way.
    We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
    1 to 100. If the generated random number is less than or equal to
    BLUE_TRAFFIC_PERCENTAGE, traffic is redirected to blue-bucket. If not,
    the default bucket that we've configured is used.
    '''

    randomNumber = getRandomInt(1, 100)

    if randomNumber <= BLUE_TRAFFIC_PERCENTAGE:
        domainName = 'blue-bucket.s3.amazonaws.com'
        request['origin']['s3']['domainName'] = domainName
        request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin-request trigger to change the origin domain name based on the country header

This function demonstrates how you can change the origin domain name based on the CloudFront-Viewer-Country header, so that content is served from an origin closer to the viewer's country.
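To sanity-check the weighting, you can simulate the draw locally. The sketch below substitutes `random.randint` for the example's `math.floor(random.random() * ...)` expression (both produce an integer inclusive of both endpoints) and counts how often the blue bucket would be chosen; it is a rough empirical check, not part of the original example:

```python
import random

def get_random_int(min_val, max_val):
    # Random integer inclusive of min_val and max_val, same range as the
    # example's getRandomInt.
    return random.randint(min_val, max_val)

BLUE_TRAFFIC_PERCENTAGE = 80
random.seed(0)  # fixed seed so the simulation is repeatable
trials = 10_000
blue = sum(1 for _ in range(trials)
           if get_random_int(1, 100) <= BLUE_TRAFFIC_PERCENTAGE)
print(blue / trials)  # close to 0.80
```

Raising BLUE_TRAFFIC_PERCENTAGE over successive deployments shifts a correspondingly larger share of requests to the blue bucket.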
Implementing this functionality for your distribution can have advantages such as the following:

- Reducing latencies when the Region specified is nearer to the viewer's country
- Providing data sovereignty by making sure that data is served from an origin that's in the same country that the request came from

Note that to enable this functionality, you must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching based on selected request headers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'GB' || countryCode === 'DE' || countryCode === 'IE') {
            const domainName = 'eu.example.com';
            request.origin.custom.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'GB' or countryCode == 'DE' or countryCode == 'IE':
            domainName = 'eu.example.com'
            request['origin']['custom']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Updating error statuses: examples

The following examples provide guidance for how you can use Lambda@Edge to change the error status that is returned to users.
Topics
Example: Using an origin-response trigger to update the error status code to 200
Example: Using an origin-response trigger to update the error status code to 302

Example: Using an origin-response trigger to update the error status code to 200

This function demonstrates how you can update the response status to 200 and generate static body content to return to the viewer in the following scenario:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;

    /**
     * This function updates the response status to 200 and generates static
     * body content to return to the viewer in the following scenario:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        response.status = 200;
        response.statusDescription = 'OK';
        response.body = 'Body generation example';
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']

    '''
    This function updates the response status to 200 and generates static
    body content to return to the viewer in the following scenario:
    1. The function is triggered in an origin response
    2.
    The response status from the origin server is an error status code (4xx or 5xx)
    '''

    if 400 <= int(response['status']) <= 599:
        response['status'] = 200
        response['statusDescription'] = 'OK'
        response['body'] = 'Body generation example'

    return response

Example: Using an origin-response trigger to update the error status code to 302

This function demonstrates how you can update the HTTP status code to 302 to redirect to another path (cache behavior) that has a different origin configured. Note the following:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;
    const request = event.Records[0].cf.request;

    /**
     * This function updates the HTTP status code in the response to 302, to redirect to another
     * path (cache behavior) that has a different origin configured. Note the following:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */

    if (response.status >= 400 && response.status <= 599) {
        const redirect_path = `/plan-b/path?${request.querystring}`;

        response.status = 302;
        response.statusDescription = 'Found';

        /* Drop the body, as it is not required for redirects */
        response.body = '';
        response.headers['location'] = [{ key: 'Location', value: redirect_path }];
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    request = event['Records'][0]['cf']['request']

    '''
    This function updates the HTTP status code in the response to 302, to redirect to another
    path (cache behavior) that has a different origin configured. Note the following:
    1. The function is triggered in an origin response
    2.
    The response status from the origin server is an error status code (4xx or 5xx)
    '''

    if 400 <= int(response['status']) <= 599:
        redirect_path = '/plan-b/path?%s' % request['querystring']

        response['status'] = 302
        response['statusDescription'] = 'Found'

        # Drop the body, as it is not required for redirects
        response['body'] = ''
        response['headers']['location'] = [{'key': 'Location', 'value': redirect_path}]

    return response

Accessing the request body: examples

The following examples show how you can use Lambda@Edge to work with POST requests.

Note
To use these examples, you must enable the include body option in the distribution's Lambda function association. It is not enabled by default.

- To enable this setting in the CloudFront console, select the check box for Include Body in the Lambda Function Association.
- To enable this setting in the CloudFront API or with AWS CloudFormation, set the IncludeBody field to true in LambdaFunctionAssociation.

Topics
Example: Using a request trigger to read an HTML form
Example: Using a request trigger to modify an HTML form

Example: Using a request trigger to read an HTML form

This function demonstrates how you can process the body of a POST request generated by an HTML form (web form), such as a "contact us" form. For example, you might have an HTML form like the following:

<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>

For the example function that follows, the function must be triggered in a CloudFront viewer request or origin request.
Node.js

'use strict';

const querystring = require('querystring');

/**
 * This function demonstrates how you can read the body of a POST request
 * generated by an HTML form (web form). The function is triggered in a
 * CloudFront viewer request or origin request event type.
 */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.method === 'POST') {
        /* HTTP body is always passed as base64-encoded string. Decode it. */
        const body = Buffer.from(request.body.data, 'base64').toString();

        /* HTML forms send the data in query string format. Parse it. */
        const params = querystring.parse(body);

        /* For demonstration purposes, we only log the form fields here.
         * You can put your custom logic here. For example, you can store the
         * fields in a database, such as Amazon DynamoDB, and generate a response
         * right from your Lambda@Edge function.
         */
        for (let param in params) {
            console.log(`For "${param}" user submitted "${params[param]}".\n`);
        }
    }

    return callback(null, request);
};

Python

import base64
from urllib.parse import parse_qs

'''
Say there is a POST request body generated by an HTML form such as:
<html>
<form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
</form>
</html>
'''

'''
This function demonstrates how you can read the body of a POST request
generated by an HTML form (web form). The function is triggered in a
CloudFront viewer request or origin request event type.
'''
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    if request['method'] == 'POST':
        # HTTP body is always passed as base64-encoded string. Decode it
        body = base64.b64decode(request['body']['data'])

        # HTML forms send the data in query string format. Parse it
        params = {k: v[0] for k, v in parse_qs(body).items()}

        '''
        For demonstration purposes, we only log the form fields here.
You can put your custom logic here. For example, you can store the
        fields in a database, such as Amazon DynamoDB, and generate a response
        right from your Lambda@Edge function.
        '''
        for key, value in params.items():
            print("For %s user submitted %s" % (key, value))

    return request

Example: Using a request trigger to modify an HTML form

This function illustrates how you can modify the body of a POST request generated by an HTML form (web form). The function is triggered in a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;

    if (request.method === 'POST') {
        /* Request body is being replaced. To do this, update the following
         * three fields:
         *    1) body.action to 'replace'
         *    2) body.encoding to the encoding of the new data.
         *
         *       Set to one of the following values:
         *
         *           text - denotes that the generated body is in text format.
         *               Lambda@Edge will propagate this as is.
         *           base64 - denotes that the generated body is base64 encoded.
         *               Lambda@Edge will base64 decode the data before sending
         *               it to the origin.
         *    3) body.data to the new body.
         */
        request.body.action = 'replace';
        request.body.encoding = 'text';
        request.body.data = getUpdatedBody(request);
    }

    callback(null, request);
};

function getUpdatedBody(request) {
    /* HTTP body is always passed as base64-encoded string. Decode it. */
    const body = Buffer.from(request.body.data, 'base64').toString();

    /* HTML forms send data in query string format. Parse it. */
    const params = querystring.parse(body);

    /* For demonstration purposes, we're adding one more param.
     *
     * You can put your custom logic here. For example, you can truncate long
     * bodies from malicious requests.
     */
    params['new-param-name'] = 'new-param-value';

    return querystring.stringify(params);
}

Python

import base64
from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    if request['method'] == 'POST':
        '''
        Request body is being replaced. To do this, update the following
        three fields:
            1) body.action to 'replace'
            2) body.encoding to the encoding of the new data.

               Set to one of the following values:

                   text - denotes that the generated body is in text format.
                       Lambda@Edge will propagate this as is.
                   base64 - denotes that the generated body is base64 encoded.
                       Lambda@Edge will base64 decode the data before sending
                       it to the origin.
            3) body.data to the new body.
        '''
        request['body']['action'] = 'replace'
        request['body']['encoding'] = 'text'
        request['body']['data'] = get_updated_body(request)

    return request

def get_updated_body(request):
    # HTTP body is always passed as base64-encoded string. Decode it
    body = base64.b64decode(request['body']['data'])

    # HTML forms send data in query string format. Parse it
    params = {k: v[0] for k, v in parse_qs(body).items()}

    # For demonstration purposes, we're adding one more param.
    # You can put your custom logic here. For example, you can truncate long
    # bodies from malicious requests.
    params['new-param-name'] = 'new-param-value'

    return urlencode(params)
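The decode, parse, modify, and re-encode flow used in the form-modification examples can be exercised locally, outside of Lambda@Edge. The sketch below assumes hypothetical form fields and a hypothetical helper name; it mirrors the transformation the examples apply to the request body.

```python
import base64
from urllib.parse import parse_qs, urlencode

def update_body(b64_body):
    # Lambda@Edge delivers the request body as a base64-encoded string.
    body = base64.b64decode(b64_body).decode()
    # HTML forms send data in query string format; parse_qs returns lists,
    # so keep only the first value of each field.
    params = {k: v[0] for k, v in parse_qs(body).items()}
    # Add one extra parameter, as in the examples above.
    params['new-param-name'] = 'new-param-value'
    # Re-encode in query string format for body.data (with body.encoding 'text').
    return urlencode(params)

encoded = base64.b64encode(b'name1=a&name2=b').decode()
print(update_body(encoded))
# name1=a&name2=b&new-param-name=new-param-value
```

Running this against a sample body shows the extra parameter appended while the original fields are preserved.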
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.SimpleDistribution.html | Get started with a CloudFront standard distribution - Amazon CloudFront

Amazon CloudFront Developer Guide

Get started with a CloudFront standard distribution

The procedures in this section show you how to use CloudFront to set up a standard distribution that does the following:
- Creates an S3 bucket to use as your distribution origin.
- Stores the original versions of your objects in an Amazon Simple Storage Service (Amazon S3) bucket.
- Uses origin access control (OAC) to send authenticated requests to your Amazon S3 origin. OAC sends requests through CloudFront to prevent viewers from accessing your S3 bucket directly. For more information about OAC, see Restrict access to an Amazon S3 origin.
- Uses the CloudFront domain name in URLs for your objects (for example, https://d111111abcdef8.cloudfront.net/index.html).
- Keeps your objects in CloudFront edge locations for the default duration of 24 hours (the minimum duration is 0 seconds).

Most of this is configured automatically for you when you create a CloudFront distribution.

Topics
- Prerequisites
- Create an Amazon S3 bucket
- Upload the content to the bucket
- Create a CloudFront distribution that uses an Amazon S3 origin with OAC
- Access your content through CloudFront
- Clean up
- Enhance your basic distribution

Prerequisites

Before you begin, make sure that you've completed the steps in Set up your AWS account.

Create an Amazon S3 bucket

An Amazon S3 bucket is a container for files (objects) or folders. CloudFront can distribute almost any type of file for you when an S3 bucket is the source. For example, CloudFront can distribute text, images, and videos.
There is no maximum for the amount of data that you can store on Amazon S3. For this tutorial, you create an S3 bucket with the provided sample hello world files that you will use to create a basic webpage.

To create a bucket
1. Sign in to the AWS Management Console and open the Amazon S3 console at https://console.aws.amazon.com/s3/.
2. We recommend that you use our Hello World sample for this Getting Started tutorial. Download the hello world webpage: hello-world-html.zip. Unzip it and save the css folder and index file in a convenient location, such as the desktop where you are running your browser.
3. Choose Create bucket.
4. Enter a unique Bucket name that conforms to the General purpose buckets naming rules in the Amazon Simple Storage Service User Guide.
5. For Region, we recommend choosing an AWS Region that is geographically close to you. (This reduces latency and costs.) Choosing a different Region works, too. You might do this to address regulatory requirements, for example.
6. Leave all other settings at their defaults, and then choose Create bucket.

Upload the content to the bucket

After you create your Amazon S3 bucket, upload the contents of the unzipped hello world file to it. (You downloaded and unzipped this file in Create an Amazon S3 bucket.)

To upload the content to Amazon S3
1. In the General purpose buckets section, choose the name of your new bucket.
2. Choose Upload.
3. On the Upload page, drag the css folder and index file into the drop area.
4. Leave all other settings at their defaults, and then choose Upload.

Create a CloudFront distribution that uses an Amazon S3 origin with OAC

For this tutorial, you will create a CloudFront distribution that uses an Amazon S3 origin with origin access control (OAC). OAC helps you securely send authenticated requests to your Amazon S3 origin. For more information about OAC, see Restrict access to an Amazon S3 origin.
To create a CloudFront distribution with an Amazon S3 origin that uses OAC
1. Open the CloudFront console at https://console.aws.amazon.com/cloudfront/v4/home.
2. Choose Create distribution.
3. Enter a Distribution name for the standard distribution. The name appears as the value for the Name key as a tag. You can change this value later, and you can add up to 50 tags for your standard distribution. For more information, see Tag a distribution.
4. Choose Single website or app, and then choose Next.
5. On the Origin type page, select Amazon S3.
6. For S3 origin, choose Browse S3 and select the S3 bucket that you created for this tutorial.
7. For Settings, choose Use recommended origin settings. CloudFront will use the default recommended cache and origin settings for your Amazon S3 origin, including setting up origin access control (OAC). For more information about the recommended settings, see Preconfigured distribution settings reference.
8. Choose Next.
9. On the Enable security protections page, choose whether to enable AWS WAF security protections, and then choose Next.
10. Choose Create distribution. CloudFront updates the S3 bucket policy for you.
11. Review the Details section for your new distribution. When your distribution is done deploying, the Last modified field changes from Deploying to a date and time.
12. Record the domain name that CloudFront assigns to your distribution. It looks similar to the following: d111111abcdef8.cloudfront.net.

Before using the distribution and S3 bucket from this tutorial in a production environment, make sure to configure them to meet your specific needs. For information about configuring access in a production environment, see Configure secure access and restrict access to content.

Access your content through CloudFront

To access your content through CloudFront, combine the domain name for your CloudFront distribution with the main page for your content.
(You recorded your distribution domain name in Create a CloudFront distribution that uses an Amazon S3 origin with OAC.) Your distribution domain name might look like this: d111111abcdef8.cloudfront.net.

The path to the main page of a website is typically /index.html. Therefore, the URL to access your content through CloudFront might look like this: https://d111111abcdef8.cloudfront.net/index.html. If you followed the previous steps and used the hello world webpage, you should see a webpage that says Hello world!

When you upload more content to this S3 bucket, you can access the content through CloudFront by combining the CloudFront distribution domain name with the path to the object in the S3 bucket. For example, if you upload a new file named new-page.html to the root of your S3 bucket, the URL looks like this: https://d111111abcdef8.cloudfront.net/new-page.html.

Clean up

If you created your distribution and S3 bucket only as a learning exercise, delete them so that you no longer accrue charges. Delete the distribution first. For more information, see the following links:
- Delete a distribution
- Deleting a bucket

Enhance your basic distribution

This Get started tutorial provides a minimal framework for creating a distribution. We recommend that you explore the following enhancements:
- You can use the CloudFront private content feature to restrict access to the content in the Amazon S3 buckets. For more information about distributing private content, see Serve private content with signed URLs and signed cookies.
- You can configure your CloudFront distribution to use a custom domain name (for example, www.example.com instead of d111111abcdef8.cloudfront.net). For more information, see Use custom URLs.
- This tutorial uses an Amazon S3 origin with origin access control (OAC). However, you can't use OAC if your origin is an S3 bucket configured as a website endpoint. If that's the case, you must set up your bucket with CloudFront as a custom origin.
For more information, see Use an Amazon S3 bucket that's configured as a website endpoint. For more information about OAC, see Restrict access to an Amazon S3 origin.
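Building the viewer-facing URL from the distribution domain and an object key, as described above, can be sketched in a few lines. The helper name below is hypothetical; it simply joins the domain and percent-encodes the key so that keys with spaces or other unsafe characters still form valid URLs.

```python
from urllib.parse import quote

def cloudfront_url(domain, key):
    # Hypothetical helper: build the CloudFront URL for an object key,
    # percent-encoding characters that are not URL-safe (slashes are kept).
    return f"https://{domain}/{quote(key)}"

print(cloudfront_url("d111111abcdef8.cloudfront.net", "index.html"))
# https://d111111abcdef8.cloudfront.net/index.html
print(cloudfront_url("d111111abcdef8.cloudfront.net", "css/site style.css"))
# https://d111111abcdef8.cloudfront.net/css/site%20style.css
```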
https://docs.aws.amazon.com/pt_br/lambda/latest/dg/with-s3-example.html | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda

AWS Lambda Developer Guide

Tutorial: Using an Amazon S3 trigger to invoke a Lambda function

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and sends the object type to Amazon CloudWatch Logs.

This tutorial demonstrates how to:
- Create an Amazon S3 bucket.
- Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.
- Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.
- Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.

Create an Amazon S3 bucket

To create an Amazon S3 bucket
- Open the Amazon S3 console and select the General purpose buckets page.
- Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same region.
- Choose Create bucket.
- Under General configuration, do the following:
  - For Bucket type, make sure General purpose is selected.
  - For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
- Leave all other options at their default values and choose Create bucket.

Upload a test object to your bucket

To upload a test object
- Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step.
- Choose Upload.
- Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg).
- Choose Open, then choose Upload.

Later in the tutorial, you'll test your Lambda function using this object.

Create a permissions policy

Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.

To create the policy
- Open the Policies page of the IAM console.
- Choose Create Policy.
- Choose the JSON tab, and then paste the following custom policy into the JSON editor.

JSON

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

- Choose Next: Tags.
- Choose Next: Review.
- Under Review policy, for Policy name, enter s3-trigger-tutorial.
- Choose Create policy.

Create an execution role

An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, you create an execution role using the permissions policy that you created in the previous step.
To create an execution role and attach your custom permissions policy
- Open the Roles page of the IAM console.
- Choose Create role.
- For the trusted entity type, choose AWS service, and then for the use case, choose Lambda.
- Choose Next.
- In the policy search box, enter s3-trigger-tutorial.
- In the search results, select the policy that you created (s3-trigger-tutorial), and then choose Next.
- Under Role details, for Role name, enter lambda-s3-trigger-role, and then choose Create role.

Create the Lambda function

Create a Lambda function in the console using the Python 3.13 runtime.

To create the Lambda function
- Open the Functions page of the Lambda console.
- Make sure you're working in the same AWS Region where you created your Amazon S3 bucket. You can change your region using the drop-down list at the top of the screen.
- Choose Create function.
- Choose Author from scratch.
- Under Basic information, do the following:
  - For Function name, enter s3-trigger-tutorial.
  - For Runtime, choose Python 3.13.
  - For Architecture, choose x86_64.
- In the Change default execution role tab, do the following:
  - Expand the tab, then choose Use an existing role.
  - Select the lambda-s3-trigger-role that you created earlier.
- Choose Create function.

Deploy the function code

This tutorial uses the Python 3.13 runtime, but we've also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you're interested in.

The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.
To deploy the function code
Choose the Python tab in the following box and copy the code.

.NET

SDK for .NET

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using .NET.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}

Go

SDK for Go V2

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Go.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}

Java

SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Java.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());

            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript

SDK for JavaScript (v3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using JavaScript.
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP

SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using PHP.
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python

SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby

SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust

SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(
    s3_client: &Client,
    evt: LambdaEvent<S3Event>
) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger
- In the Function overview pane, choose Add trigger.
- Select S3.
- Under Bucket, select the bucket you created earlier in the tutorial.
- Under Event types, make sure that All object create events is selected.
- Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
- Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket.

Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message, indicating that your trigger couldn't be created:

An error occurred when creating the trigger: Unable to validate the following destination configurations.

You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and have since deleted the function or modified its permissions policies.

Test your Lambda function with a dummy event

To test the Lambda function with a dummy event
- On your function's Lambda console page, choose the Test tab.
- For Event name, enter MyTestEvent.
- For Event JSON, paste the following test event. Be sure to replace these values:
  - Replace us-east-1 with the region you created your Amazon S3 bucket in.
  - Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
  - Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.

If your function executes successfully, you see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket

Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier.
Choose Upload.
Choose Add files, and use the file selector to choose an object you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function's invocation using CloudWatch Logs

Open the CloudWatch console. Make sure that you're working in the same AWS Region in which you created your Lambda function. You can change your Region using the drop-down list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Type confirm in the text input field, then choose Delete.

To delete the execution role

Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
Enter the name of the role in the text input field, then choose Delete.

To delete the S3 bucket

Open the Amazon S3 console.
Select the bucket you created.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.
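One detail of the test event is worth calling out: Amazon S3 URL-encodes object key names in event notification payloads, so an object named test/key arrives as test%2Fkey (and spaces are encoded too). A handler that passes the raw key straight to GetObject will fail to find any object whose name contains such characters. Production handlers typically decode the key with a crate such as urlencoding (an assumed dependency choice, not something this tutorial uses); as a self-contained sketch, here is a minimal std-only decoder:

```rust
/// Minimal percent-decoder for S3 event keys (sketch; a real handler
/// would more likely use a dedicated crate). Decodes %XX escapes and
/// treats '+' as a space, matching how S3 encodes keys in notifications.
fn decode_s3_key(raw: &str) -> String {
    let bytes = raw.as_bytes();
    let mut out = Vec::with_capacity(bytes.len());
    let mut i = 0;
    while i < bytes.len() {
        match bytes[i] {
            b'%' if i + 2 < bytes.len() => {
                // Decode a %XX escape; keep the literal '%' on bad hex.
                let hex = std::str::from_utf8(&bytes[i + 1..i + 3]).ok();
                match hex.and_then(|h| u8::from_str_radix(h, 16).ok()) {
                    Some(b) => {
                        out.push(b);
                        i += 3;
                    }
                    None => {
                        out.push(b'%');
                        i += 1;
                    }
                }
            }
            b'+' => {
                out.push(b' '); // S3 encodes spaces in keys as '+'
                i += 1;
            }
            b => {
                out.push(b);
                i += 1;
            }
        }
    }
    String::from_utf8_lossy(&out).into_owned()
}

fn main() {
    assert_eq!(decode_s3_key("test%2Fkey"), "test/key");
    assert_eq!(decode_s3_key("HappyFace.jpg"), "HappyFace.jpg");
    assert_eq!(decode_s3_key("my+photo%202.png"), "my photo 2.png");
    println!("ok");
}
```

In the tutorial's handler, you would apply such a decoding to key before passing it to the get_object builder.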
Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for your function and its dependencies.
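The sample CloudWatch output earlier reports the uploaded object's content type. With the AWS SDK for Rust, that value is available on a successful GetObject response via its content_type() accessor, so the handler's Ok(_) arm could log it directly. As a self-contained stand-in that makes no AWS calls, the hypothetical lookup below produces the same kind of log line from a key's file extension (the mapping table is illustrative only, not S3's own logic):

```rust
// Illustrative extension-to-MIME lookup (assumption: in a real handler you
// would instead read content_type() from the GetObject response).
fn guess_content_type(key: &str) -> &'static str {
    match key.rsplit('.').next().map(|e| e.to_ascii_lowercase()).as_deref() {
        Some("jpg") | Some("jpeg") => "image/jpeg",
        Some("png") => "image/png",
        Some("gif") => "image/gif",
        Some("txt") => "text/plain",
        _ => "application/octet-stream",
    }
}

fn main() {
    assert_eq!(guess_content_type("HappyFace.jpg"), "image/jpeg");
    assert_eq!(guess_content_type("notes.TXT"), "text/plain");
    assert_eq!(guess_content_type("archive"), "application/octet-stream");
    // Mirrors the tutorial's log line format
    println!("CONTENT TYPE: {}", guess_content_type("HappyFace.jpg"));
}
```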
programmers and hardware hackers from SuperHappyDevHouse 34. Joel Franusic introduces SuperHappyDevHouse and talks about how he learned programming from his dad, an embedded systems engineer. Joshua Neal shows an LED connected to an Arduino board. Jens Andersson shows a program, Colors!, for drawing on the Nintendo DS. Otavio Good shows a polar bear drawing he made using that program. Steve Okay shows an Arduino-controlled robot he built, and describes being inspired by the movie Tron to stay up all night and make a light cycle game. Ben McGraw talks about programming role playing games. Caroline Ratajski talks about how she started at age 9 with a BASIC text game, then learned web development and networking, and continued with a formal computer science education leading to her current work in communications signals analysis.","styleRuns":[{"startIndex":0,"length":854,"styleRunExtensions":{"styleRunColorMapExtension":{"colorMap":[{"key":"USER_INTERFACE_THEME_DARK","value":4294967295},{"key":"USER_INTERFACE_THEME_LIGHT","value":4279440147}]}},"fontFamilyName":"Roboto"}]},"headerRuns":[{"startIndex":0,"length":854,"headerMapping":"ATTRIBUTED_STRING_HEADER_MAPPING_UNSPECIFIED"}]}},{"compositeVideoPrimaryInfoRenderer":{}},{"itemSectionRenderer":{"contents":[{"continuationItemRenderer":{"trigger":"CONTINUATION_TRIGGER_ON_ITEM_SHOWN","continuationEndpoint":{"clickTrackingParams":"CLICELsvGAMiEwjzuM2lmoiSAxXCTDgFHdf7I83KAQRyVtlR","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/next"}},"continuationCommand":{"token":"Eg0SC25PbWZJb3hfYTRFGAYyJSIRIgtuT21mSW94X2E0RTAAeAJCEGNvbW1lbnRzLXNlY3Rpb24%3D","request":"CONTINUATION_REQUEST_TYPE_WATCH_NEXT"}}}}],"trackingParams":"CLICELsvGAMiEwjzuM2lmoiSAxXCTDgFHdf7I80=","sectionIdentifier":"comment-item-section","targetId":"comments-section"}}],"trackingParams":"CLECELovIhMI87jNpZqIkgMVwkw4BR3X-yPN"}},"secondaryResults":{"secondaryResults":{"results":[{"lockupViewModel":{"contentImage":{"thumbnailViewMode
l":{"image":{"sources":[{"url":"https://i.ytimg.com/vi/Dr3CcSPRunA/hqdefault.jpg?sqp=-oaymwEiCKgBEF5IWvKriqkDFQgBFQAAAAAYASUAAMhCPQCAokN4AQ==\u0026rs=AOn4CLB95YoOhuP-cFoixNOY97OU1Zs7pg","width":168,"height":94},{"url":"https://i.ytimg.com/vi/Dr3CcSPRunA/hqdefault.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=\u0026rs=AOn4CLBitVQ0nkW21VmTQYCB6N5B-8mW4g","width":336,"height":188}]},"overlays":[{"thumbnailOverlayBadgeViewModel":{"thumbnailBadges":[{"thumbnailBadgeViewModel":{"text":"7:49","badgeStyle":"THUMBNAIL_OVERLAY_BADGE_STYLE_DEFAULT","animationActivationTargetId":"Dr3CcSPRunA","animationActivationEntityKey":"Eh8veW91dHViZS9hcHAvd2F0Y2gvcGxheWVyX3N0YXRlIMMCKAE%3D","lottieData":{"url":"https://www.gstatic.com/youtube/img/lottie/audio_indicator/audio_indicator_v2.json","settings":{"loop":true,"autoplay":true}},"animatedText":"지금 재생 중","animationActivationEntitySelectorType":"THUMBNAIL_BADGE_ANIMATION_ENTITY_SELECTOR_TYPE_PLAYER_STATE","rendererContext":{"accessibilityContext":{"label":"7분 49초"}}}}],"position":"THUMBNAIL_OVERLAY_BADGE_POSITION_BOTTOM_END"}},{"thumbnailHoverOverlayToggleActionsViewModel":{"buttons":[{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"WATCH_LATER","onTap":{"innertubeCommand":{"clickTrackingParams":"CLACEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"addedVideoId":"Dr3CcSPRunA","action":"ACTION_ADD_VIDEO"}]}}},"accessibilityText":"나중에 볼 
동영상","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CLACEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"toggledButtonViewModel":{"buttonViewModel":{"iconName":"CHECK","onTap":{"innertubeCommand":{"clickTrackingParams":"CK8CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"action":"ACTION_REMOVE_VIDEO_BY_VIDEO_ID","removedVideoId":"Dr3CcSPRunA"}]}}},"accessibilityText":"추가됨","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CK8CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"isToggled":false,"trackingParams":"CKgCENTEDBgAIhMI87jNpZqIkgMVwkw4BR3X-yPN"}},{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"ADD_TO_QUEUE_TAIL","onTap":{"innertubeCommand":{"clickTrackingParams":"CK4CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CK4CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","addToPlaylistCommand":{"openMiniplayer":false,"openListPanel":true,"videoId":"Dr3CcSPRunA","listType":"PLAYLIST_EDIT_LIST_TYPE_QUEUE","onCreateListCommand":{"clickTrackingParams":"CK4CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/playlist/create"}},"createPlaylistServiceEndpoint":{"videoIds":["Dr3CcSPRunA"],"params":"CAQ%3D"}},"videoIds":["Dr3CcSPRunA"],"videoCommand":{"clickTrackingParams":"CK4CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WAT
CH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc=\u0026pxtags=Cg4KAnR4Egg1MTY2NjQ2Mw\u0026rxtags=Cg4KAnR4Egg1MTY2NjQ2Mw%2CCg4KAnR4Egg1MTY2NjQ2NA%2CCg4KAnR4Egg1MTY2NjQ2NQ%2CCg4KAnR4Egg1MTY2NjQ2Ng%2CCg4KAnR4Egg1MTY2NjQ2Nw"}}}}}}}]}}},"accessibilityText":"현재 재생목록에 추가","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CK4CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"toggledButtonViewModel":{"buttonViewModel":{"iconName":"CHECK","accessibilityText":"추가됨","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CK0CEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"isToggled":false,"trackingParams":"CKgCENTEDBgAIhMI87jNpZqIkgMVwkw4BR3X-yPN"}}]}}]}},"metadata":{"lockupMetadataViewModel":{"title":{"content":"SuperHappyDevHouse 34 Interviews, Part Two"},"image":{"decoratedAvatarViewModel":{"avatar":{"avatarViewModel":{"image":{"sources":[{"url":"https://yt3.ggpht.com/ytc/AIdro_mHw7HusQPzx3ygjYTPVtwu03IL1hIKrV-D50mfjR2SIZY=s68-c-k-c0x00ffffff-no-rj","width":68,"height":68}]},"avatarImageSize":"AVATAR_SIZE_M"}},"a11yLabel":"Dave Briccetti 채널로 
이동","rendererContext":{"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CKgCENTEDBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"url":"/@DaveBriccetti","webPageType":"WEB_PAGE_TYPE_CHANNEL","rootVe":3611,"apiUrl":"/youtubei/v1/browse"}},"browseEndpoint":{"browseId":"UCsvS1__wPMXEPbtFzgpX3nQ","canonicalBaseUrl":"/@DaveBriccetti"}}}}}}},"metadata":{"contentMetadataViewModel":{"metadataRows":[{"metadataParts":[{"text":{"content":"Dave Briccetti"}}]},{"metadataParts":[{"text":{"content":"조회수 651회"}},{"text":{"content":"16년 전"}}]}],"delimiter":" • "}},"menuButton":{"buttonViewModel":{"iconName":"MORE_VERT","onTap":{"innertubeCommand":{"clickTrackingParams":"CKkCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","showSheetCommand":{"panelLoadingStrategy":{"inlineContent":{"sheetViewModel":{"content":{"listViewModel":{"listItems":[{"listItemViewModel":{"title":{"content":"현재 재생목록에 추가"},"leadingImage":{"sources":[{"clientResource":{"imageName":"ADD_TO_QUEUE_TAIL"}}]},"rendererContext":{"loggingContext":{"loggingDirectives":{"trackingParams":"CKwCEP6YBBgAIhMI87jNpZqIkgMVwkw4BR3X-yPN","visibility":{"types":"12"}}},"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CKwCEP6YBBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CKwCEP6YBBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","addToPlaylistCommand":{"openMiniplayer":true,"videoId":"Dr3CcSPRunA","listType":"PLAYLIST_EDIT_LIST_TYPE_QUEUE","onCreateListCommand":{"clickTrackingParams":"CKwCEP6YBBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/playlist/create"}},"createPlaylistServiceEndpoint":{"videoIds":["Dr3CcSPRunA"],"params":"CAQ%3D"}},"videoIds":["Dr3CcSPRunA"],"videoCommand":{"clickTrackingParams":"CKwCEP6YBBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNygE
EclbZUQ==","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc=\u0026pxtags=Cg4KAnR4Egg1MTY2NjQ2Mw\u0026rxtags=Cg4KAnR4Egg1MTY2NjQ2Mw%2CCg4KAnR4Egg1MTY2NjQ2NA%2CCg4KAnR4Egg1MTY2NjQ2NQ%2CCg4KAnR4Egg1MTY2NjQ2Ng%2CCg4KAnR4Egg1MTY2NjQ2Nw"}}}}}}}]}}}}}}},{"listItemViewModel":{"title":{"content":"재생목록에 저장"},"leadingImage":{"sources":[{"clientResource":{"imageName":"BOOKMARK_BORDER"}}]},"rendererContext":{"loggingContext":{"loggingDirectives":{"trackingParams":"CKsCEJSsCRgBIhMI87jNpZqIkgMVwkw4BR3X-yPN","visibility":{"types":"12"}}},"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CKsCEJSsCRgBIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"url":"https://accounts.google.com/ServiceLogin?service=youtube\u0026uilel=3\u0026passive=true\u0026continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Dko\u0026hl=ko","webPageType":"WEB_PAGE_TYPE_UNKNOWN","rootVe":83769}},"signInEndpoint":{"nextEndpoint":{"clickTrackingParams":"CKsCEJSsCRgBIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","showSheetCommand":{"panelLoadingStrategy":{"requestTemplate":{"panelId":"PAadd_to_playlist","params":"-gYNCgtEcjNDY1NQUnVuQQ%3D%3D"}}}}}}}}}}},{"listItemViewModel":{"title":{"content":"공유"},"leadingImage":{"sources":[{"clientResource":{"imageName":"SHARE"}}]},"rendererContext":{"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CKkCEPBbIhMI87jNpZqIkgMVwkw
4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/share/get_share_panel"}},"shareEntityServiceEndpoint":{"serializedShareEntity":"CgtEcjNDY1NQUnVuQQ%3D%3D","commands":[{"clickTrackingParams":"CKkCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","openPopupAction":{"popup":{"unifiedSharePanelRenderer":{"trackingParams":"CKoCEI5iIhMI87jNpZqIkgMVwkw4BR3X-yPN","showLoadingSpinner":true}},"popupType":"DIALOG","beReused":true}}]}}}}}}}]}}}}}}}},"accessibilityText":"추가 작업","style":"BUTTON_VIEW_MODEL_STYLE_MONO","trackingParams":"CKkCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TEXT","buttonSize":"BUTTON_VIEW_MODEL_SIZE_DEFAULT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}}}},"contentId":"Dr3CcSPRunA","contentType":"LOCKUP_CONTENT_TYPE_VIDEO","rendererContext":{"loggingContext":{"loggingDirectives":{"trackingParams":"CKgCENTEDBgAIhMI87jNpZqIkgMVwkw4BR3X-yPN","visibility":{"types":"12"}}},"accessibilityContext":{"label":"SuperHappyDevHouse 34 Interviews, Part Two 7분 
49초"},"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CKgCENTEDBgAIhMI87jNpZqIkgMVwkw4BR3X-yPNMgdyZWxhdGVkSIHX_eOo5Of0nAGaAQUIARD4HcoBBHJW2VE=","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","nofollow":true,"watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc=\u0026pxtags=Cg4KAnR4Egg1MTY2NjQ2Mw\u0026rxtags=Cg4KAnR4Egg1MTY2NjQ2Mw%2CCg4KAnR4Egg1MTY2NjQ2NA%2CCg4KAnR4Egg1MTY2NjQ2NQ%2CCg4KAnR4Egg1MTY2NjQ2Ng%2CCg4KAnR4Egg1MTY2NjQ2Nw"}}}}}}}}}},{"lockupViewModel":{"contentImage":{"thumbnailViewModel":{"image":{"sources":[{"url":"https://i.ytimg.com/vi/BGsBjCQA8Xw/hqdefault.jpg?v=696592a2\u0026sqp=-oaymwEiCKgBEF5IWvKriqkDFQgBFQAAAAAYASUAAMhCPQCAokN4AQ==\u0026rs=AOn4CLBX-CVP5j2qNsBpuk2SXYNcrChtXQ","width":168,"height":94},{"url":"https://i.ytimg.com/vi/BGsBjCQA8Xw/hqdefault.jpg?v=696592a2\u0026sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=\u0026rs=AOn4CLD-17SGodDsLPAivamP9XxWNrd57w","width":336,"height":188}]},"overlays":[{"thumbnailOverlayBadgeViewModel":{"thumbnailBadges":[{"thumbnailBadgeViewModel":{"icon":{"sources":[{"clientResource":{"imageName":"LIVE"}}]},"text":"라이브","badgeStyle":"THUMBNAIL_OVERLAY_BADGE_STYLE_LIVE","animationActivationTargetId":"BGsBjCQA8Xw","animationActivationEntityKey":"Eh8veW91dHViZS9hcHAvd2F0Y2gvcGxheWVyX3N0YXRlIMMCKAE%3D","lottieData":{"url":"https://www.gstatic.com/youtube/img/lottie/audio_indicator/audio_indicator_v2.json","settings":{"loop":true,"autoplay":true}},"animatedText":"지금 재생 
중","animationActivationEntitySelectorType":"THUMBNAIL_BADGE_ANIMATION_ENTITY_SELECTOR_TYPE_PLAYER_STATE"}}],"position":"THUMBNAIL_OVERLAY_BADGE_POSITION_BOTTOM_END"}},{"thumbnailHoverOverlayToggleActionsViewModel":{"buttons":[{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"WATCH_LATER","onTap":{"innertubeCommand":{"clickTrackingParams":"CKcCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"addedVideoId":"BGsBjCQA8Xw","action":"ACTION_ADD_VIDEO"}]}}},"accessibilityText":"나중에 볼 동영상","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CKcCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"toggledButtonViewModel":{"buttonViewModel":{"iconName":"CHECK","onTap":{"innertubeCommand":{"clickTrackingParams":"CKYCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"action":"ACTION_REMOVE_VIDEO_BY_VIDEO_ID","removedVideoId":"BGsBjCQA8Xw"}]}}},"accessibilityText":"추가됨","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CKYCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPN","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"isToggled":false,"trackingParams":"CJ8CENTEDBgBIhMI87jNpZqIkgMVwkw4BR3X-yPN"}},{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"ADD_TO_QUEUE_TAIL","onTap":{"innertubeCommand":{"clickTrackingParams":"CKUCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CKUCEPB
bIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","addToPlaylistCommand":{"openMiniplayer":false,"openListPanel":true,"videoId":"BGsBjCQA8Xw","listType":"PLAYLIST_EDIT_LIST_TYPE_QUEUE","onCreateListCommand":{"clickTrackingParams":"CKUCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/playlist/create"}},"createPlaylistServiceEndpoint":{"videoIds":["BGsBjCQA8Xw"],"params":"CAQ%3D"}},"videoIds":["BGsBjCQA8Xw"],"videoCommand":{"clickTrackingParams":"CKUCEPBbIhMI87jNpZqIkgMVwkw4BR3X-yPNygEEclbZUQ==","commandMetadata":{"webCommandMetadata":{"url":" | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/general/latest/gr/lambda-service.html | AWS Lambda endpoints and quotas - AWS General Reference To connect programmatically to an AWS service, you use an endpoint. AWS services offer the following endpoint types in some or all of the AWS Regions that the service supports: IPv4 endpoints, dual-stack endpoints, and FIPS endpoints. Some services provide global endpoints. For more information, see AWS service endpoints. Service quotas, also referred to as limits, are the maximum number of service resources or operations for your AWS account. For more information, see AWS service quotas. The following are the service endpoints and service quotas for this service. Service endpoints Region Name Region Endpoint Protocol US East (Ohio) us-east-2 lambda.us-east-2.amazonaws.com lambda-fips.us-east-2.amazonaws.com lambda.us-east-2.api.aws HTTPS HTTPS HTTPS US East (N. Virginia) us-east-1 lambda.us-east-1.amazonaws.com lambda-fips.us-east-1.amazonaws.com lambda.us-east-1.api.aws HTTPS HTTPS HTTPS US West (N.
California) us-west-1 lambda.us-west-1.amazonaws.com lambda-fips.us-west-1.amazonaws.com lambda.us-west-1.api.aws HTTPS HTTPS HTTPS US West (Oregon) us-west-2 lambda.us-west-2.amazonaws.com lambda-fips.us-west-2.amazonaws.com lambda.us-west-2.api.aws HTTPS HTTPS HTTPS Africa (Cape Town) af-south-1 lambda.af-south-1.amazonaws.com lambda.af-south-1.api.aws HTTPS HTTPS Asia Pacific (Hong Kong) ap-east-1 lambda.ap-east-1.amazonaws.com lambda.ap-east-1.api.aws HTTPS HTTPS Asia Pacific (Hyderabad) ap-south-2 lambda.ap-south-2.amazonaws.com lambda.ap-south-2.api.aws HTTPS HTTPS Asia Pacific (Jakarta) ap-southeast-3 lambda.ap-southeast-3.amazonaws.com lambda.ap-southeast-3.api.aws HTTPS HTTPS Asia Pacific (Malaysia) ap-southeast-5 lambda.ap-southeast-5.amazonaws.com lambda.ap-southeast-5.api.aws HTTPS HTTPS Asia Pacific (Melbourne) ap-southeast-4 lambda.ap-southeast-4.amazonaws.com lambda.ap-southeast-4.api.aws HTTPS HTTPS Asia Pacific (Mumbai) ap-south-1 lambda.ap-south-1.amazonaws.com lambda.ap-south-1.api.aws HTTPS HTTPS Asia Pacific (New Zealand) ap-southeast-6 lambda.ap-southeast-6.amazonaws.com lambda.ap-southeast-6.api.aws HTTPS HTTPS Asia Pacific (Osaka) ap-northeast-3 lambda.ap-northeast-3.amazonaws.com lambda.ap-northeast-3.api.aws HTTPS HTTPS Asia Pacific (Seoul) ap-northeast-2 lambda.ap-northeast-2.amazonaws.com lambda.ap-northeast-2.api.aws HTTPS HTTPS Asia Pacific (Singapore) ap-southeast-1 lambda.ap-southeast-1.amazonaws.com lambda.ap-southeast-1.api.aws HTTPS HTTPS Asia Pacific (Sydney) ap-southeast-2 lambda.ap-southeast-2.amazonaws.com lambda.ap-southeast-2.api.aws HTTPS HTTPS Asia Pacific (Taipei) ap-east-2 lambda.ap-east-2.amazonaws.com lambda.ap-east-2.api.aws HTTPS HTTPS Asia Pacific (Thailand) ap-southeast-7 lambda.ap-southeast-7.amazonaws.com lambda.ap-southeast-7.api.aws HTTPS HTTPS Asia Pacific (Tokyo) ap-northeast-1 lambda.ap-northeast-1.amazonaws.com lambda.ap-northeast-1.api.aws HTTPS HTTPS Canada (Central) ca-central-1 
lambda.ca-central-1.amazonaws.com lambda.ca-central-1.api.aws HTTPS HTTPS Canada West (Calgary) ca-west-1 lambda.ca-west-1.amazonaws.com lambda.ca-west-1.api.aws HTTPS HTTPS Europe (Frankfurt) eu-central-1 lambda.eu-central-1.amazonaws.com lambda.eu-central-1.api.aws HTTPS HTTPS Europe (Ireland) eu-west-1 lambda.eu-west-1.amazonaws.com lambda.eu-west-1.api.aws HTTPS HTTPS Europe (London) eu-west-2 lambda.eu-west-2.amazonaws.com lambda.eu-west-2.api.aws HTTPS HTTPS Europe (Milan) eu-south-1 lambda.eu-south-1.amazonaws.com lambda.eu-south-1.api.aws HTTPS HTTPS Europe (Paris) eu-west-3 lambda.eu-west-3.amazonaws.com lambda.eu-west-3.api.aws HTTPS HTTPS Europe (Spain) eu-south-2 lambda.eu-south-2.amazonaws.com lambda.eu-south-2.api.aws HTTPS HTTPS Europe (Stockholm) eu-north-1 lambda.eu-north-1.amazonaws.com lambda.eu-north-1.api.aws HTTPS HTTPS Europe (Zurich) eu-central-2 lambda.eu-central-2.amazonaws.com lambda.eu-central-2.api.aws HTTPS HTTPS Israel (Tel Aviv) il-central-1 lambda.il-central-1.amazonaws.com lambda.il-central-1.api.aws HTTPS HTTPS Mexico (Central) mx-central-1 lambda.mx-central-1.amazonaws.com lambda.mx-central-1.api.aws HTTPS HTTPS Middle East (Bahrain) me-south-1 lambda.me-south-1.amazonaws.com lambda.me-south-1.api.aws HTTPS HTTPS Middle East (UAE) me-central-1 lambda.me-central-1.amazonaws.com lambda.me-central-1.api.aws HTTPS HTTPS South America (São Paulo) sa-east-1 lambda.sa-east-1.amazonaws.com lambda.sa-east-1.api.aws HTTPS HTTPS AWS GovCloud (US-East) us-gov-east-1 lambda.us-gov-east-1.amazonaws.com lambda-fips.us-gov-east-1.amazonaws.com lambda.us-gov-east-1.api.aws HTTPS HTTPS HTTPS AWS GovCloud (US-West) us-gov-west-1 lambda.us-gov-west-1.amazonaws.com lambda-fips.us-gov-west-1.amazonaws.com lambda.us-gov-west-1.api.aws HTTPS HTTPS HTTPS Service quotas Important New AWS accounts have reduced concurrency and memory quotas. AWS raises these quotas automatically based on your usage. You can also request a quota increase . 
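The regional endpoints tabulated above follow a consistent naming pattern: `lambda.<region>.amazonaws.com` for IPv4, `lambda-fips.<region>.amazonaws.com` for the US and GovCloud regions that offer FIPS endpoints, and `lambda.<region>.api.aws` for dual-stack. A minimal sketch of that pattern (the helper function is our own illustration, not an AWS API):

```python
# Illustration of the endpoint naming pattern shown in the table above.
# The pattern is taken from the table; the helper itself is hypothetical.

def lambda_endpoints(region, fips=False):
    """Build the IPv4, optional FIPS, and dual-stack Lambda endpoints for a region."""
    endpoints = [f"lambda.{region}.amazonaws.com"]           # IPv4 endpoint
    if fips:                                                 # US and GovCloud regions only
        endpoints.append(f"lambda-fips.{region}.amazonaws.com")
    endpoints.append(f"lambda.{region}.api.aws")             # dual-stack endpoint
    return endpoints

print(lambda_endpoints("us-east-2", fips=True))
# ['lambda.us-east-2.amazonaws.com', 'lambda-fips.us-east-2.amazonaws.com', 'lambda.us-east-2.api.aws']
```

Consult the table itself for which regions actually expose a FIPS endpoint; the flag here only mirrors that distinction.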
Name Default Adjustable Description Asynchronous invocation request throughput on Lambda Managed Instances Each supported Region: 5 Megabytes/Second No The maximum invoke request throughput (MB/s) for asynchronous invocations on Lambda Managed Instances. Asynchronous invocations occur when Lambda functions are invoked by services such as Amazon S3 or Amazon SNS. Asynchronous payload Each supported Region: 1,024 Kilobytes No The maximum size of an incoming asynchronous invocation request. Capacity providers Each supported Region: 1,000 Yes The maximum number of capacity providers that can be created in an account. Concurrency scaling rate Each supported Region: 1,000 No The maximum immediate increase in function concurrency that can occur in response to an increase in traffic or execution duration. Every 10 seconds, each synchronously invoked Lambda function can scale by an additional 1,000 concurrent executions until the total concurrency across your functions in a Region reaches the account concurrency limit. Concurrent executions Each supported Region: 1,000 Count Yes The maximum number of events that functions can process simultaneously in the current Region. Deployment package size (console editor) Each supported Region: 3 Megabytes No The maximum size of a deployment package or layer archive when you upload it through the console editor. Upload larger files with Amazon S3. Deployment package size (direct upload) Each supported Region: 50 Megabytes No The maximum size of a deployment package or layer archive when you upload it directly to Lambda. Upload larger files with Amazon S3. Deployment package size (unzipped) Each supported Region: 250 Megabytes No The maximum size of the contents of a deployment package or layer archive when it's unzipped.
Durable execution storage written in megabytes Each supported Region: 100 Megabytes No The maximum cumulative amount of data persisted per durable execution, including input and output payloads, checkpoints and error data, measured in megabytes. DynamoDB Event Source Mapping throughput on Lambda Managed Instances Each supported Region: 10 Megabytes/Second No The maximum data throughput (MB/s) per DynamoDB event source mapping on Lambda Managed Instances. Elastic network interfaces per VPC Each supported Region: 500 Yes The maximum number of network interfaces that Lambda creates for a VPC with functions attached. Lambda creates a network interface for each combination of subnet and security group that functions connect to. Environment variable size Each supported Region: 4 Kilobytes No The maximum combined size of environment variables that are configured on a function. File descriptors Each supported Region: 1,024 No The maximum number of file descriptors that a function can have open. Function and layer storage Each supported Region: 75 Gigabytes Yes The amount of storage that's available for deployment packages and layer archives in the current Region. Function invocation rate to initiate durable executions Each supported Region: 300 Yes The maximum number of new durable executions that you can initiate per second. Function layers Each supported Region: 5 No The maximum number of layers that you can add to your function. Function resource-based policy Each supported Region: 20 Kilobytes No The maximum combined size of resource-based policies that are configured on a function. Function timeout Each supported Region: 900 Seconds No The maximum timeout that you can configure for a function. Function versions per capacity provider Each supported Region: 100 No The maximum number of function versions per capacity provider.
Kafka Event Source Mapping throughput in default mode on Lambda Managed Instances Each supported Region: 1 Megabytes/Second No The maximum data throughput (MB/s) per Kafka event source mapping in default mode on Lambda Managed Instances. Kafka Event Source Mappings in default mode on Lambda Managed Instances Each supported Region: 100 No The maximum number of Kafka event source mappings in default mode on Lambda Managed Instances. Kinesis Event Source Mapping throughput on Lambda Managed Instances Each supported Region: 25 Megabytes/Second No The maximum data throughput (MB/s) per Kinesis event source mapping on Lambda Managed Instances. Maximum number of durable operations per durable execution Each supported Region: 3,000 No The maximum number of durable operations (including customer-initiated operations and automatic retries) that trigger state persistence within a single durable execution. Maximum running durable executions Each supported Region: 1,000,000 Yes The maximum number of durable executions that can be running simultaneously in the current region. Processes and threads Each supported Region: 1,024 No The maximum combined number of processes and threads that a function can have open. Rate of CheckpointDurableExecution API requests Each supported Region: 1,000 Yes The maximum number of CheckpointDurableExecution API requests per second. Rate of GetDurableExecution API requests Each supported Region: 20 Yes The maximum number of GetDurableExecution API requests per second. Rate of GetDurableExecutionHistory API requests Each supported Region: 15 Yes The maximum number of GetDurableExecutionHistory API requests per second. Rate of GetDurableExecutionState API requests Each supported Region: 1,000 Yes The maximum number of GetDurableExecutionState API requests per second. Rate of GetFunction API requests Each supported Region: 100 No The maximum number of GetFunction API requests per second. 
Rate of GetPolicy API requests Each supported Region: 15 No The maximum number of GetPolicy API requests per second. Rate of ListDurableExecutionsByFunction API requests Each supported Region: 15 Yes The maximum number of ListDurableExecutionsByFunction API requests per second. Rate of SendDurableExecutionCallbackFailure API requests Each supported Region: 300 Yes The maximum number of SendDurableExecutionCallbackFailure API requests per second. Rate of SendDurableExecutionCallbackHeartbeat API requests Each supported Region: 300 Yes The maximum number of SendDurableExecutionCallbackHeartbeat API requests per second. Rate of SendDurableExecutionCallbackSuccess API requests Each supported Region: 300 Yes The maximum number of SendDurableExecutionCallbackSuccess API requests per second. Rate of StopDurableExecution API requests Each supported Region: 30 Yes The maximum number of StopDurableExecution API requests per second. Rate of capacity provider read API requests Each supported Region: 15 No The maximum combined rate (requests per second) for all capacity provider read APIs. Rate of capacity provider write API requests Each supported Region: 1 No The maximum combined rate (requests per second) for all capacity provider write APIs. Rate of control plane API requests (excludes invocation, GetFunction, and GetPolicy requests) Each supported Region: 15 No The maximum number of API requests per second (excluding invocation, GetFunction, and GetPolicy requests). SQS Event Source Mapping throughput in default mode on Lambda Managed Instances Each supported Region: 5 Megabytes/Second No The maximum data throughput (MB/s) per SQS event source mapping in default mode on Lambda Managed Instances. Synchronous payload Each supported Region: 6 Megabytes No The maximum size of an incoming synchronous invocation request or outgoing response. Test events (console editor) Each supported Region: 10 No The maximum number of test events for a function through the console editor.
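Two of the quotas above are easy to run into from application code: the 6 Megabyte synchronous payload limit and the 1,024 Kilobyte asynchronous payload limit. A hedged sketch of a pre-flight check (the limit values come from the table; the helper function is our own, and the invocation-type names follow the Lambda Invoke API's `RequestResponse`/`Event` convention):

```python
import json

# Payload quotas quoted from the table above; the checking helper is ours.
SYNC_LIMIT = 6 * 1024 * 1024    # Synchronous payload: 6 Megabytes
ASYNC_LIMIT = 1024 * 1024       # Asynchronous payload: 1,024 Kilobytes

def payload_within_quota(payload, invocation_type="RequestResponse"):
    """Return True if a JSON payload fits the Lambda invocation payload quota."""
    size = len(json.dumps(payload).encode("utf-8"))
    limit = ASYNC_LIMIT if invocation_type == "Event" else SYNC_LIMIT
    return size <= limit

print(payload_within_quota({"msg": "x" * 100}))                          # small payload: True
print(payload_within_quota({"msg": "x" * (2 * 1024 * 1024)}, "Event"))   # ~2 MB async: False
```

Checking client-side avoids a round trip that would otherwise fail with a request-too-large error.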
For more information, see Lambda quotas in the AWS Lambda Developer Guide.
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-create-trigger | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Developer Guide In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected.
For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules . Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . 
In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0

using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}

Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}

Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);

            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());

            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript.
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP.
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0

use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::{Client};
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(
    s3_client: &Client,
    evt: LambdaEvent<S3Event>
) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code. Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger . Select S3 . Under Bucket , select the bucket you created earlier in the tutorial. Under Event types , be sure that All object create events is selected.
Under Recursive invocation , select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add . Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name , enter MyTestEvent . In the Event JSON , paste the following test event. Be sure to replace these values: Replace us-east-1 with the region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg ). 
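Note that the object key in an S3 event record arrives URL-encoded (which is why the test event below uses test%2Fkey). The tutorial's Python handler decodes it with urllib.parse.unquote_plus; extracting the bucket and decoded key from a record can be sketched in isolation, without any AWS calls (the helper name and the trimmed-down event dict are illustrative):

```python
import urllib.parse

def bucket_and_key(event: dict) -> tuple[str, str]:
    """Extract the bucket name and URL-decoded object key from an S3 event."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"], encoding="utf-8")
    return bucket, key

# Minimal stand-in for the test event shown in this tutorial.
event = {
    "Records": [
        {"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"}}}
    ]
}
print(bucket_and_key(event))  # ('amzn-s3-demo-bucket', 'test/key')
```

unquote_plus also turns "+" into a space, which matters for keys like HappyFace+1.jpg; the JavaScript example achieves the same with decodeURIComponent plus a replace.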
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save . Choose Test . If your function runs successfully, you’ll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier. Choose Upload .
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open , then choose Upload . To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs , then choose Log groups . Choose the log group for your function ( /aws/lambda/s3-trigger-tutorial ). Under Log streams , choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions , Delete . Type confirm in the text input field and choose Delete . To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete . Enter the name of the role in the text input field and choose Delete . To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete . Enter the name of the bucket in the text input field. Choose Delete bucket . Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images , the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. 
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
https://docs.aws.amazon.com/pt_br/AmazonCloudFront/latest/DeveloperGuide/GettingStarted.html | Getting started with CloudFront - Amazon CloudFront Developer Guide The topics in this section show how to start delivering your content with Amazon CloudFront. The Set up your AWS account topic describes the prerequisites for the tutorials that follow, such as creating an AWS account and creating a user with administrative access. The basic distribution tutorial shows how to configure origin access control (OAC) to send authenticated requests to an Amazon S3 origin. The secure static website tutorial shows how to create a secure static website for your domain name by using OAC with an Amazon S3 origin. That tutorial uses an Amazon CloudFront (CloudFront) template for configuration and deployment. Topics: Set up your AWS account. Get started with a basic CloudFront distribution. Get started with a basic distribution (AWS CLI). Get started with a secure static website.
https://docs.aws.amazon.com/es_es/AmazonS3/latest/userguide/bucketnamingrules.html | General purpose bucket naming rules - Amazon Simple Storage Service User Guide When you create a general purpose bucket, take into account the length, valid characters, format, and uniqueness of bucket names. The following sections provide information about naming general purpose buckets, including the naming rules, best practices, and an example of creating a general purpose bucket whose name includes a globally unique identifier (GUID). For information about object key names, see Creating object key names. To create a general purpose bucket, see Creating a general purpose bucket. Topics: General purpose bucket naming rules. Example general purpose bucket names. Best practices. Creating a bucket that uses a GUID in the bucket name. General purpose bucket naming rules The following naming rules apply to general purpose buckets. Bucket names must be between 3 characters (minimum) and 63 characters (maximum) long. Bucket names can consist only of lowercase letters, numbers, periods ( . ), and hyphens ( - ). Bucket names must begin and end with a letter or number. Bucket names must not contain two adjacent periods. Bucket names must not be formatted as an IP address (for example, 192.168.5.4 ).
Bucket names must not start with the prefix xn-- . Bucket names must not start with the prefix sthree- . Bucket names must not start with the prefix amzn-s3-demo- . Bucket names must not end with the suffix -s3alias . This suffix is reserved for access point alias names. For more information, see Access point aliases. Bucket names must not end with the suffix --ol-s3 . This suffix is reserved for Object Lambda access point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda access point. Bucket names must not end with the suffix .mrap . This suffix is reserved for Multi-Region Access Point names. For more information, see Rules for naming Amazon S3 Multi-Region Access Points. Bucket names must not end with the suffix --x-s3 . This suffix is reserved for directory buckets. For more information, see Directory bucket naming rules. Bucket names must not end with the suffix --table-s3 . This suffix is reserved for S3 Tables buckets. For more information, see Amazon S3 table bucket, table, and namespace naming rules. Buckets used with Amazon S3 Transfer Acceleration can't have periods ( . ) in their names. For more information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration. Important Bucket names must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions.
AWS currently has three partitions: aws (commercial Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions). Another AWS account in the same partition can't use the same bucket name until the bucket is deleted. After a bucket is deleted, be aware that another AWS account in the same partition can use the same bucket name for a new bucket, and could therefore receive requests intended for the deleted bucket. If you want to avoid this, or to continue using the same bucket name, don't delete the bucket. We recommend that you empty the bucket, keep it, and instead block requests to it as needed. For buckets that are no longer actively used, we recommend emptying the bucket of all objects to minimize costs while retaining the bucket itself. When you create a general purpose bucket, you choose its name and the AWS Region where it will be created. After you create a general purpose bucket, you can't change its name or Region. Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects stored in it. Note Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names up to 255 characters long and could include uppercase letters and underscores. Beginning March 1, 2018, new buckets in US East (N. Virginia) must conform to the same rules applied in all other Regions. Example general purpose bucket names The following bucket names show examples of the characters allowed in general purpose bucket names: a-z, 0-9, and hyphens ( - ). The reserved prefix amzn-s3-demo- is used here only as an example. Because it is a reserved prefix, you can't create bucket names that begin with amzn-s3-demo- .
amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111
amzn-s3-demo-bucket

The following example bucket names are valid but not recommended for uses other than static website hosting, because they contain dots (.):

example.com
www.example.com
my.example.s3.bucket

The following example bucket names are not valid:

- amzn_s3_demo_bucket (contains underscores)
- AmznS3DemoBucket (contains uppercase letters)
- amzn-s3-demo-bucket- (begins with the reserved prefix amzn-s3-demo- and ends with a hyphen)
- example..com (contains two adjacent periods)
- 192.168.5.4 (matches the format of an IP address)

Best practices

When naming general purpose buckets, keep the following naming best practices in mind.

Choose a bucket naming scheme that is unlikely to cause naming conflicts. If your application creates buckets automatically, choose a naming scheme that is unlikely to cause naming conflicts, and make sure your application logic chooses a different bucket name if a name is already taken.

Append globally unique identifiers (GUIDs) to your bucket names. We recommend that you create bucket names that are not predictable. Don't write code that assumes your chosen bucket name is available unless you have already created the bucket. One method for creating unpredictable bucket names is to append a globally unique identifier (GUID) to the bucket name, for example, amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111. For more information, see Creating a bucket that uses a GUID in the bucket name.

Avoid using dots (.) in bucket names. For best compatibility, we recommend avoiding dots (.) in bucket names, except for buckets that are used only for static website hosting. If you include dots in a bucket's name, you can't use virtual-host-style addressing over HTTPS unless you perform your own certificate validation. The security certificates used for virtual hosting of buckets don't work for buckets with dots in their names. This limitation doesn't affect buckets used for static website hosting, because static website hosting is only available over HTTP. For more information about virtual-host-style addressing, see Virtual hosting of general purpose buckets. For more information about static website hosting, see Hosting a static website using Amazon S3.

Choose a relevant name. When naming a bucket, choose a name that is relevant to you or your business. Avoid using names associated with others. For example, avoid using AWS or Amazon in your bucket name.

Don't delete buckets in order to reuse bucket names. If a bucket is empty, you can delete it. After a bucket is deleted, the name becomes available for reuse. However, you aren't guaranteed to be able to reuse the name immediately, or at all. After you delete a bucket, some time might pass before you can reuse the name. In addition, another AWS account could create a bucket with the same name before you can reuse it. After you delete a general purpose bucket, be aware that another AWS account in the same partition can use the same bucket name for a new bucket and could therefore receive requests intended for the deleted general purpose bucket.
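The naming rules and the valid/invalid examples above can be checked locally before you attempt to create a bucket. The following is an illustrative sketch only, not an official AWS utility; the function name `is_valid_bucket_name` is invented here, and it covers only the subset of rules listed on this page (it does not check global uniqueness, which only the service can verify).

```python
import re

# Subset of the general purpose bucket naming rules described above.
RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap", "--x-s3", "--table-s3")


def is_valid_bucket_name(name: str) -> bool:
    """Return True if `name` passes the naming checks covered on this page."""
    # 3 to 63 characters long.
    if not 3 <= len(name) <= 63:
        return False
    # Only lowercase letters, digits, dots, and hyphens;
    # must begin and end with a letter or digit.
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False
    # No two adjacent periods.
    if ".." in name:
        return False
    # Must not be formatted like an IP address.
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False
    # Reserved prefixes and suffixes.
    if name.startswith(RESERVED_PREFIXES) or name.endswith(RESERVED_SUFFIXES):
        return False
    return True
```

Note that this check rejects names beginning with amzn-s3-demo-, consistent with the rule that the prefix is reserved; the documentation uses such names for illustration only.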
If you want to avoid this, or want to continue using the same general purpose bucket name, don't delete the general purpose bucket. We recommend that you empty the bucket, keep it, and instead block requests to it as needed.

Creating a bucket that uses a GUID in the bucket name

The following examples show how to create a general purpose bucket that uses a GUID at the end of the bucket name.

The following AWS CLI example creates a general purpose bucket in the US West (N. California) Region (us-west-1) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the user input placeholders with your own information.

```shell
aws s3api create-bucket \
    --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \
    --region us-west-1 \
    --create-bucket-configuration LocationConstraint=us-west-1
```

The following example shows how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region (us-east-1) by using the AWS SDK for Java. To use this example, replace the user input placeholders with your own information. For more information about other AWS SDKs, see Tools to Build on AWS.

```java
import com.amazonaws.regions.Regions;
import com.amazonaws.services.s3.AmazonS3;
import com.amazonaws.services.s3.AmazonS3ClientBuilder;
import com.amazonaws.services.s3.model.CreateBucketRequest;
import java.util.UUID;

public class CreateBucketWithUUID {
    public static void main(String[] args) {
        final AmazonS3 s3 = AmazonS3ClientBuilder.standard()
                .withRegion(Regions.US_EAST_1)
                .build();

        // Append a hyphen-free GUID to the example bucket name prefix.
        String bucketName = "amzn-s3-demo-bucket"
                + UUID.randomUUID().toString().replace("-", "");

        CreateBucketRequest createRequest = new CreateBucketRequest(bucketName);
        System.out.println(bucketName);
        s3.createBucket(createRequest);
    }
}
```
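As a cross-check of the CLI pipeline above (`uuidgen | tr -d - | tr '[:upper:]' '[:lower:]'`), here is a small Python sketch that builds the same style of name locally. This is illustrative only; `make_guid_bucket_name` is an invented helper, not part of any AWS SDK, and as noted above the amzn-s3-demo- prefix is reserved and can't actually be created.

```python
import uuid


def make_guid_bucket_name(prefix: str) -> str:
    """Append a lowercase, hyphen-free GUID to `prefix`, mirroring
    the `uuidgen | tr -d - | tr '[:upper:]' '[:lower:]'` pipeline."""
    guid = uuid.uuid4().hex  # 32 lowercase hex characters, no hyphens
    name = prefix + guid
    # General purpose bucket names are limited to 63 characters.
    if len(name) > 63:
        raise ValueError(f"{name!r} exceeds the 63-character limit")
    return name
```

For example, a 20-character prefix such as "amzn-s3-demo-bucket1" yields a 52-character name, comfortably under the 63-character limit; longer prefixes are rejected rather than silently truncated.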
https://docs.aws.amazon.com/de_de/lambda/latest/dg/with-s3-example.html

Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda
AWS Lambda Developer Guide

Contents:
- Create an Amazon S3 bucket
- Upload a test object to your bucket
- Create a permissions policy
- Create an execution role
- Create the Lambda function
- Deploy the function code
- Create the Amazon S3 trigger
- Test the Lambda function
- Clean up your resources
- Next steps

Tutorial: Using an Amazon S3 trigger to invoke a Lambda function

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

This tutorial shows how to:

- Create an Amazon S3 bucket.
- Create a Lambda function that outputs the object type of objects in an Amazon S3 bucket.
- Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.
- Test your function, first with a dummy event and then with the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.
Create an Amazon S3 bucket

To create an Amazon S3 bucket:

1. Open the Amazon S3 console and choose the General purpose buckets page. Choose the AWS Region closest to your geographic location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
2. Choose Create bucket.
3. Under General configuration, do the following:
   - Make sure that General purpose is selected for Bucket type.
   - For Bucket name, enter a globally unique name that conforms to the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
4. Leave all other options at their default values and choose Create bucket.

Upload a test object to your bucket

To upload a test object:

1. Open the Buckets page of the Amazon S3 console and choose the bucket you created in the previous step.
2. Choose Upload.
3. Choose Add files and select the object you want to upload. You can select any file (for example, HappyFace.jpg).
4. Choose Open, then choose Upload.

Later in the tutorial, you'll test your Lambda function with this object.

Create a permissions policy

Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.

To create the policy:

1. Open the Policies page of the IAM console.
2. Choose Create policy.
3. Choose the JSON tab, then copy the following custom JSON policy into the JSON editor.
```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}
```

4. Choose Next: Tags.
5. Choose Next: Review.
6. Under Review policy, for the policy Name, enter s3-trigger-tutorial.
7. Choose Create policy.

Create an execution role

An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function access to AWS services and resources. In this step, you create an execution role using the permissions policy you created in the previous step.

To create an execution role and attach your custom permissions policy:

1. Open the Roles page of the IAM console.
2. Choose Create role.
3. For the trusted entity type, choose AWS service; for the use case, choose Lambda.
4. Choose Next.
5. In the policy search box, enter s3-trigger-tutorial.
6. In the search results, select the policy you created (s3-trigger-tutorial), then choose Next.
7. Under Role details, for the Role name, enter lambda-s3-trigger-role, then choose Create role.

Create the Lambda function

Create a Lambda function in the console using the Python 3.14 runtime.

To create the Lambda function:

1. Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region where you created your Amazon S3 bucket. You can change your Region using the drop-down list at the top of the screen.
2. Choose Create function.
3. Choose Author from scratch.
4. Under Basic information, do the following:
   - For Function name, enter s3-trigger-tutorial.
   - For Runtime, choose Python 3.14.
   - For Architecture, choose x86_64.
5. Under Change default execution role, do the following:
   - Expand the tab, then choose Use an existing role.
   - Select the lambda-s3-trigger-role you created earlier.
6. Choose Create function.

Deploy the function code

This tutorial uses the Python 3.14 runtime, but sample code files for other runtimes are also provided. You can select the tab in the following box to see the code for the runtime you're interested in.

The Lambda function retrieves the key name of the uploaded object and the bucket name from the event parameter it receives from Amazon S3. The function then uses the get_object method of the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.

To deploy the function code:

1. In the following box, choose the Python tab and copy the code.

.NET — SDK for .NET

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using .NET:

```csharp
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}
```

Go — SDK for Go V2

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Go:

```go
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s",
			bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}
```

Java — SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Java:

```java
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey
                    + " of type " + headObject.contentType());

            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}
```

JavaScript — SDK for JavaScript (v3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using JavaScript:

```javascript
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};
```

Consuming an S3 event with Lambda using TypeScript:

```typescript
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};
```

PHP — SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.
Consuming an S3 event with Lambda using PHP:

```php
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket
                    . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();

return new Handler($logger);
```

Python — SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Python:

```python
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'],
                                    encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist '
              'and your bucket is in the same region as this '
              'function.'.format(key, bucket))
        raise e
```

Ruby — SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby:

```ruby
require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end
```

Rust — SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust:

```rust
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}
```

2. In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created for you.
3. In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger:

- In the Function overview pane, choose Add trigger.
- Choose S3.
- Under Bucket, select the bucket you created earlier in the tutorial.
- Under Event types, confirm that All object create events is selected.
- Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for both input and output is not recommended.
- Choose Add.

Note: When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. If the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 can't create the new event notification because of these checks. You then see the following error message indicating that your trigger could not be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. This error can occur if you previously configured a trigger for a different Lambda function that uses the same bucket, and have since deleted that function or changed its permissions policies.

Test your Lambda function with a dummy event

To test the Lambda function with a dummy event:

- On the console page for your function, choose the Test tab.
- For Event name, enter MyTestEvent.
- In Event JSON, paste the following test event.
Be sure to replace these values:

- Replace us-east-1 with the Region where you created your Amazon S3 bucket.
- Replace both occurrences of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
- Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).

```json
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}
```

- Choose Save.
- Choose Test.
- If your function runs successfully, the Execution results tab shows output similar to the following.
```
Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6
```

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload objects to your Amazon S3 bucket:

1. Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier.
2. Choose Upload.
3. Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose.
4. Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs:

1. Open the CloudWatch console. Make sure you're working in the same AWS Region where you created your Lambda function. You can change your Region using the drop-down list at the top of the screen.
2. Choose Logs, then choose Log groups.
3. Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
4. In the Log streams pane, choose the most recent log stream.
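Aside: every sample handler above URL-decodes the object key it receives in the event, because Amazon S3 URL-encodes keys in event notifications (for example, a key like test/key arrives as test%2Fkey). The following standalone sketch, using only the Python standard library, shows that extraction step against a minimal fragment of the test event from earlier; it is illustrative and does not call AWS.

```python
import urllib.parse

# Minimal fragment of the S3 test event shown earlier in this tutorial.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

# Same extraction that the Python sample handler performs.
bucket = event["Records"][0]["s3"]["bucket"]["name"]
key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"],
                                encoding="utf-8")

print(bucket)  # amzn-s3-demo-bucket
print(key)     # test/key
```

Note that unquote_plus also converts + characters to spaces, which is why keys containing spaces round-trip correctly.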
If your function was invoked correctly in response to your Amazon S3 trigger, you see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

```
2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg
```

Clean up your resources

You can now delete the resources you created for this tutorial, unless you want to keep them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function:

1. Open the Functions page of the Lambda console.
2. Select the function you created.
3. Choose Actions, Delete.
4. Enter confirm in the text input field and choose Delete.

To delete the execution role:

1. Open the Roles page of the IAM console.
2. Select the execution role you created.
3. Choose Delete.
4. Enter the name of the role in the text input field and choose Delete.

To delete the S3 bucket:

1. Open the Amazon S3 console.
2. Select the bucket you created.
3. Choose Delete.
4. Enter the name of the bucket in the text input field.
5. Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file uploaded to a bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
| 2026-01-13T09:30:35
https://docs.aws.amazon.com/it_it/AmazonS3/latest/userguide/bucketnamingrules.html | General purpose bucket naming rules - Amazon Simple Storage Service Documentation Amazon Simple Storage Service (S3) User Guide General purpose bucket naming rules When you create a general purpose bucket, you need to consider the length, valid characters, formatting, and uniqueness of the bucket name. The following sections provide information about naming general purpose buckets, including the naming rules, best practices, and an example of creating a general purpose bucket with a name that includes a globally unique identifier (GUID). For information about object key names, see Creating object key names . To create a general purpose bucket, see Creating a general purpose bucket . Topics General purpose bucket naming rules Examples of general purpose bucket names Best practices Creating a bucket that uses a GUID in the bucket name General purpose bucket naming rules The following rules apply to naming general purpose buckets. Bucket names must be between 3 (minimum) and 63 (maximum) characters long. Bucket names can consist only of lowercase letters, numbers, periods ( . ), and hyphens ( - ). Bucket names must begin and end with a letter or number. Bucket names must not contain adjacent periods. Bucket names must not be formatted as an IP address (for example, 192.168.5.4 ). Bucket names must not start with the prefix xn-- .
Bucket names must not start with the prefix sthree- . Bucket names must not start with the prefix amzn-s3-demo- . Bucket names must not end with the suffix -s3alias . This suffix is reserved for access point alias names. For more information, see Access point aliases . Bucket names must not end with the suffix --ol-s3 . This suffix is reserved for Object Lambda access point alias names. For more information, see How to use a bucket-style alias for your S3 bucket Object Lambda access point . Bucket names must not end with the suffix .mrap . This suffix is reserved for Multi-Region Access Point names. For more information, see Rules for naming Amazon S3 Multi-Region Access Points . Bucket names must not end with the suffix --x-s3 . This suffix is reserved for directory buckets. For more information, see Directory bucket naming rules . Bucket names must not end with the suffix --table-s3 . This suffix is reserved for S3 table buckets. For more information, see Amazon S3 table bucket, table, and namespace naming rules . Buckets used with Amazon S3 Transfer Acceleration can't have periods ( . ) in their names. For more information about Transfer Acceleration, see Configuring fast, secure file transfers using Amazon S3 Transfer Acceleration . Important Bucket names must be unique across all AWS accounts in all AWS Regions within a partition. A partition is a grouping of Regions. AWS currently has three partitions: aws (commercial Regions), aws-cn (China Regions), and aws-us-gov (AWS GovCloud (US) Regions).
A bucket name cannot be used by another AWS account in the same partition until the bucket is deleted. After you delete a bucket, be aware that another AWS account in the same partition can use the same bucket name for a new bucket, and could therefore receive requests intended for the deleted bucket. To avoid this, or to keep using the same bucket name, don't delete the bucket. Instead, consider emptying the bucket and keeping it, blocking requests to it as needed. For buckets that are no longer in use, consider deleting all of the bucket's objects to minimize costs while keeping the bucket itself. When you create a general purpose bucket, you choose its name and the AWS Region to create it in. After you create a general purpose bucket, you can't change its name or Region. Don't include sensitive information in the bucket name. The bucket name is visible in the URLs that point to the objects in the bucket. Note Before March 1, 2018, buckets created in the US East (N. Virginia) Region could have names up to 255 characters long that included uppercase letters and underscores. Beginning March 1, 2018, new buckets in the US East (N. Virginia) Region must conform to the same rules applied in all other Regions. Examples of general purpose bucket names The following are examples of valid general purpose bucket names that use the allowed characters: a-z, 0-9, and hyphens ( - ). The reserved prefix amzn-s3-demo- is used here for illustrative purposes only. Because the prefix is reserved, you can't create bucket names that begin with amzn-s3-demo- .
amzn-s3-demo-bucket1-a1b2c3d4-5678-90ab-cdef-example11111 amzn-s3-demo-bucket The following example bucket names are valid but not recommended for uses other than static website hosting because they contain periods ( . ): example.com www.example.com my.example.s3.bucket The following example bucket names are not valid: amzn_s3_demo_bucket (contains underscores) AmznS3DemoBucket (contains uppercase letters) amzn-s3-demo-bucket- (starts with the reserved prefix amzn-s3-demo- and ends with a hyphen) example..com (contains two consecutive periods) 192.168.5.4 (matches the format of an IP address) Best practices Consider the following naming best practices when you name general purpose buckets. Choose a bucket naming scheme that is unlikely to cause naming conflicts If your application automatically creates buckets, choose a naming scheme that is unlikely to cause naming conflicts, and make sure that your application logic chooses a different bucket name if the chosen name is already taken. Add globally unique identifiers (GUIDs) to your bucket names We recommend creating bucket names that are not predictable. Don't write code that assumes your chosen bucket name is available unless you have already created the bucket yourself. One method for creating unpredictable bucket names is to append a globally unique identifier (GUID) to the bucket name, for example amzn-s3-demo-bucket-a1b2c3d4-5678-90ab-cdef-example11111 . For more information, see Creating a bucket that uses a GUID in the bucket name . Avoid using periods ( . ) in bucket names For best compatibility, we recommend that you avoid using periods ( . ) in bucket names, except for buckets that are used only for static website hosting.
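The rules listed above lend themselves to a quick local check before you attempt to create a bucket. The following Python sketch covers the length, character, adjacency, IP-format, and reserved prefix/suffix rules; it deliberately omits context-dependent rules such as the Transfer Acceleration no-periods restriction, and it cannot check partition-wide uniqueness, which only the service knows:

```python
import re

RESERVED_PREFIXES = ("xn--", "sthree-", "amzn-s3-demo-")
RESERVED_SUFFIXES = ("-s3alias", "--ol-s3", ".mrap", "--x-s3", "--table-s3")


def is_valid_bucket_name(name: str) -> bool:
    """Check a general purpose bucket name against the documented rules."""
    if not 3 <= len(name) <= 63:
        return False  # must be 3-63 characters long
    if not re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name):
        return False  # allowed characters; letter or number at both ends
    if ".." in name:
        return False  # no adjacent periods
    if re.fullmatch(r"\d{1,3}(\.\d{1,3}){3}", name):
        return False  # must not be formatted as an IP address
    if name.startswith(RESERVED_PREFIXES) or name.endswith(RESERVED_SUFFIXES):
        return False  # reserved prefixes and suffixes
    return True
```

A name that passes this check can still be rejected at creation time if another account in the partition already owns it.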
If you include periods in a bucket's name, you can't use virtual-host-style addressing over HTTPS unless you perform your own certificate validation. The security certificates used for virtual hosting of buckets don't work for buckets with periods in their names. This limitation doesn't affect buckets used for static website hosting, because static website hosting is available only over HTTP. For more information about virtual-host-style addressing, see Virtual hosting of general purpose buckets . For more information about static website hosting, see Hosting a static website using Amazon S3 . Choose a relevant name When you name a bucket, we recommend choosing a name that is relevant to you or your business. Avoid using names associated with others. For example, avoid using AWS or Amazon in your bucket name. Don't delete buckets whose names you want to reuse If a bucket is empty, you can delete it. After a bucket is deleted, its name becomes available for reuse. However, there is no guarantee that you can reuse the name immediately or at all. It may take several minutes before you can reuse the name of a deleted bucket, and another AWS account could create a bucket with the same name before you do. After you delete a general purpose bucket, be aware that another AWS account in the same partition can use the same bucket name for a new bucket, and could therefore receive requests intended for the deleted bucket. To avoid this, or to keep using the same bucket name, don't delete the general purpose bucket. Instead, consider emptying the bucket and keeping it, blocking requests to it as needed.
Creating a bucket that uses a GUID in the bucket name The following examples show how to create a general purpose bucket that uses a GUID at the end of the bucket name. The following AWS CLI example creates a bucket in the US West (N. California) Region ( us-west-1 ) with an example bucket name that uses a globally unique identifier (GUID). To use this example command, replace the user input placeholders with your own information. aws s3api create-bucket \ --bucket amzn-s3-demo-bucket1$(uuidgen | tr -d - | tr '[:upper:]' '[:lower:]') \ --region us-west-1 \ --create-bucket-configuration LocationConstraint=us-west-1 The following example shows how to create a bucket with a GUID at the end of the bucket name in the US East (N. Virginia) Region ( us-east-1 ) using the AWS SDK for Java. To use this example, replace the user input placeholders with your own information. For information about the other AWS SDKs, see Tools to Build on AWS . import com.amazonaws.regions.Regions; import com.amazonaws.services.s3.AmazonS3; import com.amazonaws.services.s3.AmazonS3ClientBuilder; import com.amazonaws.services.s3.model.CreateBucketRequest; import java.util.UUID; public class CreateBucketWithUUID { public static void main(String[] args) { final AmazonS3 s3 = AmazonS3ClientBuilder.standard().withRegion(Regions.US_EAST_1).build(); String bucketName = "amzn-s3-demo-bucket" + UUID.randomUUID().toString().replace("-", ""); CreateBucketRequest createRequest = new CreateBucketRequest(bucketName); System.out.println(bucketName); s3.createBucket(createRequest); } }
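For completeness, an equivalent sketch with the AWS SDK for Python (Boto3) might look like the following; the prefix and Region are placeholders, and `create_guid_bucket` requires valid AWS credentials when called, so the SDK import is deferred and the name helper runs on its own:

```python
import uuid


def guid_bucket_name(prefix: str) -> str:
    """Append a lowercase, hyphen-free GUID, mirroring `uuidgen | tr -d -`."""
    return f"{prefix}{uuid.uuid4().hex}"


def create_guid_bucket(prefix: str, region: str = "us-west-1") -> str:
    """Create a bucket whose name ends in a GUID; needs AWS credentials."""
    import boto3  # deferred so the name helper is usable without the SDK

    s3 = boto3.client("s3", region_name=region)
    name = guid_bucket_name(prefix)
    s3.create_bucket(
        Bucket=name,
        CreateBucketConfiguration={"LocationConstraint": region},
    )
    return name
```

Calling `create_guid_bucket("my-app-bucket-")` would retry-friendly generate a fresh, unpredictable name on each attempt.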
| 2026-01-13T09:30:35
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-create-bucket | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Documentation AWS Lambda Developer Guide Create an Amazon S3 bucket Upload a test object to your bucket Create a permissions policy Create an execution role Create the Lambda function Deploy the function code Create the Amazon S3 trigger Test the Lambda function Clean up your resources Next steps Tutorial: Using an Amazon S3 trigger to invoke a Lambda function In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected.
For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules . Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . 
In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for {bucket} and {key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning {objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - {e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); try { const { ContentType } = await client.send(new HeadObjectCommand({ Bucket: bucket, Key: key })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consuming an S3 event with Lambda using TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client({ region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP.
<?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ . '/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object {} from bucket {}.
Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: #{JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: #{response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3::Client; use lambda_runtime::{run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS"); if evt.payload.records.len() == 0 { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for {} and object {}", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code: Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger . Select S3 . Under Bucket , select the bucket you created earlier in the tutorial. Under Event types , be sure that All object create events is selected.
Under Recursive invocation , select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add . Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name , enter MyTestEvent . In the Event JSON , paste the following test event. Be sure to replace these values: Replace us-east-1 with the region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg ). 
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": " us-east-1 ", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": " amzn-s3-demo-bucket ", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3::: amzn-s3-demo-bucket " }, "object": { "key": " test%2Fkey ", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save . Choose Test . If your function runs successfully, you’ll see output similar to the following in the Execution results tab. Response "image/jpeg" Function Logs START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST 2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' } 2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier. Choose Upload . 
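The dummy event exercises the same code path as a real notification. The following standalone sketch shows how the handler's first two lines extract and decode the bucket and key from such an event; the event below is a hypothetical minimal stand-in, trimmed to just the fields the handler reads. Note that object keys arrive URL-encoded, so `test%2Fkey` decodes to `test/key`:

```python
import urllib.parse

# Hypothetical minimal stand-in for the notification Amazon S3 sends,
# trimmed to the fields the tutorial's handler actually reads.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

bucket = event["Records"][0]["s3"]["bucket"]["name"]
# Keys are URL-encoded with spaces as '+'; decode before calling S3.
key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])
print(bucket, key)  # amzn-s3-demo-bucket test/key
```

Skipping the decoding step is a common cause of NoSuchKey errors when the uploaded object's name contains spaces or slashes.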
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open , then choose Upload . To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs , then choose Log groups . Choose the log group for your function ( /aws/lambda/s3-trigger-tutorial ). Under Log streams , choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions , Delete . Type confirm in the text input field and choose Delete . To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete . Enter the name of the role in the text input field and choose Delete . To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete . Enter the name of the bucket in the text input field. Choose Delete bucket . Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images , the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. 
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies. | 2026-01-13T09:30:35
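The object key in the sample test event is URL encoded: Amazon S3 delivers a key such as test/key as test%2Fkey, which is why the tutorial's handler code decodes the key before calling the SDK. The following is a minimal, standard-library-only sketch of that extraction step (no AWS calls are made; the event below is a trimmed copy of the sample event from this tutorial):

```python
import json
import urllib.parse

# A trimmed-down copy of the sample test event used in this tutorial
event = json.loads("""
{
  "Records": [
    {
      "s3": {
        "bucket": {"name": "amzn-s3-demo-bucket"},
        "object": {"key": "test%2Fkey"}
      }
    }
  ]
}
""")

# The same extraction the tutorial's Python handler performs:
# read the bucket name directly, and URL-decode the object key
bucket = event["Records"][0]["s3"]["bucket"]["name"]
key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])

print(bucket)  # amzn-s3-demo-bucket
print(key)     # test/key -- %2F is a URL-encoded forward slash
```

The same decoding step appears in every runtime variant of the sample code: HttpUtility.UrlDecode in .NET, decodeURIComponent plus the + replacement in JavaScript and TypeScript, and unquote_plus in Python.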
https://docs.aws.amazon.com/zh_cn/lambda/latest/dg/with-s3-example.html | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Documentation AWS Lambda Developer Guide Create an Amazon S3 bucket Upload a test object to your bucket Create a permissions policy Create an execution role Create the Lambda function Deploy the function code Create the Amazon S3 trigger Test your Lambda function Clean up your resources Next steps Tutorial: Using an Amazon S3 trigger to invoke a Lambda function In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial shows how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event and then with the trigger. By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Choose the AWS Region closest to your geographic location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket. Under General configuration, do the following: For Bucket type, make sure General purpose is selected. For Bucket name, enter a globally unique name that follows the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, periods (.), and hyphens (-). Leave all other options at their default values and choose Create bucket. Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created in the previous step. Choose Upload. Choose Add files and select the object you want to upload. You can select any file (for example, HappyFace.jpg). Choose Open, then choose Upload. Later in the tutorial, you'll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create policy. Choose the JSON tab, then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags. Choose Next: Review. Under Review policy, for the policy Name, enter s3-trigger-tutorial. Choose Create policy. Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, you create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role. For the trusted entity type, choose AWS service, and for the use case, choose Lambda. Choose Next. In the policy search box, enter s3-trigger-tutorial. In the search results, select the policy that you created (s3-trigger-tutorial), then choose Next. Under Role
details, for Role name, enter lambda-s3-trigger-role, then choose Create role. Create the Lambda function Create a Lambda function in the console using the Python 3.13 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function. Choose Author from scratch. Under Basic information, do the following: For Function name, enter s3-trigger-tutorial. For Runtime, choose Python 3.13. For Architecture, choose x86_64. In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role. Choose the lambda-s3-trigger-role that you created earlier. Choose Create function. Deploy the function code This tutorial uses the Python 3.13 runtime, but example code files are also provided for other runtimes. You can select a tab in the following box to see the code for the runtime you're interested in. The Lambda function retrieves the key name of the uploaded object and the bucket name from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ??
new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, '
')); try { const { ContentType } = await client.send(new HeadObjectCommand( { Bucket: bucket, Key: key, })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consuming an S3 event with Lambda using TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client( { region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP. <?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ .
'/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object { } from bucket { }.
Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: # { JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: # { response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object # { key} from bucket # { bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3:: { Client}; use lambda_runtime:: { run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS"); if evt.payload.records.len() == 0 { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for { } and object { }", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code. Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger. Choose S3. Under Bucket, choose the bucket you created earlier in the tutorial. Under Event types, make sure All object create events is selected. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add. Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these checks on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following
destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name, enter MyTestEvent. In Event JSON, paste the following test event. Be sure to replace these values: Replace us-east-1 with the AWS Region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg). { "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save. Choose Test. If your function runs successfully, you'll see output similar to the following in the Execution results tab. Response "image/jpeg" Function Logs START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST 2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' } 2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier. Choose Upload. Choose Add files
and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open, then choose Upload. To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs, then choose Log groups. Choose the log group for your function (/aws/lambda/s3-trigger-tutorial). Under Log streams, choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions, Delete. Type confirm in the text input field and choose Delete. To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete. Enter the name of the role in the text input field and choose Delete. To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete. Enter the name of the bucket in the text input field. Choose Delete bucket. Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies. |
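Because the permissions policy from the "Create a permissions policy" step is plain JSON, it can be sanity-checked offline before being pasted into the IAM console. The following is a small standard-library sketch (no AWS calls are made) that parses the tutorial's policy and confirms it allows the two capabilities the function needs: reading S3 objects and writing CloudWatch Logs.

```python
import json

# The custom policy from the "Create a permissions policy" step of this tutorial
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:GetObject"],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}
""")

# Collect every action the policy explicitly allows
allowed = {
    action
    for stmt in policy["Statement"]
    if stmt["Effect"] == "Allow"
    for action in stmt["Action"]
}

# The function reads S3 objects and writes its logs to CloudWatch
assert "s3:GetObject" in allowed
assert "logs:PutLogEvents" in allowed
print(sorted(allowed))
```

This only checks that the JSON parses and names the expected actions; IAM performs its own full validation when you choose Create policy in the console.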
https://docs.aws.amazon.com/fr_fr/lambda/latest/dg/with-s3-example.html | Didacticiel : utilisation d’un déclencheur Amazon S3 pour invoquer une fonction Lambda - AWS Lambda Didacticiel : utilisation d’un déclencheur Amazon S3 pour invoquer une fonction Lambda - AWS Lambda Documentation AWS Lambda Guide du développeur Créer un compartiment Amazon S3 Charger un objet de test dans votre compartiment Création d’une stratégie d’autorisations Créer un rôle d’exécution Créer la fonction Lambda Déployer le code de la fonction Création d’un déclencheur Amazon S3 Test de la fonction Lambda Nettoyage de vos ressources Étapes suivantes Les traductions sont fournies par des outils de traduction automatique. En cas de conflit entre le contenu d'une traduction et celui de la version originale en anglais, la version anglaise prévaudra. Didacticiel : utilisation d’un déclencheur Amazon S3 pour invoquer une fonction Lambda Dans ce didacticiel, vous allez utiliser la console pour créer une fonction Lambda et configurer un déclencheur pour un compartiment Amazon Simple Storage Service (Amazon S3). Chaque fois que vous ajoutez un objet à votre compartiment Amazon S3, votre fonction s'exécute et affiche le type d'objet dans Amazon CloudWatch Logs. Ce tutoriel montre comment : Créez un compartiment Amazon S3. Créez une fonction Lambda qui renvoie le type d’objet des objets dans un compartiment Amazon S3. Configurez un déclencheur Lambda qui invoque votre fonction lorsque des objets sont chargés dans votre compartiment. Testez votre fonction, d’abord avec un événement fictif, puis en utilisant le déclencheur. En suivant ces étapes, vous apprendrez à configurer une fonction Lambda pour qu’elle s’exécute chaque fois que des objets sont ajoutés ou supprimés d’un compartiment Amazon S3. Vous pouvez compléter ce didacticiel en n’utilisant que la AWS Management Console. 
Créer un compartiment Amazon S3 Pour créer un compartiment Amazon S3 Ouvrez la console Amazon S3 et sélectionnez la page Compartiments à usage général . Sélectionnez le Région AWS plus proche de votre situation géographique. Vous pouvez modifier votre région à l’aide de la liste déroulante en haut de l’écran. Plus loin dans le didacticiel, vous devez créer votre fonction Lambda dans la même région. Choisissez Create bucket (Créer un compartiment). Sous Configuration générale , procédez comme suit : Pour le type de compartiment , veillez à sélectionner Usage général . Pour le nom du compartiment , saisissez un nom unique au monde qui respecte les règles de dénomination du compartiment Amazon S3. Les noms de compartiment peuvent contenir uniquement des lettres minuscules, des chiffres, de points (.) et des traits d’union (-). Conservez les valeurs par défaut de toutes les autres options et choisissez Créer un compartiment . Charger un objet de test dans votre compartiment Pour charger un objet de test Ouvrez la page Compartiments de la console Amazon S3 et choisissez le compartiment que vous avez créé à l’étape précédente. Choisissez Charger . Choisissez Ajouter des fichiers et sélectionnez l’objet que vous souhaitez charger. Vous pouvez sélectionner n’importe quel fichier (par exemple, HappyFace.jpg ). Choisissez Ouvrir , puis Charger . Plus loin dans le tutoriel, vous testerez votre fonction Lambda à l’aide de cet objet. Création d’une stratégie d’autorisations Créez une politique d'autorisation qui permet à Lambda d'obtenir des objets depuis un compartiment Amazon S3 et d'écrire dans Amazon CloudWatch Logs. Pour créer la politique Ouvrez la page stratégies de la console IAM. Choisissez Créer une stratégie . Choisissez l’onglet JSON , puis collez la stratégie personnalisée suivante dans l’éditeur JSON. 
JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choisissez Suivant : Balises . Choisissez Suivant : Vérification . Sous Examiner une stratégie , pour le Nom de la stratégie, saisissez s3-trigger-tutorial . Choisissez Créer une stratégie . Créer un rôle d’exécution Un rôle d'exécution est un rôle Gestion des identités et des accès AWS (IAM) qui accorde à une fonction Lambda l'autorisation d' Services AWS accès et de ressources. Dans cette étape, vous créez un rôle d’exécution à l’aide de la politique d’autorisations que vous avez créée à l’étape précédente. Pour créer un rôle d’exécution et attacher votre politique d’autorisations personnalisée Ouvrez la page Rôles de la console IAM. Sélectionnez Créer un rôle . Pour le type d’entité de confiance, choisissez Service AWS , puis pour le cas d’utilisation, choisissez Lambda . Choisissez Suivant . Dans la zone de recherche de stratégie, entrez s3-trigger-tutorial . Dans les résultats de la recherche, sélectionnez la stratégie que vous avez créée ( s3-trigger-tutorial ), puis choisissez Suivant . Sous Role details (Détails du rôle), pour Role name (Nom du rôle), saisissez lambda-s3-trigger-role , puis sélectionnez Create role (Créer un rôle). Créer la fonction Lambda Créez une fonction Lambda dans la console à l'aide de l'environnement d'exécution Python 3.14. Pour créer la fonction Lambda Ouvrez la page Functions (Fonctions) de la console Lambda. Assurez-vous de travailler dans le même environnement que celui dans Région AWS lequel vous avez créé votre compartiment Amazon S3. Vous pouvez modifier votre région à l’aide de la liste déroulante en haut de l’écran. Choisissez Créer une fonction . Choisissez Créer à partir de zéro . 
Sous Basic information (Informations de base), procédez comme suit : Sous Nom de la fonction , saisissez s3-trigger-tutorial . Pour Runtime , choisissez Python 3.14 . Pour Architecture , choisissez x86_64 . Dans l’onglet Modifier le rôle d’exécution par défaut , procédez comme suit : Ouvrez l’onglet, puis choisissez Utiliser un rôle existant . Sélectionnez le lambda-s3-trigger-role que vous avez créé précédemment. Choisissez Créer une fonction . Déployer le code de la fonction Ce didacticiel utilise le moteur d'exécution Python 3.14, mais nous avons également fourni des exemples de fichiers de code pour d'autres environnements d'exécution. Vous pouvez sélectionner l’onglet dans la zone suivante pour voir le code d’exécution qui vous intéresse. La fonction Lambda récupère le nom de la clé de l’objet chargé et le nom du compartiment à partir du paramètre event qu’elle reçoit d’Amazon S3. La fonction utilise ensuite la méthode get_object du AWS SDK pour Python (Boto3) pour récupérer les métadonnées de l'objet, y compris le type de contenu (type MIME) de l'objet chargé. Pour déployer le code de la fonction Choisissez l’onglet Python dans la zone suivante et copiez le code. .NET SDK pour .NET Note Il y en a plus sur GitHub. Trouvez l’exemple complet et découvrez comment le configurer et l’exécuter dans le référentiel d’ exemples sans serveur . Utilisation d’un événement S3 avec Lambda en utilisant .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. 
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go Kit SDK pour Go V2 Note Il y en a plus sur GitHub. Trouvez l’exemple complet et découvrez comment le configurer et l’exécuter dans le référentiel d’ exemples sans serveur . Utilisation d’un événement S3 avec Lambda en utilisant Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK pour Java 2.x Note Il y en a plus sur GitHub. Trouvez l’exemple complet et découvrez comment le configurer et l’exécuter dans le référentiel d’ exemples sans serveur . Utilisation d’un événement S3 avec Lambda en utilisant Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK pour JavaScript (v3) Note Il y en a plus sur GitHub. Trouvez l’exemple complet et découvrez comment le configurer et l’exécuter dans le référentiel d’ exemples sans serveur . Consommation d'un événement S3 avec Lambda en utilisant. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP
SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the serverless examples repository.
Consuming an S3 event with Lambda using PHP.

<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler {
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context) : void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record)
        {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python
SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the serverless examples repository.

Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby
SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the serverless examples repository.

Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust
SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the serverless examples repository.

Consuming an S3 event with Lambda using Rust.
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(
    s3_client: &Client,
    evt: LambdaEvent<S3Event>
) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from S3");

    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code.

Creating the Amazon S3 trigger

To create the Amazon S3 trigger

In the Function overview pane, choose Add trigger.
Select S3.
Under Bucket, select the bucket you created earlier in the tutorial.
Under Event types, make sure that All object create events is selected.
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for both input and output is not recommended.
Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on all other event notifications configured for that bucket. Because of this check, if the bucket already has event destinations configured for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created:

An error occurred when creating the trigger: Unable to validate the following destination configurations.

You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

Testing your Lambda function with a dummy event

To test the Lambda function with a dummy event

On your function's page in the Lambda console, choose the Test tab.
For Event name, enter MyTestEvent.
In Event JSON, paste the following test event. Make sure to replace the following values:
Replace us-east-1 with the region in which you created your Amazon S3 bucket.
Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.

If your function runs successfully, you'll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Testing the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console.
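The bucket name and URL-decoded object key that the handlers read out of this event can be checked locally before deploying anything. The sketch below is an illustration only (not part of the tutorial): it extracts both values from a trimmed-down copy of the test event above, using `urllib.parse.unquote_plus` the same way the Python sample handler does.

```python
import urllib.parse

# A trimmed-down version of the test event above (illustrative values).
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

# Same extraction logic as the tutorial's Python handler.
bucket = event["Records"][0]["s3"]["bucket"]["name"]
key = urllib.parse.unquote_plus(
    event["Records"][0]["s3"]["object"]["key"], encoding="utf-8"
)

print(bucket)  # amzn-s3-demo-bucket
print(key)     # test/key -- %2F decodes to a forward slash
```

Note that `unquote_plus` also turns `+` into a space, which is why the Node.js and JavaScript samples pair `decodeURIComponent` with a `replace(/\+/g, ' ')` step to get the same behavior.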
To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket

Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier.
Choose Upload.
Choose Add files and use the file selector to choose the object you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs

Open the CloudWatch console. Make sure you're working in the same AWS Region in which you created your Lambda function. You can change your region using the drop-down list at the top of the screen.
Choose Logs, then Log groups.
Choose your function's log group (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Cleaning up your resources

You can now delete the resources you created for this tutorial, unless you want to keep them. By deleting AWS resources you no longer use, you avoid unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.
Select the function you created.
Choose Actions, Delete.
Enter confirm in the text input box, then choose Delete.
To delete the execution role

Open the Roles page of the IAM console.
Select the execution role you created.
Choose Delete.
Enter the role name in the text input field, then choose Delete.

To delete the S3 bucket

Open the Amazon S3 console.
Select the bucket you created.
Choose Delete.
Enter the bucket name in the text input field.
Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file uploaded to your bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to build a .zip file archive deployment package for the function and its dependencies.
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/setting-up-cloudfront.html

Set up your AWS account - Amazon CloudFront Developer Guide

This topic describes preliminary steps, such as creating an AWS account, to prepare you to use Amazon CloudFront.

Topics
Sign up for an AWS account
Create a user with administrative access
Choose how to access CloudFront

Sign up for an AWS account

If you do not have an AWS account, complete the following steps to create one.

To sign up for an AWS account

Open https://portal.aws.amazon.com/billing/signup.
Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access.

AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account.

Create a user with administrative access

After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks.

Secure your AWS account root user

Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using the root user, see Signing in as the root user in the AWS Sign-In User Guide.
Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide . Create a user with administrative access Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide . In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide . Sign in as the user with administrative access To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide . Assign access to additional users In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide . Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide . Choose how to access CloudFront You can access Amazon CloudFront in the following ways: AWS Management Console – The procedures throughout this guide explain how to use the AWS Management Console to perform tasks. AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access CloudFront. SDKs simplify authentication, integrate easily with your development environment, and provide access to CloudFront commands. For more information, see Using CloudFront with an AWS SDK . 
CloudFront API – If you're using a programming language that an SDK isn't available for, see the Amazon CloudFront API Reference for information about API actions and about how to make API requests.

AWS CLI – The AWS Command Line Interface (AWS CLI) is a unified tool for managing AWS services. For information about how to install and configure the AWS CLI, see Install or update to the latest version of the AWS CLI in the AWS Command Line Interface User Guide.

Tools for Windows PowerShell – If you have experience with Windows PowerShell, you might prefer to use AWS Tools for Windows PowerShell. For more information, see Installing the AWS Tools for Windows PowerShell in the AWS Tools for PowerShell User Guide.
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-test-image

Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda Developer Guide

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

To complete this tutorial, you carry out the following steps:

Create source and destination Amazon S3 buckets and upload a sample image.
Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.
Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you'll learn how to use Lambda to run a file-processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
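The tutorial's convention for deriving the output location from the input is simple enough to sketch up front. The snippet below is an illustration only (not part of the tutorial's code): it mirrors the naming scheme the function code uses, where the destination bucket is the source bucket name with a "-resized" suffix and the thumbnail key gets a "resized-" prefix.

```python
def thumbnail_destination(src_bucket: str, src_key: str) -> tuple[str, str]:
    """Mirror the tutorial's naming scheme: destination bucket is the
    source bucket plus "-resized", and the thumbnail key gets a
    "resized-" prefix."""
    return src_bucket + "-resized", "resized-" + src_key

dst = thumbnail_destination("amzn-s3-demo-source-bucket", "HappyFace.jpg")
print(dst)  # ('amzn-s3-demo-source-bucket-resized', 'resized-HappyFace.jpg')
```

This is why the destination bucket you create below must be named exactly `amzn-s3-demo-source-bucket-resized`: the function computes that name from the source bucket at run time rather than reading it from configuration.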
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function.

Topics
Prerequisites
Create two Amazon S3 buckets
Upload a test image to your source bucket
Create a permissions policy
Create an execution role
Create the function deployment package
Create the Lambda function
Configure Amazon S3 to invoke the function
Test your Lambda function with a dummy event
Test your function using the Amazon S3 trigger
Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language you want to use. If you haven't installed the AWS Command Line Interface, follow the steps at Installing or updating the latest version of the AWS CLI to install it. This tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

Note
In Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to save the resized thumbnail when you invoke your function.

AWS Management Console

To create the Amazon S3 buckets (console)

Open the Amazon S3 console and choose the General purpose buckets page.
Choose the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket.
Under General configuration, do the following:
For Bucket type, ensure General purpose is selected.
For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
Leave all other options set to their default values and choose Create bucket.
Repeat steps 1 through 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI

To create the Amazon S3 buckets (AWS CLI)

Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the region you chose.

Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console.
To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

AWS Management Console

To upload a test image to your source bucket (console)

Open the Buckets page of the Amazon S3 console.
Select the source bucket you created in the previous step.
Choose Upload.
Choose Add files and use the file selector to choose the object you want to upload.
Choose Open, then choose Upload.

AWS CLI

To upload a test image to your source bucket (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

AWS Management Console

To create the policy (console)

Open the Policies page of the AWS Identity and Access Management (IAM) console.
Choose Create policy.
Choose the JSON tab, then paste the following custom policy into the JSON editor.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

Choose Next.
Under Policy details, for Policy name, enter LambdaS3Policy.
Choose Create policy.
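A malformed policy document is only rejected when you try to create it, so it can be handy to sanity-check the JSON locally first. The snippet below is purely illustrative (it makes no AWS API call): it parses a compact copy of the policy above with the standard `json` module and collects the actions it allows.

```python
import json

# A compact copy of the tutorial's permissions policy (illustrative).
policy_json = """
{
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
         "Resource": "arn:aws:logs:*:*:*"},
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"}
    ]
}
"""

# json.loads raises an exception if the document is not valid JSON.
policy = json.loads(policy_json)

# Collect every action the policy allows.
allowed = set()
for statement in policy["Statement"]:
    if statement["Effect"] == "Allow":
        allowed.update(statement["Action"])

print(sorted(allowed))
```

This confirms the policy grants exactly what the tutorial needs: `s3:GetObject` to read the uploaded image, `s3:PutObject` to write the thumbnail, and the three `logs:*` actions for CloudWatch Logs.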
AWS CLI

To create the policy (AWS CLI)

Save the following JSON in a file named policy.json.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

From the directory in which you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to the Amazon S3 buckets, you attach the permissions policy you created in the previous step.

AWS Management Console

To create the execution role and attach your permissions policy (console)

Open the Roles page of the IAM console.
Choose Create role.
For Trusted entity type, choose AWS service, and for Use case, choose Lambda.
Choose Next.
Add the permissions policy you created in the previous step by doing the following:
In the policy search box, enter LambdaS3Policy.
In the search results, select the check box for LambdaS3Policy.
Choose Next.
Under Role details, for Role name, enter LambdaS3Role.
Choose Create role.

AWS CLI

To create the execution role and attach your permissions policy (AWS CLI)

Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

From the directory in which you saved the JSON trust policy document, run the following CLI command to create the execution role.

aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json

To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number.

aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy

Create the function deployment package

To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

Node.js

To create the deployment package (Node.js)

Create a directory named lambda-s3 for your function code and dependencies and navigate into it.

mkdir lambda-s3
cd lambda-s3

Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter.

npm init

Save the following function code in a file named index.mjs. Make sure to replace us-east-1 with the AWS Region in which you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;

    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command.

zip -r function.zip .

On Windows, use your preferred zip utility to create the .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file.

Python

To create the deployment package (Python)

Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip downloads and installs a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package works in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

On Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file.

You can also build your deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function.

AWS Management Console

To create the function (console)

To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step.

Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 buckets. You can change your Region using the drop-down list at the top of the screen.

Choose Create function.

Choose Author from scratch.

Under Basic information, do the following:

For Function name, enter CreateThumbnail.

For Runtime, choose Node.js 22.x or Python 3.12 according to the language you chose for your function.

For Architecture, choose x86_64.

In the Change default execution role tab, do the following:

Expand the tab, then choose Use an existing role.

Select the LambdaS3Role you created earlier.

Choose Create function.

To upload the function code (console)

In the Code source pane, choose Upload from.

Choose .zip file.

Choose Upload.

In the file selector, select your .zip file and choose Open.

Choose Save.

AWS CLI

To create the function (AWS CLI)

Run the CLI command for your chosen language. For the role parameter, be sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the Region in which you created your Amazon S3 buckets.

For Node.js, run the following command from the directory containing your function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.14 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to run the function

For your Lambda function to run when you upload an image to the source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using the console or the AWS CLI.

Important

This procedure configures the Amazon S3 bucket to run your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account.

AWS Management Console

To configure the Amazon S3 trigger (console)

Open the Functions page of the Lambda console and choose your function (CreateThumbnail).

Choose Add trigger.

Choose S3.

Under Bucket, select your source bucket.

Under Event types, select All object create events.

Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
You can learn more about recursive invocation patterns in Lambda by reading Recursion patterns that cause run-away Lambda functions on Serverless Land.

Choose Add.

When you create a trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you select permission to invoke your function.

AWS CLI

To configure the Amazon S3 trigger (AWS CLI)

For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name.

aws lambda add-permission --function-name CreateThumbnail \
--principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
--source-account 123456789012

The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.

Note

Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN).

Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and Region.
{
  "LambdaFunctionConfigurations": [
    {
      "Id": "CreateThumbnailEventConfiguration",
      "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail",
      "Events": [ "s3:ObjectCreated:Put" ]
    }
  ]
}

Run the following CLI command to apply the notification settings in the JSON file you created to your source bucket. Replace amzn-s3-demo-source-bucket with the name of your own source bucket.

aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \
--notification-configuration file://notification.json

To learn more about the put-bucket-notification-configuration command and the notification-configuration option, see put-bucket-notification-configuration in the AWS CLI Command Reference.

Test your Lambda function with a dummy event

Before testing your whole setup by adding an image file to your Amazon S3 source bucket, you test that your Lambda function is working correctly by invoking it with a dummy event. An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to your function contains information such as the bucket name, the bucket ARN, and the object key.

AWS Management Console

To test your Lambda function with a dummy event (console)

Open the Functions page of the Lambda console and choose your function (CreateThumbnail).

Choose the Test tab.

To create your test event, in the Test event pane, do the following:

Under Test event action, select Create new event.

For Event name, enter myTestEvent.

For Template, select S3 Put.

Replace the values for the following parameters with your own values.

For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets.

For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket.

For key, replace test%2Fkey with the filename of the test object you uploaded to the source bucket in the step
Upload a test image to your source bucket.

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": { "principalId": "EXAMPLE" },
      "requestParameters": { "sourceIPAddress": "127.0.0.1" },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": { "principalId": "EXAMPLE" },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.

In the Test event pane, choose Test.

To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following:

Open the Buckets page of the Amazon S3 console.

Choose your target bucket and confirm that your resized file is listed in the Objects pane.

AWS CLI

To test your Lambda function with a dummy event (AWS CLI)

Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values:

For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets.

For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket.

For key, replace test%2Fkey with the filename of the test object you uploaded to the source bucket in the step
Upload a test image to your source bucket.

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": { "principalId": "EXAMPLE" },
      "requestParameters": { "sourceIPAddress": "127.0.0.1" },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": { "principalId": "EXAMPLE" },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

From the directory in which you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking a Lambda function.

aws lambda invoke --function-name CreateThumbnail \
--invocation-type RequestResponse --cli-binary-format raw-in-base64-out \
--payload file://dummyS3Event.json outputfile.txt

The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options.

Verify that your function has created a thumbnail version of your image and stored it in your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

You should see output similar to the following.
The Key parameter shows the filename of your resized image file.

{
  "Contents": [
    {
      "Key": "resized-HappyFace.jpg",
      "LastModified": "2023-06-06T21:40:07+00:00",
      "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"",
      "Size": 2633,
      "StorageClass": "STANDARD"
    }
  ]
}

Test your function using the Amazon S3 trigger

Now that you've confirmed your Lambda function is operating correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add an image to the source bucket, your Lambda function is invoked automatically. Your function creates a resized version of the file and stores it in your target bucket.

AWS Management Console

To test your Lambda function using the Amazon S3 trigger (console)

To upload an image to your Amazon S3 bucket, do the following:

Open the Buckets page of the Amazon S3 console and choose your source bucket.

Choose Upload.

Choose Add files and use the file selector to choose the image file you want to upload. Your image object can be a .jpg or .png file.

Choose Open, then choose Upload.

Verify that Lambda has stored a resized version of your image file in your target bucket by doing the following:

Navigate back to the Buckets page of the Amazon S3 console and choose your destination bucket.

In the Objects pane, you should now see two resized image files, one from each of your tests of the Lambda function. To download your resized image, select the file, then choose Download.

AWS CLI

To test your Lambda function using the Amazon S3 trigger (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. Your test image can be a .jpg or .png file.
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg

Verify that your function has created a thumbnail version of your image and stored it in your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files.

{
  "Contents": [
    {
      "Key": "resized-HappyFace.jpg",
      "LastModified": "2023-06-07T00:15:50+00:00",
      "ETag": "\"7781a43e765a8301713f533d70968a1e\"",
      "Size": 2763,
      "StorageClass": "STANDARD"
    },
    {
      "Key": "resized-SmileyFace.jpg",
      "LastModified": "2023-06-07T00:13:18+00:00",
      "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"",
      "Size": 1245,
      "StorageClass": "STANDARD"
    }
  ]
}

Clean up your resources

You can now delete the resources you created for this tutorial, unless you want to retain them. By deleting AWS resources you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.

Choose the function you created.

Choose Actions, Delete.

Type confirm in the text input field and choose Delete.

To delete the policy you created

Open the Policies page of the IAM console.

Choose the policy you created (AWSLambdaS3Policy).

Choose Policy actions, Delete.

Choose Delete.

To delete the execution role

Open the Roles page of the IAM console.

Select the execution role you created.

Choose Delete.

Enter the name of the role in the text input field and choose Delete.

To delete the S3 buckets

Open the Amazon S3 console.

Select the bucket you created.

Choose Delete.

Enter the name of the bucket in the text input field.

Choose Delete bucket.
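As a reference note on the dummy event used earlier in this tutorial: the handler's object-key handling (URL-decoding the key from the S3 event, then deriving the destination bucket and key names) can be reproduced locally with just the Python standard library. This is an illustrative sketch only, not part of the tutorial's deployment steps; the bucket and key values are taken from the dummy event, and no AWS calls are made.

```python
import json
from urllib.parse import unquote_plus

# Mirrors the key handling in the tutorial's handler, using the
# dummy event's values. Illustration only; no AWS calls are made.
event = json.loads("""
{
  "Records": [
    {"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
            "object": {"key": "test%2Fkey"}}}
  ]
}
""")

record = event["Records"][0]["s3"]
src_bucket = record["bucket"]["name"]
src_key = unquote_plus(record["object"]["key"])   # "%2F" decodes to "/"
dst_bucket = src_bucket + "-resized"              # destination bucket suffix
dst_key = "resized-" + src_key                    # thumbnail key prefix

print(src_bucket, src_key, dst_bucket, dst_key)
```

Running this shows why the decoded key matters: S3 delivers `test%2Fkey` in the event, but the object the function downloads and the thumbnail it uploads both use the decoded name `test/key`.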
| 2026-01-13T09:30:35
https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/setting-up-cloudfront.html | Set up your AWS account - Amazon CloudFront Documentation Amazon CloudFront Developer Guide Set up your AWS account This topic describes preliminary steps, such as creating an AWS account, to prepare you to use Amazon CloudFront. Topics Sign up for an AWS account Create a user with administrative access Choose how to access CloudFront Sign up for an AWS account If you do not have an AWS account, complete the following steps to create one. To sign up for an AWS account Open https://portal.aws.amazon.com/billing/signup . Follow the online instructions. Part of the sign-up procedure involves receiving a phone call or text message and entering a verification code on the phone keypad. When you sign up for an AWS account, an AWS account root user is created. The root user has access to all AWS services and resources in the account. As a security best practice, assign administrative access to a user, and use only the root user to perform tasks that require root user access . AWS sends you a confirmation email after the sign-up process is complete. At any time, you can view your current account activity and manage your account by going to https://aws.amazon.com/ and choosing My Account . Create a user with administrative access After you sign up for an AWS account, secure your AWS account root user, enable AWS IAM Identity Center, and create an administrative user so that you don't use the root user for everyday tasks. Secure your AWS account root user Sign in to the AWS Management Console as the account owner by choosing Root user and entering your AWS account email address. On the next page, enter your password. For help signing in by using root user, see Signing in as the root user in the AWS Sign-In User Guide .
Turn on multi-factor authentication (MFA) for your root user. For instructions, see Enable a virtual MFA device for your AWS account root user (console) in the IAM User Guide . Create a user with administrative access Enable IAM Identity Center. For instructions, see Enabling AWS IAM Identity Center in the AWS IAM Identity Center User Guide . In IAM Identity Center, grant administrative access to a user. For a tutorial about using the IAM Identity Center directory as your identity source, see Configure user access with the default IAM Identity Center directory in the AWS IAM Identity Center User Guide . Sign in as the user with administrative access To sign in with your IAM Identity Center user, use the sign-in URL that was sent to your email address when you created the IAM Identity Center user. For help signing in using an IAM Identity Center user, see Signing in to the AWS access portal in the AWS Sign-In User Guide . Assign access to additional users In IAM Identity Center, create a permission set that follows the best practice of applying least-privilege permissions. For instructions, see Create a permission set in the AWS IAM Identity Center User Guide . Assign users to a group, and then assign single sign-on access to the group. For instructions, see Add groups in the AWS IAM Identity Center User Guide . Choose how to access CloudFront You can access Amazon CloudFront in the following ways: AWS Management Console – The procedures throughout this guide explain how to use the AWS Management Console to perform tasks. AWS SDKs – If you're using a programming language that AWS provides an SDK for, you can use an SDK to access CloudFront. SDKs simplify authentication, integrate easily with your development environment, and provide access to CloudFront commands. For more information, see Using CloudFront with an AWS SDK . 
CloudFront API – If you're using a programming language that an SDK isn't available for, see the Amazon CloudFront API Reference for information about API actions and about how to make API requests. AWS CLI – The AWS Command Line Interface (AWS CLI) is a unified tool for managing AWS services. For information about how to install and configure the AWS CLI, see Install or update to the latest version of the AWS CLI in the AWS Command Line Interface User Guide . Tools for Windows PowerShell – If you have experience with Windows PowerShell, you might prefer to use AWS Tools for Windows PowerShell. For more information, see Installing the AWS Tools for PowerShell in the AWS Tools for PowerShell User Guide .
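The SDK route above can be sketched with boto3, the AWS SDK for Python. The `list_distributions` call and the `DistributionList` response shape come from the CloudFront API; the helper function, its name, and the fake client below are illustrative assumptions added so the sketch can run without AWS credentials. With real credentials you would pass `boto3.client('cloudfront')` instead of the fake client.

```python
# Hedged sketch of accessing CloudFront through an SDK client.
# The client is injected as a parameter so the helper can be
# demonstrated without credentials; with boto3 installed you would
# call list_distribution_ids(boto3.client('cloudfront')).
def list_distribution_ids(cloudfront_client):
    """Return the IDs of all distributions visible to the client."""
    response = cloudfront_client.list_distributions()
    # 'Items' is absent when the account has no distributions.
    items = response.get('DistributionList', {}).get('Items', [])
    return [d['Id'] for d in items]

# Stand-in client with the same call shape, for local demonstration only.
class FakeCloudFrontClient:
    def list_distributions(self):
        return {'DistributionList': {'Quantity': 1,
                                     'Items': [{'Id': 'EDFDVBD6EXAMPLE'}]}}

print(list_distribution_ids(FakeCloudFrontClient()))
```

Injecting the client rather than constructing it inside the helper also makes the function straightforward to unit-test, which is the pattern used here.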
https://www.youtube.com/watch?v=Dr3CcSPRunA | SuperHappyDevHouse 34 Interviews, Part Two - YouTube
표시","style":"BUTTON_VIEW_MODEL_STYLE_MONO","trackingParams":"CJYCEKiPCSITCPfCzKWaiJIDFfxTeAAdDioP7w==","isFullWidth":false,"type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_DEFAULT","accessibilityId":"id.video.dislike.button","tooltip":"이 동영상이 마음에 들지 않습니다."}},"trackingParams":"CI8CEMyrARgAIhMI98LMpZqIkgMV_FN4AB0OKg_v","isTogglingDisabled":true}},"dislikeEntityKey":"EgtEcjNDY1NQUnVuQSA-KAE%3D"}},"iconType":"LIKE_ICON_TYPE_UNKNOWN","likeCountEntity":{"key":"unset_like_count_entity_key"},"dynamicLikeCountUpdateData":{"updateStatusKey":"like_count_update_status_key","placeholderLikeCountValuesKey":"like_count_placeholder_values_key","updateDelayLoopId":"like_count_update_delay_loop_id","updateDelaySec":5},"teasersOrderEntityKey":"EgtEcjNDY1NQUnVuQSD8AygB"}},{"buttonViewModel":{"iconName":"SHARE","title":"공유","onTap":{"serialCommand":{"commands":[{"logGestureCommand":{"gestureType":"GESTURE_EVENT_TYPE_LOG_GENERIC_CLICK","trackingParams":"CJQCEOWWARgCIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},{"innertubeCommand":{"clickTrackingParams":"CJQCEOWWARgCIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/share/get_share_panel"}},"shareEntityServiceEndpoint":{"serializedShareEntity":"CgtEcjNDY1NQUnVuQaABAQ%3D%3D","commands":[{"clickTrackingParams":"CJQCEOWWARgCIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","openPopupAction":{"popup":{"unifiedSharePanelRenderer":{"trackingParams":"CJUCEI5iIhMI98LMpZqIkgMV_FN4AB0OKg_v","showLoadingSpinner":true}},"popupType":"DIALOG","beReused":true}}]}}}]}},"accessibilityText":"공유","style":"BUTTON_VIEW_MODEL_STYLE_MONO","trackingParams":"CJQCEOWWARgCIhMI98LMpZqIkgMV_FN4AB0OKg_v","isFullWidth":false,"type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_DEFAULT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE","accessibilityId":"id.video.share.button","tooltip":"공유"}}],"accessibility":{"accessibilityData":{"label":"추가 
작업"}},"flexibleItems":[{"menuFlexibleItemRenderer":{"menuItem":{"menuServiceItemRenderer":{"text":{"runs":[{"text":"저장"}]},"icon":{"iconType":"PLAYLIST_ADD"},"serviceEndpoint":{"clickTrackingParams":"CJICEOuQCSITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"ignoreNavigation":true}},"modalEndpoint":{"modal":{"modalWithTitleAndButtonRenderer":{"title":{"runs":[{"text":"나중에 다시 보고 싶으신가요?"}]},"content":{"runs":[{"text":"로그인하여 동영상을 재생목록에 추가하세요."}]},"button":{"buttonRenderer":{"style":"STYLE_MONO_FILLED","size":"SIZE_DEFAULT","isDisabled":false,"text":{"simpleText":"로그인"},"navigationEndpoint":{"clickTrackingParams":"CJMCEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"url":"https://accounts.google.com/ServiceLogin?service=youtube\u0026uilel=3\u0026passive=true\u0026continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Dko%26next%3D%252Fwatch%253Fv%253DDr3CcSPRunA\u0026hl=ko\u0026ec=66427","webPageType":"WEB_PAGE_TYPE_UNKNOWN","rootVe":83769}},"signInEndpoint":{"nextEndpoint":{"clickTrackingParams":"CJMCEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc="}}}}},"idamTag":"66427"}},"trackingParams":"CJMCEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP7w=="}}}}}},"trackingParams":"CJICEOuQCSITCPfCzKWaiJIDFfxTeAAdDioP7w=="}},"topLevelButton":{"buttonViewModel":{"iconName":"PLAYLIST_ADD","title":"
저장","onTap":{"serialCommand":{"commands":[{"logGestureCommand":{"gestureType":"GESTURE_EVENT_TYPE_LOG_GENERIC_CLICK","trackingParams":"CJACEOuQCSITCPfCzKWaiJIDFfxTeAAdDioP7w=="}},{"innertubeCommand":{"clickTrackingParams":"CJACEOuQCSITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"ignoreNavigation":true}},"modalEndpoint":{"modal":{"modalWithTitleAndButtonRenderer":{"title":{"runs":[{"text":"나중에 다시 보고 싶으신가요?"}]},"content":{"runs":[{"text":"로그인하여 동영상을 재생목록에 추가하세요."}]},"button":{"buttonRenderer":{"style":"STYLE_MONO_FILLED","size":"SIZE_DEFAULT","isDisabled":false,"text":{"simpleText":"로그인"},"navigationEndpoint":{"clickTrackingParams":"CJECEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"url":"https://accounts.google.com/ServiceLogin?service=youtube\u0026uilel=3\u0026passive=true\u0026continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Dko%26next%3D%252Fwatch%253Fv%253DDr3CcSPRunA\u0026hl=ko\u0026ec=66427","webPageType":"WEB_PAGE_TYPE_UNKNOWN","rootVe":83769}},"signInEndpoint":{"nextEndpoint":{"clickTrackingParams":"CJECEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc="}}}}},"idamTag":"66427"}},"trackingParams":"CJECEPuGBCITCPfCzKWaiJIDFfxTeAAdDioP7w=="}}}}}}}]}},"accessibilityText":"재생목록에 
저장","style":"BUTTON_VIEW_MODEL_STYLE_MONO","trackingParams":"CJACEOuQCSITCPfCzKWaiJIDFfxTeAAdDioP7w==","isFullWidth":false,"type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_DEFAULT","tooltip":"저장"}}}}]}},"trackingParams":"CI8CEMyrARgAIhMI98LMpZqIkgMV_FN4AB0OKg_v","dateText":{"simpleText":"2009. 8. 23."},"relativeDateText":{"accessibility":{"accessibilityData":{"label":"16년 전"}},"simpleText":"16년 전"}}},{"videoSecondaryInfoRenderer":{"owner":{"videoOwnerRenderer":{"thumbnail":{"thumbnails":[{"url":"https://yt3.ggpht.com/ytc/AIdro_mHw7HusQPzx3ygjYTPVtwu03IL1hIKrV-D50mfjR2SIZY=s48-c-k-c0x00ffffff-no-rj","width":48,"height":48},{"url":"https://yt3.ggpht.com/ytc/AIdro_mHw7HusQPzx3ygjYTPVtwu03IL1hIKrV-D50mfjR2SIZY=s88-c-k-c0x00ffffff-no-rj","width":88,"height":88},{"url":"https://yt3.ggpht.com/ytc/AIdro_mHw7HusQPzx3ygjYTPVtwu03IL1hIKrV-D50mfjR2SIZY=s176-c-k-c0x00ffffff-no-rj","width":176,"height":176}]},"title":{"runs":[{"text":"Dave Briccetti","navigationEndpoint":{"clickTrackingParams":"CI4CEOE5IhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"/@DaveBriccetti","webPageType":"WEB_PAGE_TYPE_CHANNEL","rootVe":3611,"apiUrl":"/youtubei/v1/browse"}},"browseEndpoint":{"browseId":"UCsvS1__wPMXEPbtFzgpX3nQ","canonicalBaseUrl":"/@DaveBriccetti"}}}]},"subscriptionButton":{"type":"FREE"},"navigationEndpoint":{"clickTrackingParams":"CI4CEOE5IhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"/@DaveBriccetti","webPageType":"WEB_PAGE_TYPE_CHANNEL","rootVe":3611,"apiUrl":"/youtubei/v1/browse"}},"browseEndpoint":{"browseId":"UCsvS1__wPMXEPbtFzgpX3nQ","canonicalBaseUrl":"/@DaveBriccetti"}},"subscriberCountText":{"accessibility":{"accessibilityData":{"label":"구독자 2.51천명"}},"simpleText":"구독자 
2.51천명"},"trackingParams":"CI4CEOE5IhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"subscribeButton":{"subscribeButtonRenderer":{"buttonText":{"runs":[{"text":"구독"}]},"subscribed":false,"enabled":true,"type":"FREE","channelId":"UCsvS1__wPMXEPbtFzgpX3nQ","showPreferences":false,"subscribedButtonText":{"runs":[{"text":"구독중"}]},"unsubscribedButtonText":{"runs":[{"text":"구독"}]},"trackingParams":"CIACEJsrIhMI98LMpZqIkgMV_FN4AB0OKg_vKPgdMgV3YXRjaA==","unsubscribeButtonText":{"runs":[{"text":"구독 취소"}]},"subscribeAccessibility":{"accessibilityData":{"label":"Dave Briccetti을(를) 구독합니다."}},"unsubscribeAccessibility":{"accessibilityData":{"label":"Dave Briccetti을(를) 구독 취소합니다."}},"notificationPreferenceButton":{"subscriptionNotificationToggleButtonRenderer":{"states":[{"stateId":3,"nextStateId":3,"state":{"buttonRenderer":{"style":"STYLE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"icon":{"iconType":"NOTIFICATIONS_NONE"},"accessibility":{"label":"현재 설정은 맞춤설정 알림 수신입니다. Dave Briccetti 채널의 알림 설정을 변경하려면 탭하세요."},"trackingParams":"CI0CEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","accessibilityData":{"accessibilityData":{"label":"현재 설정은 맞춤설정 알림 수신입니다. Dave Briccetti 채널의 알림 설정을 변경하려면 탭하세요."}}}}},{"stateId":0,"nextStateId":0,"state":{"buttonRenderer":{"style":"STYLE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"icon":{"iconType":"NOTIFICATIONS_OFF"},"accessibility":{"label":"현재 설정은 알림 수신 안함입니다. Dave Briccetti 채널의 알림 설정을 변경하려면 탭하세요."},"trackingParams":"CIwCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","accessibilityData":{"accessibilityData":{"label":"현재 설정은 알림 수신 안함입니다. 
Dave Briccetti 채널의 알림 설정을 변경하려면 탭하세요."}}}}}],"currentStateId":3,"trackingParams":"CIUCEJf5ASITCPfCzKWaiJIDFfxTeAAdDioP7w==","command":{"clickTrackingParams":"CIUCEJf5ASITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandExecutorCommand":{"commands":[{"clickTrackingParams":"CIUCEJf5ASITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","openPopupAction":{"popup":{"menuPopupRenderer":{"items":[{"menuServiceItemRenderer":{"text":{"simpleText":"맞춤설정"},"icon":{"iconType":"NOTIFICATIONS_NONE"},"serviceEndpoint":{"clickTrackingParams":"CIsCEOy1BBgDIhMI98LMpZqIkgMV_FN4AB0OKg_vMhJQUkVGRVJFTkNFX0RFRkFVTFTKAQTLOVDE","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/notification/modify_channel_preference"}},"modifyChannelNotificationPreferenceEndpoint":{"params":"ChhVQ3N2UzFfX3dQTVhFUGJ0RnpncFgzblESAggBGAAgBFITCgIIAxILRHIzQ2NTUFJ1bkEYAA%3D%3D"}},"trackingParams":"CIsCEOy1BBgDIhMI98LMpZqIkgMV_FN4AB0OKg_v","isSelected":true}},{"menuServiceItemRenderer":{"text":{"simpleText":"없음"},"icon":{"iconType":"NOTIFICATIONS_OFF"},"serviceEndpoint":{"clickTrackingParams":"CIoCEO21BBgEIhMI98LMpZqIkgMV_FN4AB0OKg_vMhtQUkVGRVJFTkNFX05PX05PVElGSUNBVElPTlPKAQTLOVDE","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/notification/modify_channel_preference"}},"modifyChannelNotificationPreferenceEndpoint":{"params":"ChhVQ3N2UzFfX3dQTVhFUGJ0RnpncFgzblESAggDGAAgBFITCgIIAxILRHIzQ2NTUFJ1bkEYAA%3D%3D"}},"trackingParams":"CIoCEO21BBgEIhMI98LMpZqIkgMV_FN4AB0OKg_v","isSelected":false}},{"menuServiceItemRenderer":{"text":{"runs":[{"text":"구독 
취소"}]},"icon":{"iconType":"PERSON_MINUS"},"serviceEndpoint":{"clickTrackingParams":"CIYCENuLChgFIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CIYCENuLChgFIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","openPopupAction":{"popup":{"confirmDialogRenderer":{"trackingParams":"CIcCEMY4IhMI98LMpZqIkgMV_FN4AB0OKg_v","dialogMessages":[{"runs":[{"text":"Dave Briccetti"},{"text":" 구독을 취소하시겠습니까?"}]}],"confirmButton":{"buttonRenderer":{"style":"STYLE_BLUE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"text":{"runs":[{"text":"구독 취소"}]},"serviceEndpoint":{"clickTrackingParams":"CIkCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vMgV3YXRjaMoBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/subscription/unsubscribe"}},"unsubscribeEndpoint":{"channelIds":["UCsvS1__wPMXEPbtFzgpX3nQ"],"params":"CgIIAxILRHIzQ2NTUFJ1bkEYAA%3D%3D"}},"accessibility":{"label":"구독 취소"},"trackingParams":"CIkCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"cancelButton":{"buttonRenderer":{"style":"STYLE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"text":{"runs":[{"text":"취소"}]},"accessibility":{"label":"취소"},"trackingParams":"CIgCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"primaryIsCancel":false}},"popupType":"DIALOG"}}]}},"trackingParams":"CIYCENuLChgFIhMI98LMpZqIkgMV_FN4AB0OKg_v"}}]}},"popupType":"DROPDOWN"}}]}},"targetId":"notification-bell","secondaryIcon":{"iconType":"EXPAND_MORE"}}},"targetId":"watch-subscribe","signInEndpoint":{"clickTrackingParams":"CIACEJsrIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"ignoreNavigation":true}},"modalEndpoint":{"modal":{"modalWithTitleAndButtonRenderer":{"title":{"simpleText":"채널을 구독하시겠습니까?"},"content":{"simpleText":"채널을 구독하려면 
로그인하세요."},"button":{"buttonRenderer":{"style":"STYLE_MONO_FILLED","size":"SIZE_DEFAULT","isDisabled":false,"text":{"simpleText":"로그인"},"navigationEndpoint":{"clickTrackingParams":"CIQCEP2GBCITCPfCzKWaiJIDFfxTeAAdDioP7zIJc3Vic2NyaWJlygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"https://accounts.google.com/ServiceLogin?service=youtube\u0026uilel=3\u0026passive=true\u0026continue=https%3A%2F%2Fwww.youtube.com%2Fsignin%3Faction_handle_signin%3Dtrue%26app%3Ddesktop%26hl%3Dko%26next%3D%252Fwatch%253Fv%253DDr3CcSPRunA%26continue_action%3DQUFFLUhqa0ljTVZZTGc2SkltY0x6OVhFLURFZXRmZDVLQXxBQ3Jtc0tuYm9OQU9EWHJEa0YwbHRiWXQxTHBoT2paVzFlUnNDX2cxa0RkTFRYdXFKUnh6UDZKQ3pkakY1MEh2MHl3Vnk2cmhzQTNicS1SVFVHbVlhX1hyR3d6LVVHSUcycEFLOUcxOVVtOTVpRGtoek5XOTVqdTJCMVJmZEhGeHRRZDU4dmVZZDBPUENBM1NOZ28zd1hjQ2FkaEpaa3p4TURteWJZN2lRS2VpUlJCS2JIcE5Cek1EREx2eGM3OUI3UHdTb1pZVUZ0dE0\u0026hl=ko\u0026ec=66429","webPageType":"WEB_PAGE_TYPE_UNKNOWN","rootVe":83769}},"signInEndpoint":{"nextEndpoint":{"clickTrackingParams":"CIQCEP2GBCITCPfCzKWaiJIDFfxTeAAdDioP78oBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=Dr3CcSPRunA","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"Dr3CcSPRunA","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr3---sn-ab02a0nfpgxapox-bh2es.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=0ebdc27123d1ba70\u0026ip=1.208.108.242\u0026initcwndbps=3117500\u0026mt=1768296317\u0026oweuc="}}}}},"continueAction":"QUFFLUhqa0ljTVZZTGc2SkltY0x6OVhFLURFZXRmZDVLQXxBQ3Jtc0tuYm9OQU9EWHJEa0YwbHRiWXQxTHBoT2paVzFlUnNDX2cxa0RkTFRYdXFKUnh6UDZKQ3pkakY1MEh2MHl3Vnk2cmhzQTNicS1SVFVHbVlhX1hyR3d6LVVHSUcycEFLOUcxOVVtOTVpRGtoek5XOTVqdTJCMVJmZEhGeHRRZDU4dmVZZDBPUENBM1NOZ28zd1hjQ2FkaEpaa3p4TURteWJZN2lRS2VpUlJCS2JIcE5Cek1EREx2eGM3OUI3UHdTb1pZVUZ0d
E0","idamTag":"66429"}},"trackingParams":"CIQCEP2GBCITCPfCzKWaiJIDFfxTeAAdDioP7w=="}}}}}},"subscribedEntityKey":"EhhVQ3N2UzFfX3dQTVhFUGJ0RnpncFgzblEgMygB","onSubscribeEndpoints":[{"clickTrackingParams":"CIACEJsrIhMI98LMpZqIkgMV_FN4AB0OKg_vKPgdMgV3YXRjaMoBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/subscription/subscribe"}},"subscribeEndpoint":{"channelIds":["UCsvS1__wPMXEPbtFzgpX3nQ"],"params":"EgIIAxgAIgtEcjNDY1NQUnVuQQ%3D%3D"}}],"onUnsubscribeEndpoints":[{"clickTrackingParams":"CIACEJsrIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CIACEJsrIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","openPopupAction":{"popup":{"confirmDialogRenderer":{"trackingParams":"CIECEMY4IhMI98LMpZqIkgMV_FN4AB0OKg_v","dialogMessages":[{"runs":[{"text":"Dave Briccetti"},{"text":" 구독을 취소하시겠습니까?"}]}],"confirmButton":{"buttonRenderer":{"style":"STYLE_BLUE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"text":{"runs":[{"text":"구독 취소"}]},"serviceEndpoint":{"clickTrackingParams":"CIMCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vKPgdMgV3YXRjaMoBBMs5UMQ=","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/subscription/unsubscribe"}},"unsubscribeEndpoint":{"channelIds":["UCsvS1__wPMXEPbtFzgpX3nQ"],"params":"CgIIAxILRHIzQ2NTUFJ1bkEYAA%3D%3D"}},"accessibility":{"label":"구독 취소"},"trackingParams":"CIMCEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"cancelButton":{"buttonRenderer":{"style":"STYLE_TEXT","size":"SIZE_DEFAULT","isDisabled":false,"text":{"runs":[{"text":"취소"}]},"accessibility":{"label":"취소"},"trackingParams":"CIICEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"primaryIsCancel":false}},"popupType":"DIALOG"}}]}}]}},"metadataRowContainer":{"metadataRowContainerRenderer":{"rows":[{"metadataRowRenderer":{"title":{"runs":[{"text":"라이선스"}]},"contents":[{"runs":[{"text":"크리에이티브 커먼즈 저작자 표시 라이선스(재사용 
허용)","navigationEndpoint":{"clickTrackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"https://www.youtube.com/t/creative_commons","webPageType":"WEB_PAGE_TYPE_UNKNOWN","rootVe":83769}},"urlEndpoint":{"url":"https://www.youtube.com/t/creative_commons"}}}]}],"trackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_v"}}],"collapsedItemCount":0,"trackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"showMoreText":{"simpleText":"...더보기"},"showLessText":{"simpleText":"간략히"},"trackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_v","defaultExpanded":false,"descriptionCollapsedLines":3,"showMoreCommand":{"clickTrackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandExecutorCommand":{"commands":[{"clickTrackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","changeEngagementPanelVisibilityAction":{"targetId":"engagement-panel-structured-description","visibility":"ENGAGEMENT_PANEL_VISIBILITY_EXPANDED"}},{"clickTrackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","scrollToEngagementPanelCommand":{"targetId":"engagement-panel-structured-description"}}]}},"showLessCommand":{"clickTrackingParams":"CP8BEM2rARgBIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","changeEngagementPanelVisibilityAction":{"targetId":"engagement-panel-structured-description","visibility":"ENGAGEMENT_PANEL_VISIBILITY_HIDDEN"}},"attributedDescription":{"content":"Interviews with programmers and hardware hackers from SuperHappyDevHouse 34, part 2 of 2. \r\n\r\nKevin Gadd talks about his dad bringing home computers for him to play with, and learning to program them, and his career making videogames. Drew Perttula's dad brought home computers but no games, so Drew had to write his own. He got interested in visual effects and computer graphics. He works at DreamWorks Animation. Mike Lundy works for Nasa in Mountain View. He started programming around age 10 on a PCjr running BASIC. 
He works in the Intelligent Robotics Group at NASA. He hopes the software they have written will eventually run on the Moon or Mars.","styleRuns":[{"startIndex":0,"length":653,"styleRunExtensions":{"styleRunColorMapExtension":{"colorMap":[{"key":"USER_INTERFACE_THEME_DARK","value":4294967295},{"key":"USER_INTERFACE_THEME_LIGHT","value":4279440147}]}},"fontFamilyName":"Roboto"}]},"headerRuns":[{"startIndex":0,"length":653,"headerMapping":"ATTRIBUTED_STRING_HEADER_MAPPING_UNSPECIFIED"}]}},{"compositeVideoPrimaryInfoRenderer":{}},{"itemSectionRenderer":{"contents":[{"continuationItemRenderer":{"trigger":"CONTINUATION_TRIGGER_ON_ITEM_SHOWN","continuationEndpoint":{"clickTrackingParams":"CP4BELsvGAMiEwj3wsylmoiSAxX8U3gAHQ4qD-_KAQTLOVDE","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/next"}},"continuationCommand":{"token":"Eg0SC0RyM0NjU1BSdW5BGAYyJSIRIgtEcjNDY1NQUnVuQTAAeAJCEGNvbW1lbnRzLXNlY3Rpb24%3D","request":"CONTINUATION_REQUEST_TYPE_WATCH_NEXT"}}}}],"trackingParams":"CP4BELsvGAMiEwj3wsylmoiSAxX8U3gAHQ4qD-8=","sectionIdentifier":"comment-item-section","targetId":"comments-section"}}],"trackingParams":"CP0BELovIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},"secondaryResults":{"secondaryResults":{"results":[{"lockupViewModel":{"contentImage":{"thumbnailViewModel":{"image":{"sources":[{"url":"https://i.ytimg.com/vi/2sR6qs5qhls/hqdefault.jpg?sqp=-oaymwEiCKgBEF5IWvKriqkDFQgBFQAAAAAYASUAAMhCPQCAokN4AQ==\u0026rs=AOn4CLBRT4FQpJphOtexvsope4A1514HJA","width":168,"height":94},{"url":"https://i.ytimg.com/vi/2sR6qs5qhls/hqdefault.jpg?sqp=-oaymwEjCNACELwBSFryq4qpAxUIARUAAAAAGAElAADIQj0AgKJDeAE=\u0026rs=AOn4CLB9SjW_6Fts9qWDBhwm64LgEO4TaQ","width":336,"height":188}]},"overlays":[{"thumbnailOverlayBadgeViewModel":{"thumbnailBadges":[{"thumbnailBadgeViewModel":{"text":"15:31","badgeStyle":"THUMBNAIL_OVERLAY_BADGE_STYLE_DEFAULT","animationActivationTargetId":"2sR6qs5qhls","animationActivationEntityKey":"Eh8veW91dHViZS9hcHAvd2F0Y2gvcGxheWVyX3N0YXRlIMMCKAE%3D","lottieDat
a":{"url":"https://www.gstatic.com/youtube/img/lottie/audio_indicator/audio_indicator_v2.json","settings":{"loop":true,"autoplay":true}},"animatedText":"지금 재생 중","animationActivationEntitySelectorType":"THUMBNAIL_BADGE_ANIMATION_ENTITY_SELECTOR_TYPE_PLAYER_STATE","rendererContext":{"accessibilityContext":{"label":"15분 31초"}}}}],"position":"THUMBNAIL_OVERLAY_BADGE_POSITION_BOTTOM_END"}},{"thumbnailHoverOverlayToggleActionsViewModel":{"buttons":[{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"WATCH_LATER","onTap":{"innertubeCommand":{"clickTrackingParams":"CPwBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"addedVideoId":"2sR6qs5qhls","action":"ACTION_ADD_VIDEO"}]}}},"accessibilityText":"나중에 볼 동영상","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CPwBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"toggledButtonViewModel":{"buttonViewModel":{"iconName":"CHECK","onTap":{"innertubeCommand":{"clickTrackingParams":"CPsBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/browse/edit_playlist"}},"playlistEditEndpoint":{"playlistId":"WL","actions":[{"action":"ACTION_REMOVE_VIDEO_BY_VIDEO_ID","removedVideoId":"2sR6qs5qhls"}]}}},"accessibilityText":"추가됨","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CPsBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"isToggled":false,"trackingParams":"CPQBENTEDBgAIhMI98LMpZqIkgMV_FN4AB0OKg_v"}},{"toggleButtonViewModel":{"defaultButtonViewModel":{"buttonViewModel":{"iconName":"ADD_TO_QUEUE_TAIL","onTap":{"innertubeCommand":{"c
lickTrackingParams":"CPoBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true}},"signalServiceEndpoint":{"signal":"CLIENT_SIGNAL","actions":[{"clickTrackingParams":"CPoBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","addToPlaylistCommand":{"openMiniplayer":false,"openListPanel":true,"videoId":"2sR6qs5qhls","listType":"PLAYLIST_EDIT_LIST_TYPE_QUEUE","onCreateListCommand":{"clickTrackingParams":"CPoBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"sendPost":true,"apiUrl":"/youtubei/v1/playlist/create"}},"createPlaylistServiceEndpoint":{"videoIds":["2sR6qs5qhls"],"params":"CAQ%3D"}},"videoIds":["2sR6qs5qhls"],"videoCommand":{"clickTrackingParams":"CPoBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"/watch?v=2sR6qs5qhls","webPageType":"WEB_PAGE_TYPE_WATCH","rootVe":3832}},"watchEndpoint":{"videoId":"2sR6qs5qhls","watchEndpointSupportedOnesieConfig":{"html5PlaybackOnesieConfig":{"commonConfig":{"url":"https://rr2---sn-ab02a0nfpgxapox-bh2zr.googlevideo.com/initplayback?source=youtube\u0026oeis=1\u0026c=WEB\u0026oad=3200\u0026ovd=3200\u0026oaad=11000\u0026oavd=11000\u0026ocs=700\u0026oewis=1\u0026oputc=1\u0026ofpcc=1\u0026msp=1\u0026odepv=1\u0026id=dac47aaace6a865b\u0026ip=1.208.108.242\u0026initcwndbps=4391250\u0026mt=1768296317\u0026oweuc="}}}}}}}]}}},"accessibilityText":"현재 재생목록에 
추가","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CPoBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"toggledButtonViewModel":{"buttonViewModel":{"iconName":"CHECK","accessibilityText":"추가됨","style":"BUTTON_VIEW_MODEL_STYLE_OVERLAY_DARK","trackingParams":"CPkBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_v","type":"BUTTON_VIEW_MODEL_TYPE_TONAL","buttonSize":"BUTTON_VIEW_MODEL_SIZE_COMPACT","state":"BUTTON_VIEW_MODEL_STATE_ACTIVE"}},"isToggled":false,"trackingParams":"CPQBENTEDBgAIhMI98LMpZqIkgMV_FN4AB0OKg_v"}}]}}]}},"metadata":{"lockupMetadataViewModel":{"title":{"content":"1975년 세계 최초의 노트북은 아직도 돌아갈까? (직접 켜봄)"},"image":{"decoratedAvatarViewModel":{"avatar":{"avatarViewModel":{"image":{"sources":[{"url":"https://yt3.ggpht.com/Ua1M2HJt8ao-LhN09amkKirMSMckGxwHA7_yM2OPSg5jAWCWdbQrtqu2v3Fw7vm7bCkOc3Gg=s68-c-k-c0x00ffffff-no-rj","width":68,"height":68}]},"avatarImageSize":"AVATAR_SIZE_M"}},"a11yLabel":"긱블 Geekble 채널로 이동","rendererContext":{"commandContext":{"onTap":{"innertubeCommand":{"clickTrackingParams":"CPQBENTEDBgAIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","commandMetadata":{"webCommandMetadata":{"url":"/@geekble_kr","webPageType":"WEB_PAGE_TYPE_CHANNEL","rootVe":3611,"apiUrl":"/youtubei/v1/browse"}},"browseEndpoint":{"browseId":"UCp94pzrtA5wPyZazbDq0CXA","canonicalBaseUrl":"/@geekble_kr"}}}}}}},"metadata":{"contentMetadataViewModel":{"metadataRows":[{"metadataParts":[{"text":{"content":"긱블 
Geekble","styleRuns":[{"startIndex":10,"styleRunExtensions":{"styleRunColorMapExtension":{"colorMap":[{"key":"USER_INTERFACE_THEME_DARK","value":4289374890},{"key":"USER_INTERFACE_THEME_LIGHT","value":4284506208}]}}}],"attachmentRuns":[{"startIndex":10,"length":0,"element":{"type":{"imageType":{"image":{"sources":[{"clientResource":{"imageName":"CHECK_CIRCLE_FILLED"},"width":14,"height":14}]}}},"properties":{"layoutProperties":{"height":{"value":14,"unit":"DIMENSION_UNIT_POINT"},"width":{"value":14,"unit":"DIMENSION_UNIT_POINT"},"margin":{"left":{"value":4,"unit":"DIMENSION_UNIT_POINT"}}}}},"alignment":"ALIGNMENT_VERTICAL_CENTER"}]}}]},{"metadataParts":[{"text":{"content":"조회수 38만회"}},{"text":{"content":"9개월 전"}}]}],"delimiter":" • "}},"menuButton":{"buttonViewModel":{"iconName":"MORE_VERT","onTap":{"innertubeCommand":{"clickTrackingParams":"CPUBEPBbIhMI98LMpZqIkgMV_FN4AB0OKg_vygEEyzlQxA==","showSheetCommand":{"panelLoadingStrategy":{"inlineContent":{"sheetViewModel":{"content":{"listViewModel":{"listItems":[{"listItemViewModel":{"title":{"content":"현재 재생목록에 
Jan 5 , 13:51 CET Update - Joprojām saskaramies ar HoP nepieejamības problēmām. Aktīvi strādājam pie problēmas risināšanas. Diemžēl šobrīd ir grūti prognozēt atjaunošanās laiku. We are still experiencing issues with HoP unavailability. We are actively working to resolve the issue. Unfortunately, it is currently difficult to predict when it will be restored. Jan 5 , 12:51 CET Update - Joprojām saskaramies ar HoP nepieejamības problēmām. Aktīvi strādājam pie problēmas risināšanas. We are continuing to investigate this issue. Jan 5 , 11:48 CET Update - Joprojām saskaramies ar HoP nepieejamības problēmām. Aktīvi strādājam pie problēmas risināšanas. We are continuing to investigate this issue. Jan 5 , 10:45 CET Update - Joprojām saskaramies ar HoP nepieejamības problēmām. Strādājam pie risinājuma. We are continuing to investigate this issue. Jan 5 , 09:44 CET Investigating - Pašlaik saskaramies ar HoP problēmām. Strādājam pie risinājuma. We are having issues with Visma HoP right now. Working on it. Jan 5 , 08:41 CET Jan 5 , 2026 Jan 4 , 2026 No incidents reported. Jan 3 , 2026 No incidents reported. Jan 2 , 2026 No incidents reported. Jan 1 , 2026 Scheduled maintenance for Visma Horizon Completed - The scheduled maintenance has been completed. Jan 1 , 00:00 CET In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 31 , 21:30 CET Scheduled - To improve service quality, we will be undergoing scheduled maintenance during this time. Lai uzlabotu pakalpojumu kvalitāti, šajā laikā paredzēti profilakses darbi. Dec 29 , 12:00 CET Dec 31 , 2025 Scheduled maintenance for Payroll Norway Completed - The scheduled maintenance has been completed. Dec 31 , 17:35 CET In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary. Dec 31 , 14:35 CET Scheduled - We will be undergoing scheduled maintenance during this time, the system might be somewhat slower than usual. 
(xco) Dec 31 , 14:31 CET Dec 30 , 2025 Investigating an issue in Business NXT Resolved - The issue is now resolved and the service is up and running as normal. Dec 30 , 12:20 CET Monitoring - The service is up and running again. This issue should be solved for all customers. We will keep monitoring to ensure high quality. Dec 30 , 08:50 CET Investigating - We are currently investigating timeout issues with mutations in the API. Dec 30 , 08:32 CET ← Incident History Powered by Atlassian Statuspage Copyright © Visma Software International AS | 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-test-s3 | Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda

AWS Lambda Documentation, Developer Guide. Prerequisites · Create two Amazon S3 buckets · Upload a test image to your source bucket · Create a permissions policy · Create an execution role · Create the function deployment package · Create the Lambda function · Configure Amazon S3 to invoke the function · Test your Lambda function with a dummy event · Test your function using the Amazon S3 trigger · Clean up your resources

Translation provided by a machine translator. If the translated content conflicts with the original English version, defer to the English version.

Tutorial: Using an Amazon S3 trigger to create thumbnail images

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket. To complete this tutorial, you carry out the following steps: Create source and destination Amazon S3 buckets and upload a sample image. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket. Test your function, first with a dummy event, and then by uploading an image to your source bucket. By completing these steps, you'll learn how to use Lambda to carry out a file processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
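The resizing function you'll build later in this tutorial writes its output to a bucket named after the source with a "-resized" suffix, prefixes each output key with "resized-", and only handles .jpg and .png files. A minimal Python sketch of that naming convention (the helper function here is illustrative, not part of the tutorial code):

```python
import os

def thumbnail_destination(src_bucket: str, src_key: str):
    """Derive the destination bucket and key the tutorial's function
    writes to, or return None for unsupported file types."""
    ext = os.path.splitext(src_key)[1].lower().lstrip(".")
    if ext not in ("jpg", "png"):
        return None  # the tutorial's function logs and skips these
    return src_bucket + "-resized", "resized-" + src_key

print(thumbnail_destination("amzn-s3-demo-source-bucket", "HappyFace.jpg"))
# → ('amzn-s3-demo-source-bucket-resized', 'resized-HappyFace.jpg')
```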
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function. Topics: Prerequisites · Create two Amazon S3 buckets · Upload a test image to your source bucket · Create a permissions policy · Create an execution role · Create the function deployment package · Create the Lambda function · Configure Amazon S3 to invoke the function · Test your Lambda function with a dummy event · Test your function using the Amazon S3 trigger · Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language you want to use. If you have not yet installed the AWS Command Line Interface, follow the steps at Installing or updating the latest version of the AWS CLI to install it. This tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager. Note: In Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to save the resized thumbnail when you invoke your function. AWS Management Console — To create the Amazon S3 buckets (console): Open the Amazon S3 console and select the General purpose buckets page. Choose the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket. Under General configuration, do the following: For Bucket type, ensure General purpose is selected. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket. Repeat steps 1 to 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI — To create the Amazon S3 buckets (AWS CLI): Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the region you chose. Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console. To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose. AWS Management Console — To upload a test image to your source bucket (console): Open the Buckets page of the Amazon S3 console. Select the source bucket you created in the previous step. Choose Upload. Choose Add files and use the file selector to choose the object you want to upload. Choose Open, then choose Upload.

AWS CLI — To upload a test image to your source bucket (AWS CLI): From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs. AWS Management Console — To create the policy (console): Open the Policies page of the AWS Identity and Access Management (IAM) console. Choose Create policy. Choose the JSON tab, and then paste the following custom policy into the JSON editor.

{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] }

Choose Next. Under Policy details, for Policy name, enter LambdaS3Policy. Choose Create policy.
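As a quick sanity check before attaching a policy like the one above, you can load the JSON and confirm that every permission the function needs is granted by an Allow statement. This stdlib-only sketch is an illustration, not part of the tutorial:

```python
import json

# The tutorial's permissions policy, verbatim.
POLICY = """
{ "Version": "2012-10-17",
  "Statement": [
    { "Effect": "Allow",
      "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
      "Resource": "arn:aws:logs:*:*:*" },
    { "Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*" },
    { "Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*" }
  ] }
"""

def allowed_actions(policy_json: str) -> set:
    """Collect every action granted by an Allow statement."""
    actions = set()
    for stmt in json.loads(policy_json)["Statement"]:
        if stmt["Effect"] == "Allow":
            acts = stmt["Action"]
            actions.update([acts] if isinstance(acts, str) else acts)
    return actions

# The function needs S3 read/write plus CloudWatch Logs access.
required = {"s3:GetObject", "s3:PutObject", "logs:PutLogEvents"}
print(required <= allowed_actions(POLICY))  # → True
```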
AWS CLI — To create the policy (AWS CLI): Save the following JSON in a file named policy.json.

{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] }

From the directory in which you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to Amazon S3 buckets, you attach the permissions policy you created in the previous step. AWS Management Console — To create an execution role and attach your permissions policy (console): Open the Roles page of the IAM console. Choose Create role. For Trusted entity type, select AWS service, and for Use case, select Lambda. Choose Next. Add the permissions policy you created in the previous step by doing the following: In the policy search box, enter LambdaS3Policy. In the search results, select the check box for LambdaS3Policy. Choose Next. Under Role details, for Role name, enter LambdaS3Role. Choose Create role.

AWS CLI — To create an execution role and attach your permissions policy (AWS CLI): Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.

{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }

From the directory in which you saved the JSON trust policy document, run the following CLI command to create the execution role.

aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json

To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number.

aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy

Create the function deployment package

To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library. Node.js — To create the deployment package (Node.js): Create a directory named lambda-s3 for your function code and dependencies and navigate into it.

mkdir lambda-s3
cd lambda-s3

Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter.

npm init

Save the following function code in a file named index.mjs. Make sure to replace us-east-1 with the AWS Region in which you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = { Bucket: srcBucket, Key: srcKey };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;
    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

Create a .zip deployment package containing your function code and its dependencies. In macOS and Linux, run the following command.

zip -r function.zip .

In Windows, use your preferred zip utility to create a .zip file. Ensure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file. Python — To create the deployment package (Python): Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip will download and install a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package will work in the Lambda execution environment, regardless of the operating system and architecture of your local build machine. Create a .zip file containing your application code and the Pillow and Boto3 libraries. In Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

In Windows, use your preferred zip tool to create the lambda_function.zip file. Ensure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file. You can also create your deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function. AWS Management Console — To create the function (console): To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step. Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 buckets in. You can change your region using the drop-down list at the top of the screen. Choose Create function. Choose Author from scratch. In the Basic information section, do the following: For Function name, enter CreateThumbnail. For Runtime, select either Node.js 22.x or Python 3.12 according to the language you chose for your function. For Architecture, select x86_64. In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role. Select the LambdaS3Role you created earlier. Choose Create function. To upload the function code (console): In the Code source pane, choose Upload from. Choose .zip file. Choose Upload. In the file selector, select your .zip file and choose Open. Choose Save. AWS CLI — To create the function (AWS CLI): Run the CLI command for your chosen language. For the role parameter, ensure that you replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the region in which you created your Amazon S3 buckets. For Node.js, run the following command from the directory containing your function.zip file. (The runtime here matches the Node.js 22.x runtime used in the console instructions.)

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs22.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file. (The runtime here matches the Python 3.12 target used when building the deployment package.)

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.12 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to invoke the function

For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using the console or the AWS CLI. Important: This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account. AWS Management Console — To configure the Amazon S3 trigger (console): Open the Functions page of the Lambda console and choose your function (CreateThumbnail). Choose Add trigger. Choose S3. Under Bucket, select your source bucket. Under Event types, select All object create events. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
You can learn more about recursive invocation patterns in Lambda by reading Recursive patterns that cause run-away Lambda functions on Serverless Land. Choose Add. When you create a trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you select permission to invoke your function.

AWS CLI — To configure the Amazon S3 trigger (AWS CLI): For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name.

aws lambda add-permission --function-name CreateThumbnail \
--principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
--source-account 123456789012

The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket. Note: Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN). Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and region.
{ "LambdaFunctionConfigurations": [ { "Id": "CreateThumbnailEventConfiguration", "LambdaFunctionArn": "arn:aws:lambda: us-east-1:123456789012 :function:CreateThumbnail", "Events": [ "s3:ObjectCreated:Put" ] } ] } Jalankan perintah CLI berikut untuk menerapkan pengaturan notifikasi dalam file JSON yang Anda buat ke bucket sumber Anda. Ganti amzn-s3-demo-source-bucket dengan nama bucket sumber Anda sendiri. aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \ --notification-configuration file://notification.json Untuk mempelajari lebih lanjut tentang put-bucket-notification-configuration perintah dan notification-configuration opsi, lihat put-bucket-notification-configuration di Referensi Perintah AWS CLI . Uji fungsi Lambda Anda dengan acara dummy Sebelum menguji seluruh penyiapan dengan menambahkan file gambar ke bucket sumber Amazon S3, Anda menguji apakah fungsi Lambda berfungsi dengan benar dengan memanggilnya dengan acara dummy. Peristiwa di Lambda adalah dokumen berformat JSON yang berisi data untuk diproses fungsi Anda. Saat fungsi Anda dipanggil oleh Amazon S3, peristiwa yang dikirim ke fungsi berisi informasi seperti nama bucket, ARN bucket, dan kunci objek. Konsol Manajemen AWS Untuk menguji fungsi Lambda Anda dengan acara dummy (konsol) Buka halaman Fungsi konsol Lambda dan pilih fungsi Anda () CreateThumbnail . Pilih tab Uji . Untuk membuat acara pengujian, di panel acara Uji , lakukan hal berikut: Di bawah Uji tindakan peristiwa , pilih Buat acara baru . Untuk Nama peristiwa , masukkan myTestEvent . Untuk Template , pilih S3 Put . Ganti nilai untuk parameter berikut dengan nilai Anda sendiri. Untuk awsRegion , ganti us-east-1 dengan bucket Amazon S3 yang Wilayah AWS Anda buat. Untuk name , ganti amzn-s3-demo-bucket dengan nama bucket sumber Amazon S3 Anda sendiri. Untuk key , ganti test%2Fkey dengan nama file objek pengujian yang Anda unggah ke bucket sumber di langkah tersebut. 
Unggah gambar uji ke bucket sumber Anda { "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1" , "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket" , "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey" , "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Pilih Simpan . Di panel acara Uji , pilih Uji . Untuk memeriksa fungsi Anda telah membuat verison yang diubah ukurannya dari gambar Anda dan menyimpannya di bucket Amazon S3 target Anda, lakukan hal berikut: Buka halaman Bucket konsol Amazon S3. Pilih bucket target Anda dan konfirmasikan bahwa file yang diubah ukurannya tercantum di panel Objects . AWS CLI Untuk menguji fungsi Lambda Anda dengan acara dummy ()AWS CLI Simpan JSON berikut dalam file bernama dummyS3Event.json . Ganti nilai untuk parameter berikut dengan nilai Anda sendiri: Untuk awsRegion , ganti us-east-1 dengan bucket Amazon S3 yang Wilayah AWS Anda buat. Untuk name , ganti amzn-s3-demo-bucket dengan nama bucket sumber Amazon S3 Anda sendiri. Untuk key , ganti test%2Fkey dengan nama file objek pengujian yang Anda unggah ke bucket sumber di langkah tersebut. 
Upload a test image to your source bucket. { "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } From the directory where you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking Lambda functions. aws lambda invoke --function-name CreateThumbnail \ --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \ --payload file://dummyS3Event.json outputfile.txt The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see the global command-line options supported by the AWS CLI. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized You should see output like the following.
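The dummy event above has the same shape as the notification Amazon S3 sends for a real ObjectCreated:Put. A common first step inside a handler is pulling out the bucket name and object key; note that S3 URL-encodes keys in notifications, so test%2Fkey is really test/key. A minimal sketch in Python (the function name is illustrative, not part of the tutorial's code):

```python
import json
from urllib.parse import unquote_plus

def extract_bucket_and_key(event):
    """Return (bucket, key) from the first record of an S3 Put event.

    S3 URL-encodes object keys in event notifications, so 'test%2Fkey'
    must be decoded back to 'test/key' before calling the S3 API.
    """
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    key = unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

# Abridged version of the dummy event from the tutorial
dummy = json.loads("""
{"Records": [{"s3": {
    "bucket": {"name": "amzn-s3-demo-bucket"},
    "object": {"key": "test%2Fkey"}}}]}
""")
print(extract_bucket_and_key(dummy))  # ('amzn-s3-demo-bucket', 'test/key')
```

unquote_plus is used (rather than unquote) because S3 also encodes spaces in keys as plus signs.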
The Key parameter shows the filename of your resized image file. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-06T21:40:07+00:00", "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"", "Size": 2633, "StorageClass": "STANDARD" } ] } Test your function using the Amazon S3 trigger Now that you've confirmed your Lambda function is operating correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add an image to the source bucket, your Lambda function is invoked automatically. Your function creates a resized version of the file and stores it in your target bucket. AWS Management Console To test your Lambda function using the Amazon S3 trigger (console) To upload an image to your Amazon S3 bucket, do the following: Open the Buckets page in the Amazon S3 console and choose your source bucket. Choose Upload. Choose Add files and use the file picker to select the image file you want to upload. Your image object can be a .jpg or .png file. Choose Open, then choose Upload. Verify that Lambda has saved a resized version of the image file in the target bucket by doing the following: Navigate back to the Buckets page in the Amazon S3 console and choose your destination bucket. In the Objects pane, you should now see two resized image files, one from each test of your Lambda function. To download a resized image, select the file, then choose Download. AWS CLI To test your Lambda function using the Amazon S3 trigger (AWS CLI) From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. Your test image can be a .jpg or .png file.
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-07T00:15:50+00:00", "ETag": "\"7781a43e765a8301713f533d70968a1e\"", "Size": 2763, "StorageClass": "STANDARD" }, { "Key": "resized-SmileyFace.jpg", "LastModified": "2023-06-07T00:13:18+00:00", "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"", "Size": 1245, "StorageClass": "STANDARD" } ] } Clean up your resources You can now delete the resources you created for this tutorial, unless you want to keep them. By deleting AWS resources you're no longer using, you avoid unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Choose the function you created. Choose Actions, Delete. Type confirm in the text input field and choose Delete. To delete the policy you created Open the Policies page of the IAM console. Choose the policy you created (AWSLambdaS3Policy). Choose Policy actions, Delete. Choose Delete. To delete the execution role Open the Roles page of the IAM console. Choose the execution role you created. Choose Delete. Enter the role name in the text input field and choose Delete. To delete the S3 buckets Open the Amazon S3 console. Choose a bucket you created. Choose Delete. Enter the bucket name in the text input field. Choose Delete bucket.
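The list-objects-v2 check can also be done programmatically by filtering the listing for the thumbnail prefix. A small sketch, assuming the resized- prefix used by the tutorial's function and the shape of the sample output above (abridged):

```python
# Sample listing in the shape returned by list-objects-v2 (abridged
# from the tutorial's example output).
listing = {
    "Contents": [
        {"Key": "resized-HappyFace.jpg", "Size": 2763},
        {"Key": "resized-SmileyFace.jpg", "Size": 1245},
    ]
}

def resized_keys(listing, prefix="resized-"):
    """Return the object keys in the listing that carry the thumbnail prefix."""
    return sorted(obj["Key"] for obj in listing.get("Contents", [])
                  if obj["Key"].startswith(prefix))

print(resized_keys(listing))  # ['resized-HappyFace.jpg', 'resized-SmileyFace.jpg']
```

When querying a real bucket, the same filtering could be done server-side by passing --prefix resized- to list-objects-v2 instead.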
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/zh_cn/lambda/latest/dg/runtimes-update.html#runtime-management-controls | Understand how Lambda manages runtime version updates - AWS Lambda AWS Lambda Developer Guide Backward compatibility Runtime update modes Two-phase runtime version rollout Understand how Lambda manages runtime version updates Lambda keeps each managed runtime up to date with security updates, bug fixes, new features, performance enhancements, and support for minor releases. These runtime updates are published as runtime versions. Lambda applies runtime updates to functions by migrating the function from an earlier runtime version to a new runtime version. For functions using managed runtimes, Lambda applies runtime updates automatically by default. With automatic runtime updates, Lambda takes on the operational burden of patching the runtime versions. For most customers, automatic updates are the right choice. You can change this default behavior by configuring runtime management settings. Lambda also publishes each new runtime version as a container image. To update the runtime version of a container-based function, you must create a new container image from the updated base image and redeploy your function. Each runtime version is associated with a version number and an ARN (Amazon Resource Name). Runtime version numbers use a Lambda-defined numbering scheme, independent of the version numbers used by the programming language. Runtime version numbers aren't always sequential; for example, version 42 might be followed by version 45. The runtime version ARN is a unique identifier for each runtime version. You can view the ARN of your function's current runtime version in the Lambda console, or in the INIT_START line of the function logs. Runtime versions shouldn't be confused with runtime identifiers. Each runtime has a unique runtime identifier, such as python3.13 or nodejs22.x, corresponding to each major programming language release. Runtime versions describe the patch versions of an individual runtime. Note The ARN for the same runtime version number can vary between AWS Regions and CPU architectures. Topics Backward compatibility Runtime update modes Two-phase runtime version rollout Configure Lambda runtime management settings Roll back a Lambda runtime version Identify Lambda runtime version changes Understand the shared responsibility model for Lambda runtime management Control Lambda runtime update permissions for high-compliance applications Backward compatibility Lambda strives to provide runtime updates that are backward compatible with existing functions. However, as with software patching, in rare cases a runtime update can negatively affect an existing function. For example, a security patch can expose an underlying issue in an existing function that depends on the earlier, insecure behavior. When building and deploying functions, it's important to understand how dependencies are managed so you can avoid possible incompatibility with future runtime updates. For example, suppose your function depends on package A, which in turn depends on package B. Both packages are included in the Lambda runtime (for example, they might be part of an SDK or its dependencies, or part of the runtime's system libraries). Consider the following scenarios (Deployment / Patch-compatible / Reason): Package A: used from the runtime; Package B: used from the runtime — Yes — Future runtime updates to packages A and B are backward compatible. Package A: in the deployment package; Package B: in the deployment package — Yes — Your deployment takes precedence, so future runtime updates to packages A and B have no effect. Package A: in the deployment package; Package B: used from the runtime — Yes* — Future runtime updates to package B are backward compatible. *If A and B are tightly coupled, compatibility issues can arise. For example, the boto3 and botocore packages of the AWS SDK for Python should be deployed together. Package A: used from the runtime; Package B: in the deployment package — No — A future runtime update to package A might require a newer version of package B. However, the deployed version of package B takes precedence and might be incompatible with the updated version of package A. To stay compatible with future runtime updates, follow these best practices: Package all dependencies where possible: include all required libraries, including the AWS SDK and its dependencies, in your deployment package. This ensures a stable, compatible set of components. Use runtime-provided SDKs with care: rely on the SDK provided by the runtime only when you can't include additional packages (for example, when using the Lambda console code editor, or inline code in an AWS CloudFormation template). Avoid overriding system libraries: don't deploy custom operating-system libraries that might conflict with future runtime updates. Runtime update modes Lambda strives to provide runtime updates that are backward compatible with existing functions. However, as with software patching, in rare cases a runtime update can negatively affect an existing function. For example, a security patch can expose an underlying issue in an existing function that depends on the earlier, insecure behavior. In the rare case of a runtime version incompatibility, Lambda
runtime management controls help reduce the risk of impact to your workloads. For each function version ($LATEST or a published version), you can choose one of the following runtime update modes: Auto (default) – Automatically update to the most recent and secure runtime version using the two-phase runtime version rollout. We recommend this mode for most customers so that you always benefit from runtime updates. Function update – Update to the most recent and secure runtime version when you update your function. When you update your function, Lambda updates the function's runtime to the most recent and secure runtime version. This approach synchronizes runtime updates with function deployments, giving you control over when Lambda applies runtime updates. With this mode, you can detect and mitigate rare runtime update incompatibilities early. When using this mode, you must update your functions regularly to keep their runtime up to date. Manual – Manually update your runtime version. You specify the runtime version in your function configuration, and the function uses that runtime version indefinitely. In the rare case where a new runtime version is incompatible with an existing function, you can use this mode to roll the function back to an earlier runtime version. We don't recommend using Manual mode to try to achieve runtime consistency across deployments. For more information, see Roll back a Lambda runtime version. Responsibility for applying runtime updates to your functions varies with the runtime update mode you choose. For more information, see Understand the shared responsibility model for Lambda runtime management. Two-phase runtime version rollout Lambda rolls out new runtime versions in the following order: In the first phase, Lambda applies the new runtime version whenever you create or update a function. A function is updated when you call the UpdateFunctionCode or UpdateFunctionConfiguration API operations. In the second phase, Lambda updates any function that uses the Auto runtime update mode and hasn't yet been updated to the new runtime version. The overall duration of the rollout varies based on several factors, such as the severity of any security patches included in the runtime update. If you're actively developing and deploying your functions, you'll most likely pick up new runtime versions during the first phase. This synchronizes runtime updates with function updates. In the rare event that the latest runtime version negatively affects your application, this approach lets you take prompt corrective action. Functions that aren't under active development still get the operational benefit of automatic runtime updates during the second phase. This approach doesn't affect functions set to Function update or Manual mode. Functions using Function update mode receive the latest runtime updates only when you create or update them. Functions using Manual mode don't receive runtime updates. Lambda releases new runtime versions in a gradual, rolling fashion across AWS Regions. If your functions are set to Auto or Function update mode, functions deployed simultaneously to different Regions, or at different times in the same Region, may pick up different runtime versions. Customers who need guaranteed runtime version consistency across their environments should deploy their Lambda functions using container images. Manual mode is designed as a temporary mitigation for rolling back the runtime version in the rare event that a runtime version is incompatible with your function. | 2026-01-13T09:30:35 |
https://www.php.net/manual/uk/book.bc.php | PHP: BCMath - Manual update page now Downloads Documentation Get Involved Help Search docs Getting Started Introduction A simple tutorial Language Reference Basic syntax Types Variables Constants Expressions Operators Control Structures Functions Classes and Objects Namespaces Enumerations Errors Exceptions Fibers Generators Attributes References Explained Predefined Variables Predefined Exceptions Predefined Interfaces and Classes Predefined Attributes Context options and parameters Supported Protocols and Wrappers Security Introduction General considerations Installed as CGI binary Installed as an Apache module Session Security Filesystem Security Database Security Error Reporting User Submitted Data Hiding PHP Keeping Current Features HTTP authentication with PHP Cookies Sessions Handling file uploads Using remote files Connection handling Persistent Database Connections Command line usage Garbage Collection DTrace Dynamic Tracing Function Reference Affecting PHP's Behaviour Audio Formats Manipulation Authentication Services Command Line Specific Extensions Compression and Archive Extensions Cryptography Extensions Database Extensions Date and Time Related Extensions File System Related Extensions Human Language and Character Encoding Support Image Processing and Generation Mail Related Extensions Mathematical Extensions Non-Text MIME Output Process Control Extensions Other Basic Extensions Other Services Search Engine Extensions Server Specific Extensions Session Extensions Text Processing Variable and Type Related Extensions Web Services Windows Only Extensions XML Manipulation GUI Extensions Keyboard Shortcuts ? 
This help j Next menu item k Previous menu item g p Previous man page g n Next man page G Scroll to bottom g g Scroll to top g h Goto homepage g s Goto search (current page) / Focus search box Introduction » « Mathematical Extensions PHP Manual Function Reference Mathematical Extensions BCMath — Arbitrary precision mathematics Introduction Installing/Configuring Installation Runtime Configuration BC Math Functions bcadd — Add two arbitrary precision numbers bcceil — Round up arbitrary precision number bccomp — Compare two arbitrary precision numbers bcdiv — Divide two arbitrary precision numbers bcdivmod — Get the quotient and modulus of an arbitrary precision number bcfloor — Round down arbitrary precision number bcmod — Get modulus of an arbitrary precision number bcmul — Multiply two arbitrary precision numbers bcpow — Raise an arbitrary precision number to another bcpowmod — Raise an arbitrary precision number to another, reduced by a specified modulus bcround — Round arbitrary precision number bcscale — Set or get default scale parameter for all bc math functions bcsqrt — Get the square root of an arbitrary precision number bcsub — Subtract one arbitrary precision number from another User Contributed Notes 3 notes Hayley Watson ¶ 10 years ago This extension is an interface to the GNU implementation as a library of the Basic Calculator utility by Philip Nelson; hence the name. volek at adamv dot cz ¶ 10 years ago Note that when you use the implementation of factorial that ClaudiuS made, you get results even if you try to calculate the factorial of a number that you normally can't, e.g. 2.5, -2, etc. Here is a safer implementation: <?php /** * Calculates the factorial of a given number.
* @param string|int $num * @throws InvalidArgumentException * @return string */ function bcfact($num) { if (!filter_var($num, FILTER_VALIDATE_INT) || $num <= 0) { throw new InvalidArgumentException(sprintf('Argument must be natural number, "%s" given.', $num)); } for ($result = '1'; $num > 0; $num--) { $result = bcmul($result, $num); } return $result; } ?> ClaudiuS ¶ 12 years ago Needed to compute some permutations and found the BC extension great but poor on functions, so until this gets implemented here's the factorial function: <?php /* BC FACTORIAL * n! = n * (n-1) * (n-2) .. 1 [e.g. 5! = 5 * 4 * 3 * 2 * 1 = 120] */ function bcfact($n) { $factorial = $n; while (--$n > 1) { $factorial = bcmul($factorial, $n); } return $factorial; } print bcfact(50); // 30414093201713378043612608166064768844377641568960512000000000000 ?> Mathematical Extensions BCMath GMP Math Statistics Trader Copyright © 2001-2026 The PHP Documentation Group My PHP.net Contact Other PHP.net sites Privacy policy | 2026-01-13T09:30:35 |
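The guarded-factorial idea from the notes above translates directly to other languages; Python's built-in integers are already arbitrary precision, so no BC-style string arithmetic is needed. A minimal sketch (bc_fact is an illustrative name, not part of any library):

```python
def bc_fact(num):
    """Factorial with the same guard as the PHP note above:
    reject anything that is not a natural number (e.g. 2.5, -2)."""
    if not isinstance(num, int) or num <= 0:
        raise ValueError(f'Argument must be natural number, "{num}" given.')
    result = 1
    for n in range(2, num + 1):
        result *= n
    return result

print(bc_fact(5))  # 120
```

bc_fact(50) reproduces the 65-digit value printed by the PHP example, since Python integers grow without overflow.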
https://www.php.net/manual/uk/function.mailparse-uudecode-all.php | PHP: mailparse_uudecode_all - Manual
Mathematical Extensions » « mailparse_stream_encode PHP Manual Function Reference Mail Related Extensions Mailparse Mailparse Functions mailparse_uudecode_all (PECL mailparse >= 0.9.0) mailparse_uudecode_all — Scans the data from fp and extract each embedded uuencoded file Description mailparse_uudecode_all(resource $fp): array Scans the data from the given file pointer and extracts each embedded uuencoded file into a temporary file. Parameters fp A valid file pointer. Return Values Returns an array of associative arrays listing filename information. filename Path to the temporary file name created origfilename The original filename, for uuencoded parts only The first filename entry is the message body. The next entries are the decoded uuencoded files. Examples Example #1 mailparse_uudecode_all() example <?php $text = <<<EOD To: fred@example.com hello, this is some text hello. blah blah blah. begin 644 test.txt /=&AI<R!I<R!A('1E<W0* ` end EOD; $fp = tmpfile(); fwrite($fp, $text); $data = mailparse_uudecode_all($fp); echo "BODY\n"; readfile($data[0]["filename"]); echo "UUE ({$data[1]['origfilename']})\n"; readfile($data[1]["filename"]); // Clean up unlink($data[0]["filename"]); unlink($data[1]["filename"]); ?> The above example will output: BODY To: fred@example.com hello, this is some text hello. blah blah blah. UUE (test.txt) this is a test
User Contributed Notes 1 note mat at phpconsulting dot com ¶ 22 years ago As an alternative, uudecode() can be called as a static function as follows: $file =& Mail_mimeDecode::uudecode($some_text); This will return the following arrays: @param string Input body to look for attachments in @return array Decoded bodies, filenames and permissions | 2026-01-13T09:30:35 |
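The uuencoded payload in the example above can be decoded outside PHP as well, which is a handy way to check what mailparse_uudecode_all should produce. Python's binascii module decodes one uuencoded data line at a time; a short sketch using the data line from the example:

```python
import binascii

# One data line from the example's uuencoded attachment (between the
# "begin 644 test.txt" and "end" markers). The first character of a
# uuencoded line encodes the decoded byte count; binascii.a2b_uu
# decodes exactly one such line.
line = "/=&AI<R!I<R!A('1E<W0*"
data = binascii.a2b_uu(line)
print(data)  # b'this is a test\n'
```

This matches the "this is a test" body shown in the example's expected output.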
https://www.php.net/manual/uk/book.bzip2.php | PHP: Bzip2 - Manual
Introduction » « Compression and Archive Extensions PHP Manual Function Reference Compression and Archive Extensions Bzip2 Introduction Installing/Configuring Requirements Installation Resource Types Examples Bzip2 Functions bzclose — Close a bzip2 file bzcompress — Compress a string into bzip2 encoded data bzdecompress — Decompresses bzip2 encoded data bzerrno — Returns a bzip2 error number bzerror — Returns the bzip2 error number and error string in an array bzerrstr — Returns a bzip2 error string bzflush — Do nothing bzopen — Opens a bzip2 compressed file bzread — Binary safe bzip2 file read bzwrite — Binary safe bzip2 file write User Contributed Notes There are no user contributed notes for this page. Compression and Archive Extensions Bzip2 LZF Phar Rar Zip Zlib | 2026-01-13T09:30:35 |
https://www.php.net/manual/ja/book.bc.php | PHP: BC Math - Manual
Introduction » « Mathematical Extensions PHP Manual Function Reference Mathematical Extensions BCMath — Arbitrary precision mathematics Introduction Installing/Configuring Installation Runtime Configuration BC Math Functions bcadd — Add two arbitrary precision numbers bcceil — Round up arbitrary precision number bccomp — Compare two arbitrary precision numbers bcdiv — Divide two arbitrary precision numbers bcdivmod — Get the quotient and modulus of an arbitrary precision number bcfloor — Round down arbitrary precision number bcmod — Get modulus of an arbitrary precision number bcmul — Multiply two arbitrary precision numbers bcpow — Raise an arbitrary precision number to another bcpowmod — Raise an arbitrary precision number to another, reduced by a specified modulus bcround — Round arbitrary precision number bcscale — Set or get default scale parameter for all bc math functions bcsqrt — Get the square root of an arbitrary precision number bcsub — Subtract one arbitrary precision number from another BcMath\Number — The BcMath\Number class BcMath\Number::add — Adds an arbitrary precision number BcMath\Number::ceil — Rounds up an arbitrary precision number BcMath\Number::compare — Compares arbitrary precision numbers BcMath\Number::__construct — Creates a BcMath\Number object BcMath\Number::div — Divides an arbitrary precision number BcMath\Number::divmod — Gets the quotient and modulus of an arbitrary precision number BcMath\Number::floor — Rounds down an arbitrary precision number BcMath\Number::mod — Gets the modulus of an arbitrary precision number BcMath\Number::mul — Multiplies an arbitrary precision number BcMath\Number::pow — Raises an arbitrary precision number to a power BcMath\Number::powmod — Raises an arbitrary precision number to a power, reduced by a specified modulus BcMath\Number::round — Rounds an arbitrary precision number BcMath\Number::__serialize — Serializes a BcMath\Number object BcMath\Number::sqrt — Gets the square root of an arbitrary precision number BcMath\Number::sub — Subtracts an arbitrary precision number BcMath\Number::__toString — Converts a BcMath\Number object to a string BcMath\Number::__unserialize — Restores the data parameter as a BcMath\Number object User Contributed Notes 3 notes Hayley Watson ¶ 10 years ago This extension is an interface to the GNU implementation as a library of the Basic Calculator utility by Philip Nelson; hence the name.
Mathematical Extensions BC Math GMP Math Statistics Trader | 2026-01-13T09:30:35 |
https://www.php.net/manual/uk/function.readline-write-history.php | PHP: readline_write_history - Manual
Compression and Archive Extensions » « readline_redisplay PHP Manual Function Reference Command Line Specific Extensions Readline Readline Functions readline_write_history (PHP 4, PHP 5, PHP 7, PHP 8) readline_write_history — Writes the history Description readline_write_history(?string $filename = null): bool This function writes the command history to a file. Parameters filename Path to the saved file. Return Values Returns true on success or false on failure. Changelog Version Description 8.0.0 filename is nullable now. User Contributed Notes 1 note jonathan dot gotti at free dot fr ¶ 19 years ago readline_write_history() doesn't take care of the $_SERVER['HISTSIZE'] value; here's an example of how to handle a history file in your apps while respecting user preferences regarding history size. At the beginning of your script: <?php $history_file = $_SERVER['HOME'].'/.PHPinteractive_history'; # read history from previous session if (is_file($history_file)) readline_read_history($history_file); .... # your application's code ....
# put this at the end of your script to save history, taking care of $_SERVER['HISTSIZE'] if (readline_write_history($history_file)) { # trim history if too long $hist = readline_list_history(); if (($histsize = count($hist)) > $_SERVER['HISTSIZE']) { $hist = array_slice($hist, $histsize - $_SERVER['HISTSIZE']); # in PHP 5 you can replace these lines with file_put_contents() if ($fhist = fopen($history_file, 'w')) { fwrite($fhist, implode("\n", $hist)); fclose($fhist); } } } ?> | 2026-01-13T09:30:35 |
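The trimming logic in the note above (after saving, keep only the newest HISTSIZE entries) is easy to mirror in other languages. A hedged Python sketch working on a plain newline-separated history file (trim_history is an illustrative helper, not a standard API):

```python
import os
import tempfile

def trim_history(path, histsize):
    """Rewrite a newline-separated history file, keeping the newest entries.

    Mirrors the PHP note: if the file holds more than `histsize` lines,
    drop everything but the last `histsize` so the saved history
    respects the user's history-size limit.
    """
    with open(path, encoding="utf-8") as f:
        lines = f.read().splitlines()
    if len(lines) > histsize:
        with open(path, "w", encoding="utf-8") as f:
            f.write("\n".join(lines[-histsize:]) + "\n")

# Demo on a throwaway file with ten entries, trimmed to the last three.
fd, path = tempfile.mkstemp()
with os.fdopen(fd, "w") as f:
    f.write("\n".join(f"cmd{i}" for i in range(10)) + "\n")
trim_history(path, 3)
with open(path) as f:
    print(f.read().splitlines())  # ['cmd7', 'cmd8', 'cmd9']
os.remove(path)
```

Python's own readline module offers readline.set_history_length() for the same purpose when the history is managed in-process.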
https://docs.aws.amazon.com/it_it/lambda/latest/dg/with-s3-example.html | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda

AWS Lambda Documentation — Developer Guide

Contents: Create an Amazon S3 bucket · Upload a test object to your bucket · Create a permissions policy · Create an execution role · Create the Lambda function · Deploy the function code · Create the Amazon S3 trigger · Test your Lambda function · Clean up your resources · Next steps

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

This tutorial demonstrates how to:

Create an Amazon S3 bucket.
Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.
Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using the AWS Management Console only.

Create an Amazon S3 bucket

To create an Amazon S3 bucket

Open the Amazon S3 console and select the General purpose buckets page.
Choose the AWS Region closest to your geographic location.
You can change the region using the dropdown list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same region.
Choose Create bucket.
Under General configuration, do the following:
For Bucket type, make sure General purpose is selected.
For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
Leave all other options set to their default values and choose Create bucket.

Upload a test object to your bucket

To upload a test object

Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step.
Choose Upload.
Choose Add files and select the object you want to upload. You can select any file (for example, HappyFace.jpg).
Choose Open, then choose Upload.

Later in the tutorial, you'll test your Lambda function using this object.

Create a permissions policy

Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.

To create the policy

Open the Policies page of the IAM console.
Choose Create policy.
Choose the JSON tab, and then paste the following custom policy into the JSON editor.

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}

Choose Next: Tags.
Choose Next: Review.
Under Review policy, for the policy Name, enter s3-trigger-tutorial.
Choose Create policy.
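As a side check, the same policy document can be built programmatically. The following is a minimal stdlib-only sketch, for illustration only — the tutorial itself creates this policy through the IAM console:

```python
import json

# Rebuild the permissions policy from the tutorial as a Python dict.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            # Allow the function to write its logs to CloudWatch Logs.
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
            ],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {
            # Allow the function to read objects from any S3 bucket.
            "Effect": "Allow",
            "Action": ["s3:GetObject"],
            "Resource": "arn:aws:s3:::*/*",
        },
    ],
}

# json.dumps produces the document you can paste into the JSON editor.
print(json.dumps(policy, indent=4))
```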
Create an execution role

An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, you create an execution role using the permissions policy you created in the previous step.

To create an execution role and attach your custom permissions policy

Open the Roles page of the IAM console.
Choose Create role.
For the trusted entity type, choose AWS service, then for the use case, choose Lambda.
Choose Next.
In the policy search box, enter s3-trigger-tutorial.
In the search results, select the policy you created (s3-trigger-tutorial), and then choose Next.
Under Role details, for Role name, enter lambda-s3-trigger-role, then choose Create role.

Create the Lambda function

Create a Lambda function in the console using the Python 3.14 runtime.

To create the Lambda function

Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 bucket. You can change the region using the dropdown list at the top of the screen.
Choose Create function.
Choose Author from scratch.
Under Basic information, do the following:
For Function name, enter s3-trigger-tutorial.
For Runtime, choose Python 3.14.
For Architecture, choose x86_64.
In the Change default execution role tab, do the following:
Expand the tab, then choose Use an existing role.
Select the lambda-s3-trigger-role that you created earlier.
Choose Create function.

Deploy the function code

This tutorial uses the Python 3.14 runtime, but we've also provided example code files for other runtimes.
To view the code for the runtime you're interested in, select the corresponding tab in the box below.

The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method of the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.

To deploy the function code

Choose the Python tab in the box below and copy the code.

.NET — SDK for .NET

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using .NET.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ??
                new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}

Go — SDK for Go V2

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Go.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}

Java — SDK for Java 2.x

Note: There's more on GitHub.
Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Java.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);

            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey
                    + " of type " + headObject.contentType());
            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript — SDK for JavaScript (v3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using JavaScript.

import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };
  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP — SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using PHP.

<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python — SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby — SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust — SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates.
// All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created for you.
In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger

In the Function overview pane, choose Add trigger.
Select S3.
Under Bucket, select the bucket you created earlier in the tutorial.
Under Event types, verify that All object create events is selected.
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created:
An error occurred when creating the trigger: Unable to validate the following destination configurations.
You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and have since deleted the function or modified its permissions policies.

Test your Lambda function with a dummy event

To test your Lambda function with a dummy event

On your function's Lambda console page, choose the Test tab.
For Event name, enter MyTestEvent.
In the Event JSON section, paste the following test event. Make sure to replace the following values:
Replace us-east-1 with the region in which you created your Amazon S3 bucket.
Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
Replace test%2FKey with the name of the test object you uploaded earlier to your bucket (for example, HappyFace.jpg).
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": { "principalId": "EXAMPLE" },
      "requestParameters": { "sourceIPAddress": "127.0.0.1" },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": { "principalId": "EXAMPLE" },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.
If your function runs successfully, you'll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms
Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.
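The dummy-event flow can also be mirrored locally. The sketch below reimplements the extraction logic of the tutorial's Python handler with an injectable S3 client (the FakeS3 stub is hypothetical, for illustration only), and shows that the URL-encoded key test%2Fkey in the event decodes to test/key before get_object is called:

```python
import urllib.parse

def lambda_handler(event, context, s3):
    # Same extraction logic as the tutorial's Python handler, with the
    # S3 client passed in so the function can run without AWS access.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(
        event["Records"][0]["s3"]["object"]["key"], encoding="utf-8"
    )
    response = s3.get_object(Bucket=bucket, Key=key)
    print("CONTENT TYPE: " + response["ContentType"])
    return response["ContentType"]

class FakeS3:
    """Hypothetical stand-in for boto3.client('s3') in local tests."""
    def get_object(self, Bucket, Key):
        # Record what the handler asked for, then return minimal metadata.
        self.requested = (Bucket, Key)
        return {"ContentType": "image/jpeg"}

event = {"Records": [{"s3": {
    "bucket": {"name": "amzn-s3-demo-bucket"},
    "object": {"key": "test%2Fkey"}}}]}

fake = FakeS3()
content_type = lambda_handler(event, None, fake)
# The %2F in the event key decodes to a literal slash:
print(fake.requested)  # ('amzn-s3-demo-bucket', 'test/key')
```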
To upload an object to your Amazon S3 bucket

Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier.
Choose Upload.
Choose Add files and use the file selector to choose an object to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs

Open the CloudWatch console. Make sure you're working in the same AWS Region in which you created your Lambda function. You can change the region using the dropdown list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.
If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Enter confirm in the text input field, then choose Delete.

To delete the execution role

Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
Enter the name of the role in the text input field and choose Delete.

To delete the S3 bucket

Open the Amazon S3 console.
Select the bucket you created earlier.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.
Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function to create a thumbnail image for each image file that is uploaded to your bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/es_es/lambda/latest/dg/with-s3-example.html | Tutorial: Uso de un desencadenador de Amazon S3 para invocar una función de Lambda - AWS Lambda Tutorial: Uso de un desencadenador de Amazon S3 para invocar una función de Lambda - AWS Lambda Documentación AWS Lambda Guía para desarrolladores Crear un bucket de Amazon S3 Cargar un objeto de prueba en un bucket Creación de una política de permisos Creación de un rol de ejecución Crear la función de Lambda Implementar el código de la función Cree un desencadenador de Amazon S3 Probar la función de Lambda Eliminación de sus recursos Pasos a seguir a continuación Tutorial: Uso de un desencadenador de Amazon S3 para invocar una función de Lambda En este tutorial, se utiliza la consola a fin de crear una función de Lambda y configurar un desencadenador para un bucket de Amazon Simple Storage Service (Amazon S3). Cada vez que agrega un objeto al bucket de Amazon S3, la función se ejecuta y muestra el tipo de objeto en Registros de Amazon CloudWatch. En este tutorial se muestra cómo: Cree un bucket de Amazon S3. Cree una función de Lambda que devuelva el tipo de objeto de los objetos en un bucket de Amazon S3. Configure un desencadenador de Lambda que invoque su función cuando se carguen objetos en su bucket. Pruebe su función, primero con un evento de prueba y, a continuación, con el desencadenador. Al completar estos pasos, aprenderá a configurar una función de Lambda para que se ejecute siempre que se agreguen objetos a un bucket de Amazon S3 o se eliminen de él. Solo puede completar este tutorial mediante la Consola de administración de AWS. Crear un bucket de Amazon S3 Creación de un bucket de Amazon S3 Abra la consola de Amazon S3 y seleccione la página Buckets de uso general . Seleccione la Región de AWS más cercana a su ubicación geográfica. Puede cambiar la región por medio de la lista desplegable de la parte superior de la pantalla. 
Más adelante en el tutorial, debe crear la función de Lambda en la misma región. Elija Crear bucket . En Configuración general , haga lo siguiente: En Tipo de bucket , asegúrese de que Uso general está seleccionado. Para el nombre del bucket , ingrese un nombre único a nivel mundial que cumpla las reglas de nomenclatura de bucket de Amazon S3. Los nombres de bucket pueden contener únicamente letras minúsculas, números, puntos (.) y guiones (-). Deje el resto de las opciones con sus valores predeterminados y seleccione Crear bucket . Cargar un objeto de prueba en un bucket Para cargar un objeto de prueba Abra la página Buckets de la consola de Amazon S3 y elija el bucket que creó durante el paso anterior. Seleccione Cargar . Elija Agregar archivos y seleccione el objeto que desea cargar. Puede seleccionar cualquier archivo (por ejemplo, HappyFace.jpg ). Elija Abrir y, a continuación, Cargar . Más adelante en el tutorial, probará la función de Lambda con este objeto. Creación de una política de permisos Cree una política de permisos que le permita a Lambda obtener objetos de un bucket de Amazon S3 y escribir en los Registros de Amazon CloudWatch. Para crear la política de Abra la página de Policies (Políticas) de la consola de IAM. Elija Crear política . Elija la pestaña JSON y pegue la siguiente política personalizada en el editor JSON. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Elija Siguiente: Etiquetas . Elija Siguiente: Revisar . En Review policy (Revisar política) , para el Name (Nombre) de la política, ingrese s3-trigger-tutorial . Seleccione Crear política . 
Creación de un rol de ejecución Un rol de ejecución es un rol de AWS Identity and Access Management (IAM) que concede a la función de Lambda permiso para acceder a recursos y Servicios de AWS. En este paso, creará un rol de ejecución mediante la política de permisos que creó en el paso anterior. Para crear una función de ejecución y adjuntar su política de permisos personalizada Abra la página Roles en la consola de IAM. Elija Create role . Para el tipo de entidad de confianza, seleccione Servicio de AWS y, para el caso de uso, elija Lambda . Elija Siguiente . En el cuadro de búsqueda de políticas, escriba s3-trigger-tutorial . En los resultados de búsqueda, seleccione la política que ha creado ( s3-trigger-tutorial ), y luego Next (Siguiente). En Role details (Detalles del rol), introduzca lambda-s3-trigger-role en Role name (Nombre del rol) y, luego, elija Create role (Crear rol). Crear la función de Lambda Cree una función de Lambda en la consola con el tiempo de ejecución de Python 3.13. Para crear la función de Lambda Abra la página de Funciones en la consola de Lambda. Asegúrese de trabajar en la misma Región de AWS en la que creó el bucket de Amazon S3. Puede cambiar la región mediante la lista desplegable de la parte superior de la pantalla. Seleccione Creación de función . Elija Crear desde cero . Bajo Información básica , haga lo siguiente: En Nombre de la función , ingrese s3-trigger-tutorial . En Tiempo de ejecución , seleccione Python 3.12 . En Arquitectura , elija x86_64 . En la pestaña Cambiar rol de ejecución predeterminado , haga lo siguiente: Amplíe la pestaña y, a continuación, elija Utilizar un rol existente . Seleccione el lambda-s3-trigger-role que creó anteriormente. Seleccione Creación de función . Implementar el código de la función En este tutorial, se utiliza el tiempo de ejecución de Python 3.13, pero también proporcionamos archivos de código de ejemplo para otros tiempos de ejecución. 
Puede seleccionar la pestaña del siguiente cuadro para ver el código del tiempo de ejecución que le interesa. La función de Lambda recuperará el nombre de clave del objeto cargado y el nombre del bucket desde el parámetro event que recibe de Amazon S3. A continuación, la función utiliza el método get_object de AWS SDK para Python (Boto3) para recuperar los metadatos del objeto, incluido el tipo de contenido (tipo MIME) del objeto cargado. Para implementar el código de la función Seleccione la pestaña Python en el siguiente cuadro y copie el código. .NET SDK para .NET nota Hay más en GitHub. Busque el ejemplo completo y aprenda a configurar y ejecutar en el repositorio de ejemplos sin servidor . Uso de un evento de S3 con Lambda mediante .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? 
                new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}

Go — SDK for Go V2

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Go.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
    "context"
    "log"

    "github.com/aws/aws-lambda-go/events"
    "github.com/aws/aws-lambda-go/lambda"
    "github.com/aws/aws-sdk-go-v2/config"
    "github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
    sdkConfig, err := config.LoadDefaultConfig(ctx)
    if err != nil {
        log.Printf("failed to load default config: %s", err)
        return err
    }
    s3Client := s3.NewFromConfig(sdkConfig)

    for _, record := range s3Event.Records {
        bucket := record.S3.Bucket.Name
        key := record.S3.Object.URLDecodedKey
        headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
            Bucket: &bucket,
            Key:    &key,
        })
        if err != nil {
            log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
            return err
        }
        log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
    }
    return nil
}

func main() {
    lambda.Start(handler)
}

Java — SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.
Consuming an S3 event with Lambda using Java.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);
            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());

            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript — SDK for JavaScript (v3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using JavaScript.
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP — SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using PHP.
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();

return new Handler($logger);

Python — SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby — SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust — SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(
    s3_client: &Client,
    evt: LambdaEvent<S3Event>
) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code you copied into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger

In the Function overview pane, choose Add trigger.
Select S3.
Under Bucket, select the bucket you created earlier in the tutorial.
Under Event types, make sure that All object create events is selected.
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message, indicating that the trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket and have since deleted the function or modified its permissions policies.

Test the Lambda function with a test event

To test the Lambda function with a test event

On the Lambda console page for your function, choose the Test tab.
For Event name, enter MyTestEvent.
In the Event JSON, paste the following test event. Make sure to replace the following values:
  Replace us-east-1 with the Region you created your Amazon S3 bucket in.
  Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
  Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.
If your function runs successfully, you'll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran successfully, use CloudWatch Logs to view your function's output.
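The test event above exercises the same extraction step the handler performs: it reads the bucket name and the URL-encoded object key from the Records array. A minimal offline sketch of that step (no AWS calls; the event is abbreviated to just the fields the handler reads):

```python
import urllib.parse

# Abbreviated S3 test event containing only the fields the handler reads.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

bucket = event["Records"][0]["s3"]["bucket"]["name"]
# S3 URL-encodes object keys in event notifications; unquote_plus
# restores "test%2Fkey" to the real key "test/key".
key = urllib.parse.unquote_plus(
    event["Records"][0]["s3"]["object"]["key"], encoding="utf-8"
)
print(bucket, key)  # amzn-s3-demo-bucket test/key
```

This is why the tutorial asks you to paste the URL-encoded form of the key into the test event: the handler decodes it before calling get_object.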
To upload an object to your Amazon S3 bucket

Open the Buckets page of the Amazon S3 console and choose the name of the bucket you created earlier.
Choose Upload.
Choose Add files and use the file selector to choose the object you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs

Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.
If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Type confirm in the text input field, then choose Delete.

To delete the execution role

Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
To continue, enter the name of the role in the text input field, then choose Delete.

To delete the S3 bucket

Open the Amazon S3 console.
Select the bucket you created.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to your bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
https://www.php.net/manual/ja/function.mailparse-uudecode-all.php | PHP: mailparse_uudecode_all - Manual
mailparse_uudecode_all

(PECL mailparse >= 0.9.0)

mailparse_uudecode_all — Scans the data from the file pointer and extracts each embedded uuencoded file

Description

mailparse_uudecode_all(resource $fp): array

Scans the data from the given file pointer and extracts each embedded uuencoded file into a temporary file.

Parameters

fp — A valid file pointer.

Return Values

Returns an array of associative arrays listing filename information:
filename — Path to the temporary file that was created.
origfilename — The original filename; present for uuencoded parts only.
The first filename entry is the message body. The following entries are the decoded uuencoded files.

Examples

Example #1 mailparse_uudecode_all() example

<?php
$text = <<<EOD
To: fred@example.com

hello, this is some text hello.
blah blah blah.

begin 644 test.txt
/=&AI<R!I<R!A('1E<W0*
`
end
EOD;

$fp = tmpfile();
fwrite($fp, $text);
$data = mailparse_uudecode_all($fp);

echo "BODY\n";
readfile($data[0]["filename"]);
echo "UUE ({$data[1]['origfilename']})\n";
readfile($data[1]["filename"]);

// clean up
unlink($data[0]["filename"]);
unlink($data[1]["filename"]);
?>

The above example will output:

BODY
To: fred@example.com

hello, this is some text hello.
blah blah blah.

UUE (test.txt)
this is a test
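The uuencode line format that mailparse decodes here (a length character, chr(32 + n) for n payload bytes, followed by the 6-bit-encoded data) is also implemented by Python's standard binascii module. As a cross-check of the example, a sketch decoding the single encoded line from the message above (this decodes one line only; it doesn't replicate mailparse's scan for begin/end blocks):

```python
import binascii

# The uuencoded body line from the example: '/' encodes a length of
# 15 bytes (ord('/') - 32), followed by the encoded payload.
line = "/=&AI<R!I<R!A('1E<W0*"

decoded = binascii.a2b_uu(line)
print(decoded)  # b'this is a test\n'
```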
User Contributed Notes

mat at phpconsulting dot com (22 years ago):
As an alternative, uudecode() can be called as a static function as follows:
$file =& Mail_mimeDecode::uudecode($some_text);
This will return the following arrays:
@param string Input body to look for attachments in
@return array Decoded bodies, filenames and permissions
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-create-function | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Documentation AWS Lambda Developer Guide Create an Amazon S3 bucket Upload a test object to your bucket Create a permissions policy Create an execution role Create the Lambda function Deploy the function code Create the Amazon S3 trigger Test the Lambda function Clean up your resources Next steps Tutorial: Using an Amazon S3 trigger to invoke a Lambda function In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected. 
For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules . Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . 
In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };
  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP
SDK for PHP
Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using PHP.
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python
SDK for Python (Boto3)
Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby
SDK for Ruby
Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust
SDK for Rust
Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await;

    res
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from S3");
    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");
    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}

In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger
In the Function overview pane, choose Add trigger.
Select S3.
Under Bucket, select the bucket you created earlier in the tutorial.
Under Event types, be sure that All object create events is selected.
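Adding the trigger in the console writes an event notification configuration onto the bucket. As a sketch of roughly what that configuration looks like for All object create events, the snippet below builds it in boto3 terms; the function ARN is a placeholder, and the client is a stub so the snippet runs without AWS credentials:

```python
from unittest.mock import MagicMock

# Roughly the notification configuration the console creates for
# "All object create events". The ARN below is a placeholder.
notification_config = {
    "LambdaFunctionConfigurations": [
        {
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:s3-trigger-tutorial",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

# With real credentials you would use boto3.client("s3") instead of this stub.
s3 = MagicMock()
s3.put_bucket_notification_configuration(
    Bucket="amzn-s3-demo-bucket",
    NotificationConfiguration=notification_config,
)
print("configured events:", notification_config["LambdaFunctionConfigurations"][0]["Events"])
```

This is also the call Amazon S3 validates before accepting, which is why the console can fail with the destination-configuration error described in the note below.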
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket.

Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created:

An error occurred when creating the trigger: Unable to validate the following destination configurations.

You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

Test your Lambda function with a dummy event

To test the Lambda function with a dummy event
In the Lambda console page for your function, choose the Test tab.
For Event name, enter MyTestEvent.
In the Event JSON, paste the following test event. Be sure to replace these values:
Replace us-east-1 with the region you created your Amazon S3 bucket in.
Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).
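The same test event can also drive the handler's parsing logic locally, before anything is deployed. The sketch below is a simplified, hypothetical variant of the tutorial's Python handler with the S3 client passed in as a parameter and stubbed out; it also shows why the URL-encoded key test%2Fkey must be decoded to test/key before calling S3:

```python
import urllib.parse
from unittest.mock import MagicMock

def lambda_handler(event, context, s3):
    # Same parsing steps as the tutorial's handler; s3 is injectable for offline testing.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["ContentType"]

# Stub client that returns a canned ContentType instead of calling AWS.
stub_s3 = MagicMock()
stub_s3.get_object.return_value = {"ContentType": "image/jpeg"}

# Minimal version of the dummy test event.
event = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                             "object": {"key": "test%2Fkey"}}}]}

print(lambda_handler(event, None, stub_s3))  # image/jpeg
# The stub records that the key was decoded before the S3 call was made.
stub_s3.get_object.assert_called_with(Bucket="amzn-s3-demo-bucket", Key="test/key")
```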
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.

If your function runs successfully, you'll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket
Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier.
Choose Upload.
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs
Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function
Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Type confirm in the text input field and choose Delete.

To delete the execution role
Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
Enter the name of the role in the text input field and choose Delete.

To delete the S3 bucket
Open the Amazon S3 console.
Select the bucket you created.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge.
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
https://docs.aws.amazon.com/pt_br/lambda/latest/dg/with-s3-example.html

Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda

In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time you add an object to your Amazon S3 bucket, your function runs and sends the object type to Amazon CloudWatch Logs.

This tutorial demonstrates how to:
Create an Amazon S3 bucket.
Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to the bucket.
Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.

Create an Amazon S3 bucket

To create an Amazon S3 bucket
Open the Amazon S3 console and select the General purpose buckets page.
Choose the AWS Region closest to your geographic location. You can change the region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same region.
Choose Create bucket.
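Bucket names must be globally unique and can contain only lowercase letters, numbers, dots, and hyphens. A hypothetical local pre-check of a candidate name is sketched below; the 3-63 character bound comes from the S3 bucket naming rules rather than from this page, and only S3 itself can confirm global uniqueness:

```python
import re

def looks_like_valid_bucket_name(name: str) -> bool:
    """Rough client-side check against the S3 bucket naming rules:
    3-63 characters, lowercase letters/digits/dots/hyphens, starting
    and ending with a letter or digit."""
    return re.fullmatch(r"[a-z0-9][a-z0-9.-]{1,61}[a-z0-9]", name) is not None

print(looks_like_valid_bucket_name("amzn-s3-demo-bucket"))  # True
print(looks_like_valid_bucket_name("My_Bucket"))            # False
```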
Under General configuration, do the following:
For Bucket type, make sure General purpose is selected.
For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
Leave all other options at their default values and choose Create bucket.

Upload a test object to your bucket

To upload a test object
Open the Buckets page of the Amazon S3 console and choose the bucket you created in the previous step.
Choose Upload.
Choose Add files and select the object you want to upload. You can select any file (for example, HappyFace.jpg).
Choose Open, then choose Upload.

Later in the tutorial, you'll test your Lambda function using this object.

Create a permissions policy

Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.

To create the policy
Open the Policies page of the IAM console.
Choose Create Policy.
Choose the JSON tab, and then paste the following custom policy into the JSON editor.

JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}

Choose Next: Tags.
Choose Next: Review.
Under Review policy, for the policy Name, enter s3-trigger-tutorial.
Choose Create policy.

Create an execution role

An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy you created in the previous step.
To create an execution role and attach your custom permissions policy
Open the Roles page of the IAM console.
Choose Create role.
For the trusted entity type, choose AWS service, and then for the use case, choose Lambda.
Choose Next.
In the policy search box, enter s3-trigger-tutorial.
In the search results, select the policy you created (s3-trigger-tutorial), and then choose Next.
Under Role details, for Role name, enter lambda-s3-trigger-role, and then choose Create role.

Create the Lambda function

Create a Lambda function in the console using the Python 3.13 runtime.

To create the Lambda function
Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your region using the drop-down list at the top of the screen.
Choose Create function.
Choose Author from scratch.
Under Basic information, do the following:
For Function name, enter s3-trigger-tutorial.
For Runtime, select Python 3.13.
For Architecture, choose x86_64.
In the Change default execution role tab, do the following:
Expand the tab, then choose Use an existing role.
Select the lambda-s3-trigger-role you created earlier.
Choose Create function.

Deploy the function code

This tutorial uses the Python 3.13 runtime, but example code files are also provided for other runtimes. You can select the tab in the following box to see the code for the runtime you're interested in.

The Lambda function retrieves the key name of the uploaded object and the bucket name from the event parameter it receives from Amazon S3. The function then uses the get_object method of the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.
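When you choose Lambda as the use case, the console also sets the role's trust relationship so that the Lambda service can assume the role. That trust policy isn't shown in the console flow above; for reference, the standard document is built here in Python:

```python
import json

# Standard trust policy attached when Lambda is selected as the use case:
# it lets the Lambda service assume the role on your function's behalf.
trust_policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {"Service": "lambda.amazonaws.com"},
            "Action": "sts:AssumeRole",
        }
    ],
}

print(json.dumps(trust_policy, indent=2))
```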
To deploy the function code
Choose the Python tab in the following box and copy the code.

.NET
SDK for .NET
Note: There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using .NET.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK para Java 2.x nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Consumir um evento do S3 com o Lambda usando Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK para JavaScript (v3) nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Consumir um evento do S3 com o Lambda usando JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); try { const { ContentType } = await client.send(new HeadObjectCommand( { Bucket: bucket, Key: key, })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consumir um evento do S3 com o Lambda usando TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client( { region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK para PHP nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Como consumir um evento do S3 com o Lambda usando PHP. 
<?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ . '/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK para Python (Boto3). nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Consumir um evento do S3 com o Lambda usando Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
# SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object { } from bucket { }. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK para Ruby nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Como consumir um evento do S3 com o Lambda usando Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: # { JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: # { response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object # { key} from bucket # { bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK para Rust nota Há mais no GitHub. Encontre o exemplo completo e saiba como configurar e executar no repositório dos Exemplos sem servidor . Consumir um evento do S3 com o Lambda usando Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3:: { Client}; use lambda_runtime:: { run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS"); if evt.payload.records.len() == 0 { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for { } and object { }", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } No painel de Origem do código no console do Lambda, cole o código no editor de código, substituindo o código criado pelo Lambda. Na seção DEPLOY , escolha Implantar para atualizar o código da função: Criar o acionador do Amazon S3 Para criar o acionador do Amazon S3 No painel Visão geral da função , escolha Adicionar gatilho . Selecione S3 . Em Bucket , selecione o bucket que você criou anteriormente no tutorial. Em Tipos de eventos , garanta que a opção Todos os eventos de criação de objetos esteja selecionada. 
Em Invocação recursiva , marque a caixa de seleção para confirmar que não é recomendável usar o mesmo bucket do Amazon S3 para entrada e saída. Escolha Adicionar . nota Quando você cria um acionador do Amazon S3 para uma função do Lambda usando o console do Lambda, o Amazon S3 configura uma notificação de evento no bucket que você especificar. Antes de configurar essa notificação de evento, o Amazon S3 executa uma série de verificações para confirmar que o destino do evento existe e tem as políticas do IAM necessárias. O Amazon S3 também executa esses testes em qualquer outra notificação de evento configurada para esse bucket. Por causa dessa verificação, se o bucket tiver destinos de eventos previamente configurados para recursos que não existem mais ou para recursos que não têm as políticas de permissões necessárias, o Amazon S3 não poderá criar a nova notificação de evento. Você verá a seguinte mensagem de erro indicando que não foi possível criar seu acionador: An error occurred when creating the trigger: Unable to validate the following destination configurations. Você poderá ver esse erro se tiver configurado anteriormente um acionador para outra função do Lambda usando o mesmo bucket e, desde então, tiver excluído a função ou modificado suas políticas de permissões. Testar sua função do Lambda com um evento fictício Para testar a função do Lambda com um evento fictício Na página de console do Lambda da sua função, escolha a guia Testar . Em Nome do evento , insira MyTestEvent . Em Evento JSON , cole o seguinte evento de teste. Não se esqueça de substituir estes valores: Substitua us-east-1 pela região em que você criou o bucket do Amazon S3. Substitua ambas as instâncias de amzn-s3-demo-bucket pelo nome do seu próprio bucket do Amazon S3. Substitua test%2FKey pelo nome do objeto de teste que você carregou anteriormente para o bucket (por exemplo, HappyFace.jpg ). 
{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.

If your function runs successfully, you'll see output similar to the following in the Execution results tab.

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger
To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket
Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier.
Choose Upload.
Choose Add files and use the file chooser to select an object that you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs
Open the CloudWatch console.
Make sure that you're working in the same AWS Region in which you created your Lambda function. You can change your region using the drop-down list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the name of the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file that you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources
You can now delete the resources that you created for this tutorial, unless you want to keep them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function
Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Type confirm in the text input field, then choose Delete.

To delete the execution role
Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
Enter the name of the role in the text input field, then choose Delete.

To delete the S3 bucket
Open the Amazon S3 console.
Select the bucket that you created.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.
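The CloudWatch output described in this section follows a fixed `timestamp request-id INFO message` shape, so the content type the handler logged can be pulled out of a log line with a short script. A minimal sketch in Python, assuming the log-line format shown in this tutorial (the sample line is the one from the page; the regular expression is an illustration, not part of AWS tooling):

```python
import re

# Example CloudWatch log line produced by the tutorial's Python handler.
log_line = ("2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 "
            "INFO CONTENT TYPE: image/jpeg")

# Extract the MIME type the handler printed after "CONTENT TYPE:".
match = re.search(r"CONTENT TYPE:\s*(\S+)", log_line)
content_type = match.group(1) if match else None
print(content_type)  # image/jpeg
```

This kind of parse is handy when scanning exported log streams for uploads of a particular type instead of reading them in the console.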
Next steps
In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for your function and its dependencies. | 2026-01-13T09:30:35 |
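The test event used throughout these pages URL-encodes the object key (test%2Fkey), which is why the tutorial's Python handler decodes the key before calling S3. A minimal sketch of that extraction step, using an abridged copy of the test event that keeps only the fields the handler reads:

```python
import urllib.parse

# Abridged version of the tutorial's S3 test event; only the fields
# the handler actually reads are kept here.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

# Same extraction the tutorial's Python handler performs: S3 URL-encodes
# object keys in event notifications, so "test%2Fkey" must be decoded
# back to "test/key" before it is used in a get_object call.
bucket = event["Records"][0]["s3"]["bucket"]["name"]
key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])
print(bucket, key)  # amzn-s3-demo-bucket test/key
```

Skipping the decode is a common source of NoSuchKey errors when keys contain slashes, spaces, or other characters that S3 percent-encodes.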
https://docs.aws.amazon.com/de_de/lambda/latest/dg/with-s3-example.html | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda
Documentation
AWS Lambda
Developer Guide
Create an Amazon S3 bucket
Upload a test object to your bucket
Create a permissions policy
Create an execution role
Create the Lambda function
Deploy the function code
Create the Amazon S3 trigger
Test the Lambda function
Clean up your resources
Next steps

This translation was generated by machine translation. In case of any conflict or discrepancy between the translated version and the English version (including as a result of translation delays), the English version prevails.

Tutorial: Using an Amazon S3 trigger to invoke a Lambda function
In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs.

This tutorial demonstrates how to:
Create an Amazon S3 bucket.
Create a Lambda function that returns the object type of objects in an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket.
Test your function, first with a dummy event, and then using the trigger.

By completing these steps, you'll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console.
Create an Amazon S3 bucket
To create an Amazon S3 bucket
Open the Amazon S3 console and select the General purpose buckets page.
Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket.
Under General configuration, do the following:
For Bucket type, ensure General purpose is selected.
For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
Leave all other options set to their default values and choose Create bucket.

Upload a test object to your bucket
To upload a test object
Open the Buckets page of the Amazon S3 console and choose the bucket that you created during the previous step.
Choose Upload.
Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg).
Choose Open, then choose Upload.
Later in the tutorial, you'll test your Lambda function using this object.

Create a permissions policy
Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs.
To create the policy
Open the Policies page of the IAM console.
Choose Create policy.
Choose the JSON tab, then paste the following custom policy into the JSON editor.
JSON
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "logs:PutLogEvents",
        "logs:CreateLogGroup",
        "logs:CreateLogStream"
      ],
      "Resource": "arn:aws:logs:*:*:*"
    },
    {
      "Effect": "Allow",
      "Action": [
        "s3:GetObject"
      ],
      "Resource": "arn:aws:s3:::*/*"
    }
  ]
}

Choose Next: Tags.
Choose Next: Review.
Under Review policy, for the policy Name, enter s3-trigger-tutorial.
Choose Create policy.

Create an execution role
An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, you create an execution role using the permissions policy that you created in the previous step.
To create an execution role and attach your custom permissions policy
Open the Roles page of the IAM console.
Choose Create role.
For the type of trusted entity, choose AWS service, then for the use case, choose Lambda.
Choose Next.
In the policy search box, enter s3-trigger-tutorial.
In the search results, select the policy that you created (s3-trigger-tutorial), then choose Next.
Under Role details, for the Role name, enter lambda-s3-trigger-role, then choose Create role.

Create the Lambda function
Create a Lambda function in the console using the Python 3.14 runtime.
To create the Lambda function
Open the Functions page of the Lambda console. Make sure that you're working in the same AWS Region in which you created your Amazon S3 bucket. You can change your region using the drop-down list at the top of the screen.
Choose Create function.
Choose Author from scratch.
Under Basic information, do the following:
For Function name, enter s3-trigger-tutorial.
For Runtime, choose Python 3.14.
For Architecture, choose x86_64.
In the Change default execution role tab, do the following:
Expand the tab, then choose Use an existing role.
Select the lambda-s3-trigger-role that you created earlier.
Choose Create function.

Deploy the function code
This tutorial uses the Python 3.14 runtime, but we've also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you're interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object.
To deploy the function code
Choose the Python tab in the following box and copy the code.

.NET
SDK for .NET
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using .NET
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
using System.Threading.Tasks;
using Amazon.Lambda.Core;
using Amazon.S3;
using System;
using Amazon.Lambda.S3Events;
using System.Web;

// Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class.
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))]

namespace S3Integration
{
    public class Function
    {
        private static AmazonS3Client _s3Client;

        public Function() : this(null)
        {
        }

        internal Function(AmazonS3Client s3Client)
        {
            _s3Client = s3Client ?? new AmazonS3Client();
        }

        public async Task<string> Handler(S3Event evt, ILambdaContext context)
        {
            try
            {
                if (evt.Records.Count <= 0)
                {
                    context.Logger.LogLine("Empty S3 Event received");
                    return string.Empty;
                }

                var bucket = evt.Records[0].S3.Bucket.Name;
                var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key);

                context.Logger.LogLine($"Request is for {bucket} and {key}");

                var objectResult = await _s3Client.GetObjectAsync(bucket, key);

                context.Logger.LogLine($"Returning {objectResult.Key}");

                return objectResult.Key;
            }
            catch (Exception e)
            {
                context.Logger.LogLine($"Error processing request - {e.Message}");
                return string.Empty;
            }
        }
    }
}

Go
SDK for Go V2
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Go
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}

Java
SDK for Java 2.x
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Java
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);

            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());
            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript
SDK for JavaScript (v3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using JavaScript.
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP
SDK for PHP
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using PHP.
<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ . '/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();

return new Handler($logger);

Python
SDK for Python (Boto3)
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Python
# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')


def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby
SDK for Ruby
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Ruby.
require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust
SDK for Rust
Note
There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository.
Consuming an S3 event with Lambda using Rust
// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::Client;
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    }))
    .await
}

async fn function_handler(s3_client: &Client, evt: LambdaEvent<S3Event>) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.is_empty() {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");
    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request"),
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code:

Create the Amazon S3 trigger
To create the Amazon S3 trigger
In the Function overview pane, choose Add trigger.
Select S3.
Under Bucket, select the bucket that you created earlier in the tutorial.
Under Event types, make sure that All object create events is selected.
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
Choose Add.

Note
When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket that you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. If the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification because of this check. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

Test your Lambda function with a dummy event
To test the Lambda function with a dummy event
On the console page for your function, choose the Test tab.
For Event name, enter MyTestEvent.
In Event JSON, paste the following test event.
Be sure to replace these values:
Replace us-east-1 with the region in which you created your Amazon S3 bucket.
Replace both occurrences of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.
Replace test%2FKey with the name of the test object that you uploaded to your bucket earlier (for example, HappyFace.jpg).

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.
Choose Test.

If your function runs successfully, you'll see output similar to the following in the Execution results tab.
Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger
To test your function with the configured trigger, you can upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket
Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier.
Choose Upload.
Choose Add files and use the file chooser to select an object that you want to upload. This object can be any file you choose.
Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs
Open the CloudWatch console.
Make sure that you're working in the same AWS Region in which you created your Lambda function. You can change your region using the drop-down list at the top of the screen.
Choose Logs, then choose Log groups.
Choose the log group for your function (/aws/lambda/s3-trigger-tutorial).
Under Log streams, choose the most recent log stream.
If your function was invoked correctly in response to your Amazon S3 trigger, you'll see output similar to the following. The CONTENT TYPE you see depends on the type of file that you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources
You can now delete the resources that you created for this tutorial, unless you want to keep them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function
Open the Functions page of the Lambda console.
Select the function that you created.
Choose Actions, Delete.
Type confirm in the text input field, then choose Delete.

To delete the execution role
Open the Roles page of the IAM console.
Select the execution role that you created.
Choose Delete.
Enter the name of the role in the text input field, then choose Delete.

To delete the S3 bucket
Open the Amazon S3 console.
Select the bucket that you created.
Choose Delete.
Enter the name of the bucket in the text input field.
Choose Delete bucket.

Next steps
In Tutorial: Using an Amazon S3 trigger to create thumbnail images, the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. That tutorial requires a moderate level of AWS and Lambda domain knowledge. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for your function and its dependencies.
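Before relying on the real trigger, the handler's logic can be exercised locally. The sketch below mirrors the control flow of the tutorial's Python function, but swaps boto3 for a hypothetical FakeS3Client stub that returns a fixed ContentType, so it runs without AWS credentials; the stub class and its return value are illustrations, not part of any AWS SDK:

```python
import urllib.parse

class FakeS3Client:
    """Stand-in for boto3's S3 client so the handler logic can be run
    locally. The ContentType value is illustrative."""
    def get_object(self, Bucket, Key):
        return {"ContentType": "image/jpeg"}

def lambda_handler(event, context, s3=None):
    # Same flow as the tutorial's Python function: read the bucket name
    # and the URL-encoded key from the first record, decode the key,
    # then fetch the object and report its content type.
    s3 = s3 or FakeS3Client()
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(event["Records"][0]["s3"]["object"]["key"])
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["ContentType"]

event = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                             "object": {"key": "HappyFace.jpg"}}}]}
print(lambda_handler(event, None))  # image/jpeg
```

In the deployed function the `s3` parameter would be the real boto3 client; injecting it this way keeps the handler testable without touching the event-parsing code.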
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-create-policy | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected.
For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules . Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . 
In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for {bucket} and {key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning {objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - {e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); try { const { ContentType } = await client.send(new HeadObjectCommand({ Bucket: bucket, Key: key, })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consuming an S3 event with Lambda using TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client({ region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP.
<?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ . '/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: #{JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: #{response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3::Client; use lambda_runtime::{run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from S3"); if evt.payload.records.len() == 0 { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for {} and object {}", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code. Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger . Select S3 . Under Bucket , select the bucket you created earlier in the tutorial. Under Event types , be sure that All object create events is selected.
Under Recursive invocation , select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add . Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name , enter MyTestEvent . In the Event JSON , paste the following test event. Be sure to replace these values: Replace us-east-1 with the region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg ).
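Note that the object key in the event record is URL-encoded (a `/` becomes `%2F`, and a `+` stands for a space), which is why the tutorial's Python handler decodes it with `urllib.parse.unquote_plus` before calling Amazon S3. A minimal sketch of that decoding step:

```python
import urllib.parse

# S3 delivers object keys URL-encoded in the event notification.
encoded_key = "test%2Fkey"
decoded_key = urllib.parse.unquote_plus(encoded_key, encoding="utf-8")
print(decoded_key)  # test/key

# A '+' in the encoded key decodes to a space.
print(urllib.parse.unquote_plus("Happy+Face.jpg"))  # Happy Face.jpg
```

Passing the still-encoded key to `get_object` would produce a "NoSuchKey" style failure, which is what the error message in each sample handler warns about.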
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save . Choose Test . If your function runs successfully, you’ll see output similar to the following in the Execution results tab. Response "image/jpeg" Function Logs START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST 2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' } 2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier. Choose Upload .
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open , then choose Upload . To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs , then choose Log groups . Choose the log group for your function ( /aws/lambda/s3-trigger-tutorial ). Under Log streams , choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions , Delete . Type confirm in the text input field and choose Delete . To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete . Enter the name of the role in the text input field and choose Delete . To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete . Enter the name of the bucket in the text input field. Choose Delete bucket . Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images , the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. 
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
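As an aside (not part of either tutorial), the extraction logic of the Python handler shown earlier can be exercised locally by stubbing the S3 client, so you can check the bucket/key handling and the content-type return value without deploying. The `FakeS3Client` below is a hypothetical stand-in for boto3's client, introduced here only for illustration:

```python
import urllib.parse

class FakeS3Client:
    """Stand-in for boto3's S3 client; returns a canned get_object response."""
    def get_object(self, Bucket, Key):
        return {"ContentType": "image/jpeg", "Bucket": Bucket, "Key": Key}

def lambda_handler(event, context, s3=FakeS3Client()):
    # Same bucket/key extraction as the tutorial's Python example.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(
        event["Records"][0]["s3"]["object"]["key"], encoding="utf-8")
    response = s3.get_object(Bucket=bucket, Key=key)
    return response["ContentType"]

# A trimmed-down version of the console test event from this tutorial.
event = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                             "object": {"key": "test%2Fkey"}}}]}
print(lambda_handler(event, None))  # image/jpeg
```

Swapping the fake client for `boto3.client('s3')` (and dropping the extra parameter) recovers the deployable handler.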
https://docs.aws.amazon.com/es_es/AmazonCloudFront/latest/DeveloperGuide/lambda-examples.html#lambda-examples-vary-on-device-type | Lambda@Edge example functions - Amazon CloudFront See the following examples of using Lambda functions with Amazon CloudFront. Note If you choose the Node.js 18 or later runtime for your Lambda@Edge function, an index.mjs file is created automatically. To use the following code examples, rename the index.mjs file to index.js . Topics General examples Generating responses: examples Query strings: examples Personalizing content by country or device-type headers: examples Content-based dynamic origin selection: examples Updating error statuses: examples Accessing the request body: examples General examples The following examples show common ways to use Lambda@Edge in CloudFront. Topics Example: A/B testing Example: Overwriting a response header Example: A/B testing You can use the following example to test two different versions of an image without creating redirects or changing the URL. This example reads the cookies in the viewer request and modifies the request URL accordingly. If the viewer doesn't send a cookie with one of the expected values, the example randomly assigns the viewer one of the URLs.
Node.js 'use strict'; exports.handler = (event, context, callback) => { const request = event.Records[0].cf.request; const headers = request.headers; if (request.uri !== '/experiment-pixel.jpg') { // do not process if this is not an A-B test request callback(null, request); return; } const cookieExperimentA = 'X-Experiment-Name=A'; const cookieExperimentB = 'X-Experiment-Name=B'; const pathExperimentA = '/experiment-group/control-pixel.jpg'; const pathExperimentB = '/experiment-group/treatment-pixel.jpg'; /* * Lambda at the Edge headers are array objects. * * Client may send multiple Cookie headers, i.e.: * > GET /viewerRes/test HTTP/1.1 * > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3 * > Cookie: First=1; Second=2 * > Cookie: ClientCode=abc * > Host: example.com * * You can access the first Cookie header at headers["cookie"][0].value * and the second at headers["cookie"][1].value. * * Header values are not parsed. In the example above, * headers["cookie"][0].value is equal to "First=1; Second=2" */ let experimentUri; if (headers.cookie) { for (let i = 0; i < headers.cookie.length; i++) { if (headers.cookie[i].value.indexOf(cookieExperimentA) >= 0) { console.log('Experiment A cookie found'); experimentUri = pathExperimentA; break; } else if (headers.cookie[i].value.indexOf(cookieExperimentB) >= 0) { console.log('Experiment B cookie found'); experimentUri = pathExperimentB; break; } } } if (!experimentUri) { console.log('Experiment cookie has not been found. Throwing dice...'); if (Math.random() < 0.75) { experimentUri = pathExperimentA; } else { experimentUri = pathExperimentB; } } request.uri = experimentUri; console.log(`Request uri set to "${request.uri}"`); callback(null, request); }; Python import json import random def lambda_handler(event, context): request = event['Records'][0]['cf']['request'] headers = request['headers'] if request['uri'] != '/experiment-pixel.jpg': # Not an A/B Test return request cookieExperimentA, cookieExperimentB = 'X-Experiment-Name=A', 'X-Experiment-Name=B' pathExperimentA, pathExperimentB = '/experiment-group/control-pixel.jpg', '/experiment-group/treatment-pixel.jpg' ''' Lambda at the Edge headers are array objects. Client may send multiple cookie headers. For example: > GET /viewerRes/test HTTP/1.1 > User-Agent: curl/7.18.1 (x86_64-unknown-linux-gnu) libcurl/7.18.1 OpenSSL/1.0.1u zlib/1.2.3 > Cookie: First=1; Second=2 > Cookie: ClientCode=abc > Host: example.com You can access the first Cookie header at headers["cookie"][0].value and the second at headers["cookie"][1].value. Header values are not parsed. In the example above, headers["cookie"][0].value is equal to "First=1; Second=2" ''' experimentUri = "" for cookie in headers.get('cookie', []): if cookieExperimentA in cookie['value']: print("Experiment A cookie found") experimentUri = pathExperimentA break elif cookieExperimentB in cookie['value']: print("Experiment B cookie found") experimentUri = pathExperimentB break if not experimentUri: print("Experiment cookie has not been found. Throwing dice...") if random.random() < 0.75: experimentUri = pathExperimentA else: experimentUri = pathExperimentB request['uri'] = experimentUri print(f"Request uri set to {experimentUri}") return request Example: Overwriting a response header The following example shows how to change the value of a response header based on the value of another header.
Node.js export const handler = async (event) => { const response = event.Records[0].cf.response; const headers = response.headers; const headerNameSrc = 'X-Amz-Meta-Last-Modified'; const headerNameDst = 'Last-Modified'; if (headers[headerNameSrc.toLowerCase()]) { headers[headerNameDst.toLowerCase()] = [{ key: headerNameDst, value: headers[headerNameSrc.toLowerCase()][0].value, }]; console.log(`Response header "${headerNameDst}" was set to ` + `"${headers[headerNameDst.toLowerCase()][0].value}"`); } return response; }; Python import json def lambda_handler(event, context): response = event['Records'][0]['cf']['response'] headers = response['headers'] header_name_src = 'X-Amz-Meta-Last-Modified' header_name_dst = 'Last-Modified' if headers.get(header_name_src.lower()): headers[header_name_dst.lower()] = [{ 'key': header_name_dst, 'value': headers[header_name_src.lower()][0]['value'] }] print(f'Response header "{header_name_dst}" was set to ' f'"{headers[header_name_dst.lower()][0]["value"]}"') return response Generating responses: examples The following examples show how you can use Lambda@Edge to generate responses. Topics Example: Serving static content (generated response) Example: Generating an HTTP redirect (generated response) Example: Serving static content (generated response) The following example shows how to use a Lambda function to serve static website content, which reduces the load on the origin server and reduces overall latency. Note You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers . You can also replace or remove the body of the HTTP response in origin response events. For more information, see Updating HTTP responses in origin response triggers .
Node.js 'use strict'; const content = ` <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Simple Lambda@Edge Static Content Response</title> </head> <body> <p>Hello from Lambda@Edge!</p> </body> </html> `; exports.handler = (event, context, callback) => { /* * Generate HTTP OK response using 200 status code with HTML body. */ const response = { status: '200', statusDescription: 'OK', headers: { 'cache-control': [{ key: 'Cache-Control', value: 'max-age=100' }], 'content-type': [{ key: 'Content-Type', value: 'text/html' }] }, body: content, }; callback(null, response); }; Python import json CONTENT = """ <!DOCTYPE html> <html lang="en"> <head> <meta charset="utf-8"> <title>Simple Lambda@Edge Static Content Response</title> </head> <body> <p>Hello from Lambda@Edge!</p> </body> </html> """ def lambda_handler(event, context): # Generate HTTP OK response using 200 status code with HTML body. response = { 'status': '200', 'statusDescription': 'OK', 'headers': { 'cache-control': [ { 'key': 'Cache-Control', 'value': 'max-age=100' } ], "content-type": [ { 'key': 'Content-Type', 'value': 'text/html' } ] }, 'body': CONTENT } return response Example: Generating an HTTP redirect (generated response) The following example shows how to generate an HTTP redirect. Note You can generate HTTP responses for viewer request and origin request events. For more information, see Generating HTTP responses in request triggers . Node.js 'use strict'; exports.handler = (event, context, callback) => { /* * Generate HTTP redirect response with 302 status code and Location header.
*/ const response = { status: '302', statusDescription: 'Found', headers: { location: [ { key: 'Location', value: 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html', }], }, }; callback(null, response); }; Python def lambda_handler(event, context): # Generate HTTP redirect response with 302 status code and Location header. response = { 'status': '302', 'statusDescription': 'Found', 'headers': { 'location': [ { 'key': 'Location', 'value': 'https://docs.aws.amazon.com/lambda/latest/dg/lambda-edge.html' }] } } return response Cadenas de consulta: ejemplos En los ejemplos siguientes se muestran formas de usar Lambda@Edge con cadenas de consulta. Temas Ejemplo: Adición de un encabezado en función de un parámetro de la cadena de consulta Ejemplo: Normalización de parámetros de cadenas de consulta para mejorar la tasa de aciertos de caché Ejemplo: Redireccionamiento de los usuarios no autenticados a una página de inicio de sesión Ejemplo: Adición de un encabezado en función de un parámetro de la cadena de consulta El siguiente ejemplo muestra cómo obtener el par clave-valor de un parámetro de la cadena de consulta y, a continuación, añadir un encabezado en función de dichos valores. Node.js 'use strict'; const querystring = require('querystring'); exports.handler = (event, context, callback) => { const request = event.Records[0].cf.request; /* When a request contains a query string key-value pair but the origin server * expects the value in a header, you can use this Lambda function to * convert the key-value pair to a header. Here's what the function does: * 1. Parses the query string and gets the key-value pair. * 2. Adds a header to the request using the key-value pair that the function got in step 1. 
     */

    /* Parse request querystring to get javascript object */
    const params = querystring.parse(request.querystring);

    /* Move auth param from querystring to headers */
    const headerName = 'Auth-Header';
    request.headers[headerName.toLowerCase()] = [{ key: headerName, value: params.auth }];
    delete params.auth;

    /* Update request querystring */
    request.querystring = querystring.stringify(params);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When a request contains a query string key-value pair but the origin server
    expects the value in a header, you can use this Lambda function to
    convert the key-value pair to a header. Here's what the function does:
    1. Parses the query string and gets the key-value pair.
    2. Adds a header to the request using the key-value pair that the
       function got in step 1.
    '''
    # Parse request querystring to get dictionary/json
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}

    # Move auth param from querystring to headers
    headerName = 'Auth-Header'
    request['headers'][headerName.lower()] = [{'key': headerName, 'value': params['auth']}]
    del params['auth']

    # Update request querystring
    request['querystring'] = urlencode(params)

    return request

Example: Normalizing query string parameters to improve the cache hit ratio

The following example shows how to improve your cache hit ratio by making the following changes to query strings before CloudFront forwards requests to your origin:

- Alphabetize key-value pairs by the name of the parameter.
- Change the case of key-value pairs to lowercase.

For more information, see Caching content based on query string parameters.
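The query-string-to-header pattern shown earlier can be checked locally with a stubbed viewer-request event. This is a hypothetical harness, not part of the AWS samples; only the fields the function reads are included.

```python
from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    # Same logic as the Python example earlier: move ?auth=... into a header.
    request = event['Records'][0]['cf']['request']
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    headerName = 'Auth-Header'
    request['headers'][headerName.lower()] = [{'key': headerName, 'value': params['auth']}]
    del params['auth']
    request['querystring'] = urlencode(params)
    return request

# Stubbed request event with a hypothetical auth token.
event = {'Records': [{'cf': {'request': {
    'querystring': 'auth=secret-token&page=2',
    'headers': {}
}}}]}

result = lambda_handler(event, None)
print(result['headers']['auth-header'][0]['value'])  # → secret-token
print(result['querystring'])                         # → page=2
```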
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /* When you configure a distribution to forward query strings to the origin and
     * to cache based on an allowlist of query string parameters, we recommend
     * the following to improve the cache-hit ratio:
     * - Always list parameters in the same order.
     * - Use the same case for parameter names and values.
     *
     * This function normalizes query strings so that parameter names and values
     * are lowercase and parameter names are in alphabetical order.
     *
     * For more information, see:
     * https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
     */

    console.log('Query String: ', request.querystring);

    /* Parse request query string to get javascript object */
    const params = querystring.parse(request.querystring.toLowerCase());
    const sortedParams = {};

    /* Sort param keys */
    Object.keys(params).sort().forEach(key => {
        sortedParams[key] = params[key];
    });

    /* Update request querystring with normalized version */
    request.querystring = querystring.stringify(sortedParams);

    callback(null, request);
};

Python

from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    When you configure a distribution to forward query strings to the origin
    and to cache based on an allowlist of query string parameters, we recommend
    the following to improve the cache-hit ratio:
    - Always list parameters in the same order.
    - Use the same case for parameter names and values.

    This function normalizes query strings so that parameter names and values
    are lowercase and parameter names are in alphabetical order.

    For more information, see:
    https://docs.aws.amazon.com/AmazonCloudFront/latest/DeveloperGuide/QueryStringParameters.html
    '''
    print("Query string: ", request["querystring"])

    # Parse request query string to get a dictionary
    params = {k: v[0] for k, v in parse_qs(request['querystring'].lower()).items()}

    # Sort param keys
    sortedParams = sorted(params.items(), key=lambda x: x[0])

    # Update request querystring with normalized version
    request['querystring'] = urlencode(sortedParams)

    return request

Example: Redirecting unauthenticated users to a sign-in page

The following example shows how to redirect users to a sign-in page if they haven't entered their credentials.

Node.js

'use strict';

function parseCookies(headers) {
    const parsedCookie = {};
    if (headers.cookie) {
        headers.cookie[0].value.split(';').forEach((cookie) => {
            if (cookie) {
                const parts = cookie.split('=');
                parsedCookie[parts[0].trim()] = parts[1].trim();
            }
        });
    }
    return parsedCookie;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /* Check for session-id in request cookie in viewer-request event,
     * if session-id is absent, redirect the user to sign in page with original
     * request sent as redirect_url in query params.
     */

    /* Check for session-id in cookie, if present then proceed with request */
    const parsedCookies = parseCookies(headers);
    if (parsedCookies && parsedCookies['session-id']) {
        callback(null, request);
        return;
    }

    /* URI encode the original request to be sent as redirect_url in query params */
    const encodedRedirectUrl = encodeURIComponent(`https://${headers.host[0].value}${request.uri}?${request.querystring}`);

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: `https://www.example.com/signin?redirect_url=${encodedRedirectUrl}`,
            }],
        },
    };
    callback(null, response);
};

Python

import urllib.parse

def parseCookies(headers):
    parsedCookie = {}
    if headers.get('cookie'):
        for cookie in headers['cookie'][0]['value'].split(';'):
            if cookie:
                parts = cookie.split('=')
                parsedCookie[parts[0].strip()] = parts[1].strip()
    return parsedCookie

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Check for session-id in request cookie in viewer-request event,
    if session-id is absent, redirect the user to sign in page with original
    request sent as redirect_url in query params.
    '''
    # Check for session-id in cookie, if present, then proceed with request
    parsedCookies = parseCookies(headers)
    if parsedCookies.get('session-id'):
        return request

    # URI encode the original request to be sent as redirect_url in query params
    redirectUrl = "https://%s%s?%s" % (headers['host'][0]['value'], request['uri'], request['querystring'])
    encodedRedirectUrl = urllib.parse.quote_plus(redirectUrl.encode('utf-8'))

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': 'https://www.example.com/signin?redirect_url=%s' % encodedRedirectUrl
            }]
        }
    }
    return response

Customizing content by country or device type headers: examples

The following examples show how you can use Lambda@Edge to customize behavior based on the location or the type of device that the viewer is using.

Topics
Example: Redirecting viewer requests to a country-specific URL
Example: Serving different versions of an object based on the device

Example: Redirecting viewer requests to a country-specific URL

The following example shows how to generate an HTTP redirect response with a country-specific URL and return the response to the viewer. This is useful when you want to provide country-specific responses. For example:

- If you have country-specific subdomains, such as us.example.com and tw.example.com, you can generate a redirect response when a viewer requests example.com.
- If you're streaming video but you don't have rights to stream the content in a specific country, you can redirect users in that country to a page that explains why they can't view the video.

Note the following:

- You must configure your distribution to cache based on the CloudFront-Viewer-Country header.
  For more information, see Caching content based on selected request headers.
- CloudFront adds the CloudFront-Viewer-Country header after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Based on the value of the CloudFront-Viewer-Country header, generate an
     * HTTP status code 302 (Redirect) response, and return a country-specific
     * URL in the Location header.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    let url = 'https://example.com/';
    if (headers['cloudfront-viewer-country']) {
        const countryCode = headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'TW') {
            url = 'https://tw.example.com/';
        } else if (countryCode === 'US') {
            url = 'https://us.example.com/';
        }
    }

    const response = {
        status: '302',
        statusDescription: 'Found',
        headers: {
            location: [{
                key: 'Location',
                value: url,
            }],
        },
    };
    callback(null, response);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Based on the value of the CloudFront-Viewer-Country header, generate an
    HTTP status code 302 (Redirect) response, and return a country-specific
    URL in the Location header.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the
             viewer request event. To use this example, you must create a
             trigger for the origin request event.
    '''
    url = 'https://example.com/'
    viewerCountry = headers.get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'TW':
            url = 'https://tw.example.com/'
        elif countryCode == 'US':
            url = 'https://us.example.com/'

    response = {
        'status': '302',
        'statusDescription': 'Found',
        'headers': {
            'location': [{
                'key': 'Location',
                'value': url
            }]
        }
    }
    return response

Example: Serving different versions of an object based on the device

The following example shows how to serve different versions of an object based on the type of device that the user is using, for example, a mobile device or a tablet. Note the following:

- You must configure your distribution to cache based on the CloudFront-Is-*-Viewer headers. For more information, see Caching content based on selected request headers.
- CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer request event. To use this example, you must create a trigger for the origin request event.

Node.js

'use strict';

/* This is an origin request function */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const headers = request.headers;

    /*
     * Serve different versions of an object based on the device type.
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Is-*-Viewer headers. For more information, see
     *          the following documentation:
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
     *       2.
CloudFront adds the CloudFront-Is-*-Viewer headers after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const desktopPath = '/desktop';
    const mobilePath = '/mobile';
    const tabletPath = '/tablet';
    const smarttvPath = '/smarttv';

    if (headers['cloudfront-is-desktop-viewer']
        && headers['cloudfront-is-desktop-viewer'][0].value === 'true') {
        request.uri = desktopPath + request.uri;
    } else if (headers['cloudfront-is-mobile-viewer']
               && headers['cloudfront-is-mobile-viewer'][0].value === 'true') {
        request.uri = mobilePath + request.uri;
    } else if (headers['cloudfront-is-tablet-viewer']
               && headers['cloudfront-is-tablet-viewer'][0].value === 'true') {
        request.uri = tabletPath + request.uri;
    } else if (headers['cloudfront-is-smarttv-viewer']
               && headers['cloudfront-is-smarttv-viewer'][0].value === 'true') {
        request.uri = smarttvPath + request.uri;
    }

    console.log(`Request uri set to "${request.uri}"`);
    callback(null, request);
};

Python

# This is an origin request function
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    '''
    Serve different versions of an object based on the device type.
    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Is-*-Viewer headers. For more information, see
             the following documentation:
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
             https://docs.aws.amazon.com/console/cloudfront/cache-on-device-type
          2. CloudFront adds the CloudFront-Is-*-Viewer headers after the
             viewer request event. To use this example, you must create a
             trigger for the origin request event.
    '''
    desktopPath = '/desktop'
    mobilePath = '/mobile'
    tabletPath = '/tablet'
    smarttvPath = '/smarttv'

    if 'cloudfront-is-desktop-viewer' in headers and headers['cloudfront-is-desktop-viewer'][0]['value'] == 'true':
        request['uri'] = desktopPath + request['uri']
    elif 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = mobilePath + request['uri']
    elif 'cloudfront-is-tablet-viewer' in headers and headers['cloudfront-is-tablet-viewer'][0]['value'] == 'true':
        request['uri'] = tabletPath + request['uri']
    elif 'cloudfront-is-smarttv-viewer' in headers and headers['cloudfront-is-smarttv-viewer'][0]['value'] == 'true':
        request['uri'] = smarttvPath + request['uri']

    print("Request uri set to %s" % request['uri'])
    return request

Dynamic origin selection based on content: examples

The following examples show how you can use Lambda@Edge to route to different origins based on information in the request.

Topics
Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin
Example: Using an origin request trigger to change the Amazon S3 origin Region
Example: Using an origin request trigger to change from an Amazon S3 origin to a custom origin
Example: Using an origin request trigger to gradually transfer traffic from one Amazon S3 bucket to another
Example: Using an origin request trigger to change the origin domain name based on the country header

Example: Using an origin request trigger to change from a custom origin to an Amazon S3 origin

This function demonstrates how an origin request trigger can be used to change from a custom origin to an Amazon S3 origin from which the content is fetched, based on request properties.
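The device-type routing logic shown earlier can be checked locally with a stubbed origin-request event. The sketch below is a hypothetical harness that reduces the handler to the mobile branch only, to keep the check short; the event contains only the fields the handler reads.

```python
# Reduced stand-in for the device-type routing handler shown earlier
# (mobile branch only, for brevity).
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    headers = request['headers']
    if 'cloudfront-is-mobile-viewer' in headers and headers['cloudfront-is-mobile-viewer'][0]['value'] == 'true':
        request['uri'] = '/mobile' + request['uri']
    return request

# Stubbed origin-request event from a mobile viewer.
event = {'Records': [{'cf': {'request': {
    'uri': '/index.html',
    'headers': {
        'cloudfront-is-mobile-viewer': [
            {'key': 'CloudFront-Is-Mobile-Viewer', 'value': 'true'}]
    }
}}}]}

result = lambda_handler(event, None)
print(result['uri'])  # → /mobile/index.html
```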
Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if S3 origin should be used, and
     * if true, sets S3 origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useS3Origin'] === 'true') {
        const s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com';

        /* Set S3 origin fields */
        request.origin = {
            s3: {
                domainName: s3DomainName,
                region: '',
                authMethod: 'origin-access-identity',
                path: '',
                customHeaders: {}
            }
        };
        request.headers['host'] = [{ key: 'host', value: s3DomainName }];
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    Reads query string to check if S3 origin should be used, and
    if true, sets S3 origin properties
    '''
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useS3Origin') == 'true':
        s3DomainName = 'amzn-s3-demo-bucket.s3.amazonaws.com'

        # Set S3 origin fields
        request['origin'] = {
            's3': {
                'domainName': s3DomainName,
                'region': '',
                'authMethod': 'origin-access-identity',
                'path': '',
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': s3DomainName}]
    return request

Example: Using an origin request trigger to change the Amazon S3 origin Region

This function demonstrates how an origin request trigger can be used to change the Amazon S3 origin from which the content is fetched, based on request properties.

In this example, we use the value of the CloudFront-Viewer-Country header to update the S3 bucket domain name to a bucket in a Region that is closer to the viewer. This can be useful in several ways:

- It reduces latencies when the Region specified is nearer to the viewer's country.
- It provides data sovereignty by making sure that data is served from an origin that's in the same country that the request came from.

To use this example, you must do the following:

- Configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.
- Create a trigger for this function in the origin request event. CloudFront adds the CloudFront-Viewer-Country header after the viewer request event, so to use this example, you must make sure that the function executes for an origin request.

Note
The following example code uses the same origin access identity (OAI) for all of the S3 buckets that you use for the origin. For more information, see Origin access identity.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * This blueprint demonstrates how an origin-request trigger can be used to
     * change the origin from which the content is fetched, based on request properties.
     * In this example, we use the value of the CloudFront-Viewer-Country header
     * to update the S3 bucket domain name to a bucket in a Region that is closer to
     * the viewer.
     *
     * This can be useful in several ways:
     * 1) Reduces latencies when the Region specified is nearer to the viewer's
     *    country.
     * 2) Provides data sovereignty by making sure that data is served from an
     *    origin that's in the same country that the request came from.
     *
     * NOTE: 1. You must configure your distribution to cache based on the
     *          CloudFront-Viewer-Country header. For more information, see
     *          https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
     *       2. CloudFront adds the CloudFront-Viewer-Country header after the viewer
     *          request event. To use this example, you must create a trigger for the
     *          origin request event.
     */

    const countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    };

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        const region = countryToRegion[countryCode];

        /**
         * If the viewer's country is not in the list you specify, the request
         * goes to the default S3 bucket you've configured.
         */
        if (region) {
            /**
             * If you've set up OAI, the bucket policy in the destination bucket
             * should allow the OAI GetObject operation, as configured by default
             * for an S3 origin with OAI. Another requirement with OAI is to provide
             * the Region so it can be used for the SIGV4 signature. Otherwise, the
             * Region is not required.
             */
            request.origin.s3.region = region;
            const domainName = `amzn-s3-demo-bucket-in-${region}.s3.${region}.amazonaws.com`;
            request.origin.s3.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    '''
    This blueprint demonstrates how an origin-request trigger can be used to
    change the origin from which the content is fetched, based on request
    properties.
    In this example, we use the value of the CloudFront-Viewer-Country header
    to update the S3 bucket domain name to a bucket in a Region that is closer
    to the viewer.

    This can be useful in several ways:
    1) Reduces latencies when the Region specified is nearer to the viewer's
       country.
    2) Provides data sovereignty by making sure that data is served from an
       origin that's in the same country that the request came from.

    NOTE: 1. You must configure your distribution to cache based on the
             CloudFront-Viewer-Country header. For more information, see
             https://docs.aws.amazon.com/console/cloudfront/cache-on-selected-headers
          2. CloudFront adds the CloudFront-Viewer-Country header after the
             viewer request event. To use this example, you must create a
             trigger for the origin request event.
    '''
    countryToRegion = {
        'DE': 'eu-central-1',
        'IE': 'eu-west-1',
        'GB': 'eu-west-2',
        'FR': 'eu-west-3',
        'JP': 'ap-northeast-1',
        'IN': 'ap-south-1'
    }

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        region = countryToRegion.get(countryCode)

        # If the viewer's country is not in the list you specify, the request
        # goes to the default S3 bucket you've configured
        if region:
            '''
            If you've set up OAI, the bucket policy in the destination bucket
            should allow the OAI GetObject operation, as configured by default
            for an S3 origin with OAI. Another requirement with OAI is to provide
            the Region so it can be used for the SIGV4 signature. Otherwise, the
            Region is not required.
            '''
            request['origin']['s3']['region'] = region
            domainName = 'amzn-s3-demo-bucket-in-{0}.s3.{0}.amazonaws.com'.format(region)
            request['origin']['s3']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin request trigger to change from an Amazon S3 origin to a custom origin

This function demonstrates how an origin request trigger can be used to change the custom origin from which the content is fetched, based on request properties.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    /**
     * Reads query string to check if custom origin should be used, and
     * if true, sets custom origin properties.
     */
    const params = querystring.parse(request.querystring);

    if (params['useCustomOrigin'] === 'true') {
        /* Set custom origin fields */
        request.origin = {
            custom: {
                domainName: 'www.example.com',
                port: 443,
                protocol: 'https',
                path: '',
                sslProtocols: ['TLSv1', 'TLSv1.1'],
                readTimeout: 5,
                keepaliveTimeout: 5,
                customHeaders: {}
            }
        };
        request.headers['host'] = [{ key: 'host', value: 'www.example.com' }];
    }

    callback(null, request);
};

Python

from urllib.parse import parse_qs

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    # Reads query string to check if custom origin should be used, and
    # if true, sets custom origin properties
    params = {k: v[0] for k, v in parse_qs(request['querystring']).items()}
    if params.get('useCustomOrigin') == 'true':
        # Set custom origin fields
        request['origin'] = {
            'custom': {
                'domainName': 'www.example.com',
                'port': 443,
                'protocol': 'https',
                'path': '',
                'sslProtocols': ['TLSv1', 'TLSv1.1'],
                'readTimeout': 5,
                'keepaliveTimeout': 5,
                'customHeaders': {}
            }
        }
        request['headers']['host'] = [{'key': 'host', 'value': 'www.example.com'}]
    return request

Example: Using an origin request trigger to gradually transfer traffic from one Amazon S3 bucket to another

This function demonstrates how to gradually transfer traffic from one Amazon S3 bucket to another in a controlled way.

Node.js

'use strict';

function getRandomInt(min, max) {
    /* Random number is inclusive of min and max */
    return Math.floor(Math.random() * (max - min + 1)) + min;
}

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    const BLUE_TRAFFIC_PERCENTAGE = 80;

    /**
     * This Lambda function demonstrates how to gradually transfer traffic from
     * one S3 bucket to another in a controlled way.
     * We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
     * 1 to 100. If the generated randomNumber is less than or equal to
     * BLUE_TRAFFIC_PERCENTAGE, traffic is re-directed to blue-bucket. If not,
     * the default bucket that we've configured is used.
     */
    const randomNumber = getRandomInt(1, 100);

    if (randomNumber <= BLUE_TRAFFIC_PERCENTAGE) {
        const domainName = 'blue-bucket.s3.amazonaws.com';
        request.origin.s3.domainName = domainName;
        request.headers['host'] = [{ key: 'host', value: domainName }];
    }

    callback(null, request);
};

Python

import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max
    return math.floor(random.random() * (max - min + 1)) + min

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    BLUE_TRAFFIC_PERCENTAGE = 80
    '''
    This Lambda function demonstrates how to gradually transfer traffic from
    one S3 bucket to another in a controlled way.
    We define a variable BLUE_TRAFFIC_PERCENTAGE which can take values from
    1 to 100. If the generated randomNumber is less than or equal to
    BLUE_TRAFFIC_PERCENTAGE, traffic is re-directed to blue-bucket. If not,
    the default bucket that we've configured is used.
    '''
    randomNumber = getRandomInt(1, 100)

    if randomNumber <= BLUE_TRAFFIC_PERCENTAGE:
        domainName = 'blue-bucket.s3.amazonaws.com'
        request['origin']['s3']['domainName'] = domainName
        request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Example: Using an origin request trigger to change the origin domain name based on the country header

This function demonstrates how to change the origin domain name based on the CloudFront-Viewer-Country header, so that content is served from an origin closer to the viewer's country.
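The weighted split used in the blue/green example earlier can be sanity-checked offline by simulating many requests through the same random draw. This is a hypothetical harness, not part of the AWS sample; the seed is fixed only to make the check deterministic.

```python
import math
import random

def getRandomInt(min, max):
    # Random number is inclusive of min and max, as in the example earlier.
    return math.floor(random.random() * (max - min + 1)) + min

BLUE_TRAFFIC_PERCENTAGE = 80

random.seed(42)  # deterministic for the check below
trials = 100_000
blue = sum(1 for _ in range(trials) if getRandomInt(1, 100) <= BLUE_TRAFFIC_PERCENTAGE)

share = blue / trials
print(f"{share:.1%} of simulated requests routed to blue-bucket")  # ≈ 80%
```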
Implementing this functionality for your distribution can have advantages such as the following:

- Reducing latencies when the Region specified is nearer to the viewer's country
- Providing data sovereignty by making sure that data is served from an origin that's in the country that the request came from

Note that to enable this functionality, you must configure your distribution to cache based on the CloudFront-Viewer-Country header. For more information, see Caching content based on selected request headers.

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;

    if (request.headers['cloudfront-viewer-country']) {
        const countryCode = request.headers['cloudfront-viewer-country'][0].value;
        if (countryCode === 'GB' || countryCode === 'DE' || countryCode === 'IE') {
            const domainName = 'eu.example.com';
            request.origin.custom.domainName = domainName;
            request.headers['host'] = [{ key: 'host', value: domainName }];
        }
    }

    callback(null, request);
};

Python

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']

    viewerCountry = request['headers'].get('cloudfront-viewer-country')
    if viewerCountry:
        countryCode = viewerCountry[0]['value']
        if countryCode == 'GB' or countryCode == 'DE' or countryCode == 'IE':
            domainName = 'eu.example.com'
            request['origin']['custom']['domainName'] = domainName
            request['headers']['host'] = [{'key': 'host', 'value': domainName}]

    return request

Updating error statuses: examples

The following examples provide guidance for how you can use Lambda@Edge to change the error status that is returned to users.
Topics
Example: Using an origin response trigger to update the error status code to 200
Example: Using an origin response trigger to update the error status code to 302

Example: Using an origin response trigger to update the error status code to 200

This function demonstrates how to update the response status to 200 and generate static body content to return to the viewer in the following scenario:

- The function is triggered in an origin response.
- The response status from the origin server is an error status code (4xx or 5xx).

Node.js

'use strict';

exports.handler = (event, context, callback) => {
    const response = event.Records[0].cf.response;

    /**
     * This function updates the response status to 200 and generates static
     * body content to return to the viewer in the following scenario:
     * 1. The function is triggered in an origin response
     * 2. The response status from the origin server is an error status code (4xx or 5xx)
     */
    if (response.status >= 400 && response.status <= 599) {
        response.status = 200;
        response.statusDescription = 'OK';
        response.body = 'Body generation example';
    }

    callback(null, response);
};

Python

def lambda_handler(event, context):
    response = event['Records'][0]['cf']['response']
    '''
    This function updates the response status to 200 and generates static
    body content to return to the viewer in the following scenario:
    1. The function is triggered in an origin response
    2.
The response status from the origin server is an error status code (4xx or 5xx) ''' if int(response['status']) >= 400 and int(response['status']) <= 599: response['status'] = 200 response['statusDescription'] = 'OK' response['body'] = 'Body generation example' return response Ejemplo: Uso de un desencadenador de respuesta de origen para actualizar el código de estado de error a 302 Esta función demuestra cómo actualizar el código de estado HTTP a 302 para la redirección a otra ruta (comportamiento de la caché) en la que se ha configurado un origen diferente. Tenga en cuenta lo siguiente: La función se desencadena en una respuesta del origen. El estado de la respuesta del servidor de origen es un código de estado de error (4xx o 5xx). Node.js 'use strict'; exports.handler = (event, context, callback) => { const response = event.Records[0].cf.response; const request = event.Records[0].cf.request; /** * This function updates the HTTP status code in the response to 302, to redirect to another * path (cache behavior) that has a different origin configured. Note the following: * 1. The function is triggered in an origin response * 2. The response status from the origin server is an error status code (4xx or 5xx) */ if (response.status >= 400 && response.status <= 599) { const redirect_path = `/plan-b/path?$ { request.querystring}`; response.status = 302; response.statusDescription = 'Found'; /* Drop the body, as it is not required for redirects */ response.body = ''; response.headers['location'] = [ { key: 'Location', value: redirect_path }]; } callback(null, response); }; Python def lambda_handler(event, context): response = event['Records'][0]['cf']['response'] request = event['Records'][0]['cf']['request'] ''' This function updates the HTTP status code in the response to 302, to redirect to another path (cache behavior) that has a different origin configured. Note the following: 1. The function is triggered in an origin response 2. 
The response status from the origin server is an error status code (4xx or 5xx) ''' if int(response['status']) >= 400 and int(response['status']) <= 599: redirect_path = '/plan-b/path?%s' % request['querystring'] response['status'] = 302 response['statusDescription'] = 'Found' # Drop the body as it is not required for redirects response['body'] = '' response['headers']['location'] = [ { 'key': 'Location', 'value': redirect_path}] return response Acceso al cuerpo de la solicitud: ejemplos En los ejemplos siguientes se muestra cómo puede usar Lambda@Edge para trabajar con las solicitudes POST. nota Para utilizar estos ejemplos, debe habilitar la opción incluir cuerpo en la asociación de funciones Lambda de la distribución. No está habilitada de forma predeterminada. Para habilitar esta configuración en la consola de CloudFront, seleccione la casilla de verificación Incluir cuerpo en la Asociación de funciones Lambda . Para habilitar esta configuración en la API de CloudFront o con CloudFormation, establezca el campo IncludeBody en true en LambdaFunctionAssociation . Temas Ejemplo: Uso de un desencadenador de solicitud para leer un formulario HTML Ejemplo: Uso de un desencadenador de solicitud para modificar un formulario HTML Ejemplo: Uso de un desencadenador de solicitud para leer un formulario HTML Esta función ilustra cómo puede procesar el cuerpo de una solicitud POST generada por un formulario HTML (formulario web), como por ejemplo un formulario tipo "póngase en contacto con nosotros". Por ejemplo, es posible que tenga un formulario HTML como el siguiente: <html> <form action="https://example.com" method="post"> Param 1: <input type="text" name="name1"><br> Param 2: <input type="text" name="name2"><br> input type="submit" value="Submit"> </form> </html> Para la función de ejemplo que se indica a continuación, la función se debe desencadenar en una solicitud al origen o del lector de CloudFront. 
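The handlers that follow all begin the same way: base64-decode the body, then parse it as a query string. That step can be checked locally with just the Python standard library. The sample body below is an assumption, matching a submission of the two-field form above.

```python
import base64
from urllib.parse import parse_qs

# Hypothetical form submission for the form above, base64-encoded the way
# CloudFront delivers request bodies to Lambda@Edge.
encoded = base64.b64encode(b"name1=first+value&name2=second+value").decode()

# The same decode-and-parse sequence the example handlers use.
body = base64.b64decode(encoded).decode()
params = {k: v[0] for k, v in parse_qs(body).items()}
print(params)
# → {'name1': 'first value', 'name2': 'second value'}
```

Note that parse_qs decodes the `+` characters back into spaces, matching how browsers encode form fields.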
Node.js

'use strict';

const querystring = require('querystring');

/**
 * This function demonstrates how you can read the body of a POST request
 * generated by an HTML form (web form). The function is triggered in a
 * CloudFront viewer request or origin request event type.
 */
exports.handler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    if (request.method === 'POST') {
        /* HTTP body is always passed as base64-encoded string. Decode it. */
        const body = Buffer.from(request.body.data, 'base64').toString();

        /* HTML forms send the data in query string format. Parse it. */
        const params = querystring.parse(body);

        /* For demonstration purposes, we only log the form fields here.
         * You can put your custom logic here. For example, you can store the
         * fields in a database, such as Amazon DynamoDB, and generate a response
         * right from your Lambda@Edge function.
         */
        for (let param in params) {
            console.log(`For "${param}" user submitted "${params[param]}".\n`);
        }
    }
    return callback(null, request);
};

Python

import base64
from urllib.parse import parse_qs

'''
Say there is a POST request body generated by an HTML form such as:
<html>
  <form action="https://example.com" method="post">
    Param 1: <input type="text" name="name1"><br>
    Param 2: <input type="text" name="name2"><br>
    <input type="submit" value="Submit">
  </form>
</html>
'''

'''
This function demonstrates how you can read the body of a POST request
generated by an HTML form (web form). The function is triggered in a
CloudFront viewer request or origin request event type.
'''
def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['method'] == 'POST':
        # HTTP body is always passed as base64-encoded string. Decode it
        body = base64.b64decode(request['body']['data'])

        # HTML forms send the data in query string format. Parse it
        params = {k: v[0] for k, v in parse_qs(body).items()}

        '''
        For demonstration purposes, we only log the form fields here.
        You can put your custom logic here. For example, you can store the
        fields in a database, such as Amazon DynamoDB, and generate a response
        right from your Lambda@Edge function.
        '''
        for key, value in params.items():
            print("For %s user submitted %s" % (key, value))

    return request

Example: Using a request trigger to modify an HTML form

This function illustrates how you can modify the body of a POST request generated by an HTML form (web form). The function is triggered on a CloudFront viewer request or origin request.

Node.js

'use strict';

const querystring = require('querystring');

exports.handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    if (request.method === 'POST') {
        /* Request body is being replaced. To do this, update the following
         * three fields:
         * 1) body.action to 'replace'
         * 2) body.encoding to the encoding of the new data.
         *
         *    Set to one of the following values:
         *
         *    text - denotes that the generated body is in text format.
         *        Lambda@Edge will propagate this as is.
         *    base64 - denotes that the generated body is base64 encoded.
         *        Lambda@Edge will base64 decode the data before sending
         *        it to the origin.
         * 3) body.data to the new body.
         */
        request.body.action = 'replace';
        request.body.encoding = 'text';
        request.body.data = getUpdatedBody(request);
    }
    callback(null, request);
};

function getUpdatedBody(request) {
    /* HTTP body is always passed as base64-encoded string. Decode it. */
    const body = Buffer.from(request.body.data, 'base64').toString();

    /* HTML forms send data in query string format. Parse it. */
    const params = querystring.parse(body);

    /* For demonstration purposes, we're adding one more param.
     *
     * You can put your custom logic here. For example, you can truncate long
     * bodies from malicious requests.
     */
    params['new-param-name'] = 'new-param-value';
    return querystring.stringify(params);
}

Python

import base64
from urllib.parse import parse_qs, urlencode

def lambda_handler(event, context):
    request = event['Records'][0]['cf']['request']
    if request['method'] == 'POST':
        '''
        Request body is being replaced. To do this, update the following
        three fields:
        1) body.action to 'replace'
        2) body.encoding to the encoding of the new data.

            Set to one of the following values:

            text - denotes that the generated body is in text format.
                Lambda@Edge will propagate this as is.
            base64 - denotes that the generated body is base64 encoded.
                Lambda@Edge will base64 decode the data before sending
                it to the origin.
        3) body.data to the new body.
        '''
        request['body']['action'] = 'replace'
        request['body']['encoding'] = 'text'
        request['body']['data'] = get_updated_body(request)

    return request

def get_updated_body(request):
    # HTTP body is always passed as base64-encoded string. Decode it
    body = base64.b64decode(request['body']['data'])

    # HTML forms send data in query string format. Parse it
    params = {k: v[0] for k, v in parse_qs(body).items()}

    # For demonstration purposes, we're adding one more param.
    # You can put your custom logic here. For example, you can truncate long
    # bodies from malicious requests.
    params['new-param-name'] = 'new-param-value'
    return urlencode(params)
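The body-replacement flow above can be exercised offline with a stand-in request object. Nothing here calls CloudFront; the event shape is reduced to only the fields the handler touches, and the sample field name is an assumption.

```python
import base64
from urllib.parse import parse_qs, urlencode

def modify_body(request):
    # Reduced version of the body-replacement handler: decode the base64
    # body, add one parameter, and re-encode it as a query string.
    body = base64.b64decode(request['body']['data']).decode()
    params = {k: v[0] for k, v in parse_qs(body).items()}
    params['new-param-name'] = 'new-param-value'
    request['body'] = {'action': 'replace', 'encoding': 'text',
                       'data': urlencode(params)}
    return request

# Stand-in request: only the fields the function reads are present.
req = {'method': 'POST',
       'body': {'data': base64.b64encode(b'name1=hello').decode()}}
out = modify_body(req)
print(out['body']['data'])
# → name1=hello&new-param-name=new-param-value
```

Running this locally confirms the replacement body that would be forwarded to the origin before you deploy the trigger.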
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-test-image

Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

To complete this tutorial, you carry out the following steps:

Create source and destination Amazon S3 buckets and upload a sample image.
Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.
Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you'll learn how to use Lambda to carry out a file-processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function.

Topics
Prerequisites
Create two Amazon S3 buckets
Upload a test image to your source bucket
Create a permissions policy
Create an execution role
Create the function deployment package
Create the Lambda function
Configure Amazon S3 to invoke the function
Test your Lambda function with a dummy event
Test your function using the Amazon S3 trigger
Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language you want to use. If you have not yet installed the AWS Command Line Interface, follow the steps at Installing or updating the latest version of the AWS CLI to install it.

This tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager.

Note: On Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First create two Amazon S3 buckets. The first bucket is the source bucket where you will upload your images. The second bucket is used by Lambda to store the resized thumbnail when you invoke your function.

AWS Management Console

To create the Amazon S3 buckets (console)

Open the Amazon S3 console and choose the General purpose buckets page.
Choose the AWS Region closest to your geographic location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket.
Under General configuration, do the following:
For Bucket type, make sure General purpose is selected.
For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
Leave all other options set to their default values and choose Create bucket.
Repeat steps 1 to 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI

To create the Amazon S3 buckets (AWS CLI)

Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographic location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the Region you chose.

Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console. To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

AWS Management Console

To upload a test image to your source bucket (console)

Open the Buckets page of the Amazon S3 console.
Choose the source bucket you created in the previous step.
Choose Upload.
Choose Add files and use the file selector to choose the object you want to upload.
Choose Open, then choose Upload.

AWS CLI

To upload a test image to your source bucket (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the file name of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

AWS Management Console

To create the policy (console)

Open the Policies page of the AWS Identity and Access Management (IAM) console.
Choose Create policy.
Choose the JSON tab, and then paste the following custom policy into the JSON editor.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

Choose Next.
Under Policy details, for Policy name, enter LambdaS3Policy.
Choose Create policy.
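Before attaching the policy, you can sanity-check the document with the Python standard library. This only verifies the JSON shape and the actions it lists, not the effect of the permissions.

```python
import json

# The same policy document used in this tutorial.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
     "Resource": "arn:aws:logs:*:*:*"},
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
    {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"}
  ]
}
""")

# Collect every action the policy grants.
actions = {a for stmt in policy["Statement"] for a in stmt["Action"]}
print(sorted(actions))
```

The output should include the two S3 actions (GetObject, PutObject) and the three CloudWatch Logs actions the function needs.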
AWS CLI

To create the policy (AWS CLI)

Save the following JSON in a file named policy.json.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:GetObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:PutObject"
            ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

From the directory in which you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to the Amazon S3 buckets, you attach the permissions policy you created in the previous step.

AWS Management Console

To create an execution role and attach your permissions policy (console)

Open the Roles page of the IAM console.
Choose Create role.
For Trusted entity type, choose AWS service, and for Use case, choose Lambda.
Choose Next.
Add the permissions policy you created in the previous step by doing the following:
In the policy search box, enter LambdaS3Policy.
In the search results, select the check box for LambdaS3Policy.
Choose Next.
Under Role details, for Role name, enter LambdaS3Role.
Choose Create role.

AWS CLI

To create an execution role and attach your permissions policy (AWS CLI)

Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": {
                "Service": "lambda.amazonaws.com"
            },
            "Action": "sts:AssumeRole"
        }
    ]
}

From the directory in which you saved the JSON trust policy document, run the following CLI command to create the execution role.

aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json

To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number.

aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy

Create the function deployment package

To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

Node.js

To create the deployment package (Node.js)

Create a directory named lambda-s3 for your function code and dependencies and navigate into it.

mkdir lambda-s3
cd lambda-s3

Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter.

npm init

Save the following function code in a file named index.mjs. Be sure to replace us-east-1 with the AWS Region in which you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
    // Read options from the event parameter and get the source bucket
    console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
    const srcBucket = event.Records[0].s3.bucket.name;

    // Object key may have spaces or unicode non-ASCII characters
    const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
    const dstBucket = srcBucket + "-resized";
    const dstKey = "resized-" + srcKey;

    // Infer the image type from the file suffix
    const typeMatch = srcKey.match(/\.([^.]*)$/);
    if (!typeMatch) {
        console.log("Could not determine the image type.");
        return;
    }

    // Check that the image type is supported
    const imageType = typeMatch[1].toLowerCase();
    if (imageType != "jpg" && imageType != "png") {
        console.log(`Unsupported image type: ${imageType}`);
        return;
    }

    // Get the image from the source bucket. GetObjectCommand returns a stream.
    try {
        const params = {
            Bucket: srcBucket,
            Key: srcKey
        };
        var response = await s3.send(new GetObjectCommand(params));
        var stream = response.Body;

        // Convert stream to buffer to pass to sharp resize function.
        if (stream instanceof Readable) {
            var content_buffer = Buffer.concat(await stream.toArray());
        } else {
            throw new Error('Unknown object stream type');
        }
    } catch (error) {
        console.log(error);
        return;
    }

    // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
    const width = 200;

    // Use the sharp module to resize the image and save in a buffer.
    try {
        var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
    } catch (error) {
        console.log(error);
        return;
    }

    // Upload the thumbnail image to the destination bucket
    try {
        const destparams = {
            Bucket: dstBucket,
            Key: dstKey,
            Body: output_buffer,
            ContentType: "image"
        };
        const putResult = await s3.send(new PutObjectCommand(destparams));
    } catch (error) {
        console.log(error);
        return;
    }

    console.log('Successfully resized ' + srcBucket + '/' + srcKey +
        ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command.

zip -r function.zip .

On Windows, use your preferred zip utility to create the .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file.

Python

To create the deployment package (Python)

Save the sample code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip downloads and installs a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package will work in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

On Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file.

You can also create the deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function.

AWS Management Console

To create the function (console)

To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step.

Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 buckets. You can change your Region using the drop-down list at the top of the screen.
Choose Create function.
Choose Author from scratch.
Under Basic information, do the following:
For Function name, enter CreateThumbnail.
For Runtime, choose either Node.js 22.x or Python 3.12 according to the language you chose for your function.
For Architecture, choose x86_64.
In the Change default execution role tab, do the following:
Expand the tab, then choose Use an existing role.
Select the LambdaS3Role you created earlier.
Choose Create function.

To upload the function code (console)

In the Code source pane, choose Upload from.
Choose .zip file.
Choose Upload. In the file selector, select your .zip file and choose Open.
Choose Save.

AWS CLI

To create the function (AWS CLI)

Run the CLI command for your chosen language. For the role parameter, be sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the Region in which you created your Amazon S3 buckets.

For Node.js, run the following command from the directory containing your function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.14 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to invoke the function

For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using the console or the AWS CLI.

Important: This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account.

AWS Management Console

To configure the Amazon S3 trigger (console)

Open the Functions page of the Lambda console and choose your function (CreateThumbnail).
Choose Add trigger.
Choose S3.
Under Bucket, select your source bucket.
Under Event types, select All object create events.
Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
You can learn more about recursive invocation patterns in Lambda by reading Recursive patterns that cause run-away Lambda functions on Serverless Land.
Choose Add.

When you create a trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you select permission to invoke your function.

AWS CLI

To configure the Amazon S3 trigger (AWS CLI)

For your Amazon S3 source bucket to invoke the function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name.

aws lambda add-permission --function-name CreateThumbnail \
--principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
--source-account 123456789012

The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.

Note: Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN).

Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and Region.
{
    "LambdaFunctionConfigurations": [
        {
            "Id": "CreateThumbnailEventConfiguration",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail",
            "Events": [ "s3:ObjectCreated:Put" ]
        }
    ]
}

Run the following CLI command to apply the notification settings in the JSON file you created to your source bucket. Replace amzn-s3-demo-source-bucket with the name of your own source bucket.

aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \
--notification-configuration file://notification.json

To learn more about the put-bucket-notification-configuration command and the notification-configuration option, see put-bucket-notification-configuration in the AWS CLI Command Reference.

Test your Lambda function with a dummy event

Before testing your whole setup by adding an image file to your Amazon S3 source bucket, test that your Lambda function is working correctly by invoking it with a dummy event. An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to your function contains information such as the bucket name, bucket ARN, and object key.

AWS Management Console

To test your Lambda function with a dummy event (console)

Open the Functions page of the Lambda console and choose your function (CreateThumbnail).
Choose the Test tab.
To create your test event, in the Test event pane, do the following:
Under Test event action, select Create new event.
For Event name, enter myTestEvent.
For Template, select S3 Put.
Replace the values for the following parameters with your own values.
For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets.
For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket.
For key, replace test%2Fkey with the file name of the test object you uploaded to your source bucket in the step Upload a test image to your source bucket.

{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "amzn-s3-demo-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}

Choose Save.
In the Test event pane, choose Test.
To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following:
Open the Buckets page of the Amazon S3 console.
Choose your target bucket and confirm that your resized file is listed in the Objects pane.

AWS CLI

To test your Lambda function with a dummy event (AWS CLI)

Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values:
For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets.
For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket.
For key, replace test%2Fkey with the file name of the test object you uploaded to your source bucket in the step Upload a test image to your source bucket.

{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": {
                "principalId": "EXAMPLE"
            },
            "requestParameters": {
                "sourceIPAddress": "127.0.0.1"
            },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "amzn-s3-demo-bucket",
                    "ownerIdentity": {
                        "principalId": "EXAMPLE"
                    },
                    "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}

From the directory in which you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking Lambda functions.

aws lambda invoke --function-name CreateThumbnail \
--invocation-type RequestResponse --cli-binary-format raw-in-base64-out \
--payload file://dummyS3Event.json outputfile.txt

The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options.

Verify that your function has created a thumbnail version of your image and stored it in your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

You should see output similar to the following.
Key Parameter menunjukkan nama file file gambar Anda yang diubah ukurannya. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-06T21:40:07+00:00", "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"", "Size": 2633, "StorageClass": "STANDARD" } ] } Uji fungsi Anda menggunakan pemicu Amazon S3 Sekarang setelah Anda mengonfirmasi bahwa fungsi Lambda Anda beroperasi dengan benar, Anda siap untuk menguji penyiapan lengkap Anda dengan menambahkan file gambar ke bucket sumber Amazon S3 Anda. Saat Anda menambahkan gambar ke bucket sumber, fungsi Lambda Anda akan dipanggil secara otomatis. Fungsi Anda membuat versi file yang diubah ukurannya dan menyimpannya di bucket target Anda. Konsol Manajemen AWS Untuk menguji fungsi Lambda Anda menggunakan pemicu Amazon S3 (konsol) Untuk mengunggah gambar ke bucket Amazon S3 Anda, lakukan hal berikut: Buka halaman Bucket di konsol Amazon S3 dan pilih bucket sumber Anda. Pilih Unggah . Pilih Tambahkan file dan gunakan pemilih file untuk memilih file gambar yang ingin Anda unggah. Objek gambar Anda dapat berupa file.jpg atau.png. Pilih Buka , lalu pilih Unggah . Verifikasi bahwa Lambda telah menyimpan versi file gambar yang diubah ukurannya di bucket target dengan melakukan hal berikut: Arahkan kembali ke halaman Bucket di konsol Amazon S3 dan pilih bucket tujuan Anda. Di panel Objects , Anda sekarang akan melihat dua file gambar yang diubah ukurannya, satu dari setiap pengujian fungsi Lambda Anda. Untuk mengunduh gambar yang diubah ukurannya, pilih file, lalu pilih Unduh . AWS CLI Untuk menguji fungsi Lambda Anda menggunakan pemicu Amazon S3 ()AWS CLI Dari direktori yang berisi gambar yang ingin Anda unggah, jalankan perintah CLI berikut. Ganti --bucket parameter dengan nama bucket sumber Anda. Untuk --body parameter --key dan, gunakan nama file gambar pengujian Anda. Gambar uji Anda dapat berupa file.jpg atau.png. 
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg

Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files.

{
    "Contents": [
        {
            "Key": "resized-HappyFace.jpg",
            "LastModified": "2023-06-07T00:15:50+00:00",
            "ETag": "\"7781a43e765a8301713f533d70968a1e\"",
            "Size": 2763,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "resized-SmileyFace.jpg",
            "LastModified": "2023-06-07T00:13:18+00:00",
            "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"",
            "Size": 1245,
            "StorageClass": "STANDARD"
        }
    ]
}

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function: Open the Functions page of the Lambda console. Select the function that you created. Choose Actions, Delete. Type confirm in the text input field, then choose Delete.

To delete the policy that you created: Open the Policies page of the IAM console. Select the policy that you created (LambdaS3Policy). Choose Policy actions, Delete. Choose Delete.

To delete the execution role: Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete. Enter the name of the role in the text input field, then choose Delete.

To delete the S3 buckets: Open the Amazon S3 console. Select the bucket that you created. Choose Delete. Enter the name of the bucket in the text input field. Choose Delete bucket.
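Earlier in this section, list-objects-v2 is used to confirm that the thumbnails exist. Because the response is plain JSON, you can also check it programmatically. A minimal sketch in Python — the resized_keys helper and the inline sample response are illustrative, not part of the tutorial:

```python
import json

# Sample response in the shape returned by `aws s3api list-objects-v2`
# (values taken from the tutorial's example output).
response = """
{
    "Contents": [
        {"Key": "resized-HappyFace.jpg", "Size": 2763, "StorageClass": "STANDARD"},
        {"Key": "resized-SmileyFace.jpg", "Size": 1245, "StorageClass": "STANDARD"}
    ]
}
"""

def resized_keys(list_objects_json):
    """Return the object keys from a list-objects-v2 JSON response."""
    contents = json.loads(list_objects_json).get("Contents", [])
    return [obj["Key"] for obj in contents]

keys = resized_keys(response)
missing = {"resized-HappyFace.jpg", "resized-SmileyFace.jpg"} - set(keys)
print("all thumbnails present" if not missing else "missing: " + str(missing))
```

You could feed the real response into a script like this by redirecting the output of the list-objects-v2 command to a file, if you want a repeatable check rather than a visual inspection.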
| 2026-01-13T09:30:35
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-example-prereqs | Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda Developer Guide

Tutorial: Using an Amazon S3 trigger to create thumbnail images

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

To complete this tutorial, you carry out the following steps:

Create source and destination Amazon S3 buckets and upload a sample image.
Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.
Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.
Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you'll learn how to use Lambda to carry out a file-processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
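As the function code later in the tutorial shows, the function derives its output location from its input: it writes to a bucket named after the source bucket with a -resized suffix, and prefixes the object key with resized-. A minimal sketch of that naming convention — the destination_for helper is illustrative, not part of the tutorial code:

```python
def destination_for(src_bucket, src_key):
    """Map a source bucket/key to the thumbnail's bucket/key, following
    the naming convention used by the tutorial's function code."""
    dst_bucket = src_bucket + "-resized"
    dst_key = "resized-" + src_key
    return dst_bucket, dst_key

bucket, key = destination_for("amzn-s3-demo-source-bucket", "HappyFace.jpg")
print(bucket)  # amzn-s3-demo-source-bucket-resized
print(key)     # resized-HappyFace.jpg
```

Writing to a differently named bucket is what keeps the trigger from invoking the function on its own output, a point the tutorial returns to when configuring the trigger.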
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function.

Topics: Prerequisites · Create two Amazon S3 buckets · Upload a test image to your source bucket · Create a permissions policy · Create an execution role · Create the function deployment package · Create the Lambda function · Configure Amazon S3 to invoke the function · Test your Lambda function with a dummy event · Test your function using the Amazon S3 trigger · Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language that you want to use. If you haven't installed the AWS Command Line Interface yet, follow the steps in Installing or updating the latest version of the AWS CLI to install it.

This tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager.

Note: On Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First, create two Amazon S3 buckets. The first bucket is the source bucket where you will upload your images. Lambda uses the second bucket to save the resized thumbnail when you invoke your function.

AWS Management Console — To create an Amazon S3 bucket (console): Open the Amazon S3 console and select the General purpose buckets page. Choose the AWS Region closest to your geographical location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket. Under General configuration, do the following: For Bucket type, make sure General purpose is selected. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket. Repeat steps 1 to 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI — To create the Amazon S3 buckets (AWS CLI): Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the Region you chose.

Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console.
To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

AWS Management Console — To upload a test image to your source bucket (console): Open the Buckets page of the Amazon S3 console. Select the source bucket you created in the previous step. Choose Upload. Choose Add files and use the file selector to choose the object you want to upload. Choose Open, then choose Upload.

AWS CLI — To upload a test image to your source bucket (AWS CLI): From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the name of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

AWS Management Console — To create the policy (console): Open the Policies page of the AWS Identity and Access Management (IAM) console. Choose Create policy. Choose the JSON tab, then paste the following custom policy into the JSON editor.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:GetObject" ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:PutObject" ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

Choose Next. Under Policy details, for Policy name, enter LambdaS3Policy. Choose Create policy.
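Before attaching the policy, you can sanity-check the JSON document, for example to confirm that it allows exactly the CloudWatch Logs and Amazon S3 actions listed above and nothing else. A small illustrative check, not part of the tutorial:

```python
import json

# The permissions policy from the tutorial, as a JSON string.
policy = json.loads("""
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
     "Resource": "arn:aws:logs:*:*:*"},
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
    {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"}
  ]
}
""")

# Collect every action granted by an Allow statement.
granted = set()
for statement in policy["Statement"]:
    if statement["Effect"] == "Allow":
        granted.update(statement["Action"])

expected = {"logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream",
            "s3:GetObject", "s3:PutObject"}
print(granted == expected)  # True
```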
AWS CLI — To create the policy (AWS CLI): Save the following JSON in a file named policy.json.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:GetObject" ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:PutObject" ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

From the directory where you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to the Amazon S3 buckets, you attach the permissions policy that you created in the previous step.

AWS Management Console — To create an execution role and attach your permissions policy (console): Open the Roles page of the IAM console. Choose Create role. For Trusted entity type, choose AWS service, and for Use case, choose Lambda. Choose Next. Add the permissions policy you created in the previous step by doing the following: In the policy search box, enter LambdaS3Policy. In the search results, select the check box for LambdaS3Policy. Choose Next. Under Role details, for Role name, enter LambdaS3Role. Choose Create role.

AWS CLI — To create an execution role and attach your permissions policy (AWS CLI): Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by granting the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.
AssumeRole { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } Dari direktori tempat Anda menyimpan dokumen kebijakan kepercayaan JSON, jalankan perintah CLI berikut untuk membuat peran eksekusi. aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json Untuk melampirkan kebijakan izin yang Anda buat pada langkah sebelumnya, jalankan perintah CLI berikut. Ganti Akun AWS nomor di ARN polis dengan nomor akun Anda sendiri. aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam:: 123456789012 :policy/LambdaS3Policy Buat paket penerapan fungsi Untuk membuat fungsi Anda, Anda membuat paket deployment yang berisi kode fungsi dan dependensinya. Untuk CreateThumbnail fungsi ini, kode fungsi Anda menggunakan pustaka terpisah untuk mengubah ukuran gambar. Ikuti instruksi untuk bahasa yang Anda pilih untuk membuat paket penyebaran yang berisi pustaka yang diperlukan. Node.js Untuk membuat paket penyebaran (Node.js) Buat direktori bernama lambda-s3 untuk kode fungsi dan dependensi Anda dan navigasikan ke dalamnya. mkdir lambda-s3 cd lambda-s3 Buat proyek Node.js baru dengan npm . Untuk menerima opsi default yang disediakan dalam pengalaman interaktif, tekan Enter . npm init Simpan kode fungsi berikut dalam file bernama index.mjs . Pastikan untuk mengganti us-east-1 dengan Wilayah AWS di mana Anda membuat ember sumber dan tujuan Anda sendiri. 
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;

    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command.

zip -r function.zip .

On Windows, use your preferred zip utility to create a .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file.

Python — To create the deployment package (Python): Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip downloads and installs a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package works in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

On Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file.

You can also create your deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using either the AWS CLI or the Lambda console. Follow the instructions for your chosen language to create the function.

AWS Management Console — To create the function (console): To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip file you created in the previous step.

Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 buckets. You can change your Region using the drop-down list at the top of the screen. Choose Create function. Choose Author from scratch. In the Basic information section, do the following: For Function name, enter CreateThumbnail. For Runtime, choose either Node.js 22.x or Python 3.12 according to the language you chose for your function. For Architecture, choose x86_64. In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role. Select the LambdaS3Role you created earlier. Choose Create function.

To upload the function code (console): In the Code source pane, choose Upload from. Choose .zip file. Choose Upload. In the file selector, select your .zip file and choose Open. Choose Save.

AWS CLI — To create the function (AWS CLI): Run the CLI command for your chosen language. For the role parameter, make sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the Region in which you created your Amazon S3 buckets.

For Node.js, run the following command from the directory containing your function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.14 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to invoke the function

For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using either the console or the AWS CLI.

Important: This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account.

AWS Management Console — To configure the Amazon S3 trigger (console): Open the Functions page of the Lambda console and choose your function (CreateThumbnail). Choose Add trigger. Choose S3. Under Bucket, select your source bucket. Under Event types, select All object create events. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
You can learn more about recursive invocation patterns in Lambda by reading Recursive patterns that cause run-away Lambda functions on Serverless Land. Choose Add.

When you create a trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you select permission to invoke your function.

AWS CLI — To configure the Amazon S3 trigger (AWS CLI): For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name.

aws lambda add-permission --function-name CreateThumbnail \
--principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
--source-account 123456789012

The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.

Note: Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN).

Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and Region.
{ "LambdaFunctionConfigurations": [ { "Id": "CreateThumbnailEventConfiguration", "LambdaFunctionArn": "arn:aws:lambda: us-east-1:123456789012 :function:CreateThumbnail", "Events": [ "s3:ObjectCreated:Put" ] } ] } Jalankan perintah CLI berikut untuk menerapkan pengaturan notifikasi dalam file JSON yang Anda buat ke bucket sumber Anda. Ganti amzn-s3-demo-source-bucket dengan nama bucket sumber Anda sendiri. aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \ --notification-configuration file://notification.json Untuk mempelajari lebih lanjut tentang put-bucket-notification-configuration perintah dan notification-configuration opsi, lihat put-bucket-notification-configuration di Referensi Perintah AWS CLI . Uji fungsi Lambda Anda dengan acara dummy Sebelum menguji seluruh penyiapan dengan menambahkan file gambar ke bucket sumber Amazon S3, Anda menguji apakah fungsi Lambda berfungsi dengan benar dengan memanggilnya dengan acara dummy. Peristiwa di Lambda adalah dokumen berformat JSON yang berisi data untuk diproses fungsi Anda. Saat fungsi Anda dipanggil oleh Amazon S3, peristiwa yang dikirim ke fungsi berisi informasi seperti nama bucket, ARN bucket, dan kunci objek. Konsol Manajemen AWS Untuk menguji fungsi Lambda Anda dengan acara dummy (konsol) Buka halaman Fungsi konsol Lambda dan pilih fungsi Anda () CreateThumbnail . Pilih tab Uji . Untuk membuat acara pengujian, di panel acara Uji , lakukan hal berikut: Di bawah Uji tindakan peristiwa , pilih Buat acara baru . Untuk Nama peristiwa , masukkan myTestEvent . Untuk Template , pilih S3 Put . Ganti nilai untuk parameter berikut dengan nilai Anda sendiri. Untuk awsRegion , ganti us-east-1 dengan bucket Amazon S3 yang Wilayah AWS Anda buat. Untuk name , ganti amzn-s3-demo-bucket dengan nama bucket sumber Amazon S3 Anda sendiri. Untuk key , ganti test%2Fkey dengan nama file objek pengujian yang Anda unggah ke bucket sumber di langkah tersebut. 
{
    "Records": [
        {
            "eventVersion": "2.0",
            "eventSource": "aws:s3",
            "awsRegion": "us-east-1",
            "eventTime": "1970-01-01T00:00:00.000Z",
            "eventName": "ObjectCreated:Put",
            "userIdentity": { "principalId": "EXAMPLE" },
            "requestParameters": { "sourceIPAddress": "127.0.0.1" },
            "responseElements": {
                "x-amz-request-id": "EXAMPLE123456789",
                "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
            },
            "s3": {
                "s3SchemaVersion": "1.0",
                "configurationId": "testConfigRule",
                "bucket": {
                    "name": "amzn-s3-demo-bucket",
                    "ownerIdentity": { "principalId": "EXAMPLE" },
                    "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
                },
                "object": {
                    "key": "test%2Fkey",
                    "size": 1024,
                    "eTag": "0123456789abcdef0123456789abcdef",
                    "sequencer": "0A1B2C3D4E5F678901"
                }
            }
        }
    ]
}

Choose Save. In the Test event pane, choose Test.

To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console. Choose your target bucket and confirm that the resized file is listed in the Objects pane.

AWS CLI — To test your Lambda function with a dummy event (AWS CLI): Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values: For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets. For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket. For key, replace test%2Fkey with the filename of the test object you uploaded to your source bucket in the step Upload a test image to your source bucket.
Unggah gambar uji ke bucket sumber Anda { "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1" , "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket" , "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey" , "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Dari direktori tempat Anda menyimpan dummyS3Event.json file Anda, panggil fungsi dengan menjalankan perintah CLI berikut. Perintah ini memanggil fungsi Lambda Anda secara sinkron dengan RequestResponse menentukan sebagai nilai parameter tipe pemanggilan. Untuk mempelajari lebih lanjut tentang pemanggilan sinkron dan asinkron, lihat Memanggil fungsi Lambda. aws lambda invoke --function-name CreateThumbnail \ --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \ --payload file://dummyS3Event.json outputfile.txt cli-binary-formatOpsi ini diperlukan jika Anda menggunakan versi 2 dari AWS CLI. Untuk menjadikan ini pengaturan default, jalankan aws configure set cli-binary-format raw-in-base64-out . Untuk informasi selengkapnya, lihat opsi baris perintah global yang AWS CLI didukung . Verifikasi bahwa fungsi Anda telah membuat versi thumbnail gambar Anda dan menyimpannya ke bucket Amazon S3 target Anda. Jalankan perintah CLI berikut, ganti amzn-s3-demo-source-bucket-resized dengan nama bucket tujuan Anda sendiri. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized Anda akan melihat output seperti yang berikut ini. 
The Key parameter shows the filename of your resized image file.

{
    "Contents": [
        {
            "Key": "resized-HappyFace.jpg",
            "LastModified": "2023-06-06T21:40:07+00:00",
            "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"",
            "Size": 2633,
            "StorageClass": "STANDARD"
        }
    ]
}

Test your function using the Amazon S3 trigger

Now that you've confirmed that your Lambda function is operating correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add an image to the source bucket, your Lambda function is automatically invoked. Your function creates a resized version of the file and stores it in your target bucket.

AWS Management Console

To test your Lambda function using the Amazon S3 trigger (console)

1. To upload an image to your Amazon S3 bucket, do the following:
   a. Open the Buckets page of the Amazon S3 console and choose your source bucket.
   b. Choose Upload.
   c. Choose Add files and use the file selector to choose the image file you want to upload. Your image object can be a .jpg or .png file.
   d. Choose Open, then choose Upload.
2. Verify that Lambda has saved a resized version of your image file in your target bucket by doing the following:
   a. Navigate back to the Buckets page of the Amazon S3 console and choose your destination bucket.
   b. In the Objects pane, you should now see two resized image files, one from each of your tests of the Lambda function. To download a resized image, choose the file, then choose Download.

AWS CLI

To test your Lambda function using the Amazon S3 trigger (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. Your test image can be a .jpg or .png file.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg

Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files.

{
    "Contents": [
        {
            "Key": "resized-HappyFace.jpg",
            "LastModified": "2023-06-07T00:15:50+00:00",
            "ETag": "\"7781a43e765a8301713f533d70968a1e\"",
            "Size": 2763,
            "StorageClass": "STANDARD"
        },
        {
            "Key": "resized-SmileyFace.jpg",
            "LastModified": "2023-06-07T00:13:18+00:00",
            "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"",
            "Size": 1245,
            "StorageClass": "STANDARD"
        }
    ]
}

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

1. Open the Functions page of the Lambda console.
2. Choose the function that you created.
3. Choose Actions, Delete.
4. Type confirm in the text input field and choose Delete.

To delete the policy that you created

1. Open the Policies page of the IAM console.
2. Choose the policy that you created (LambdaS3Policy).
3. Choose Policy actions, Delete.
4. Choose Delete.

To delete the execution role

1. Open the Roles page of the IAM console.
2. Choose the execution role that you created.
3. Choose Delete.
4. Enter the name of the role in the text input field and choose Delete.

To delete the S3 buckets

1. Open the Amazon S3 console.
2. Choose the bucket you created.
3. Choose Delete.
4. Enter the name of the bucket in the text input field.
5. Choose Delete bucket.
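As an illustrative aside (not part of the official tutorial steps), the list-objects-v2 verification shown above can also be done programmatically rather than by eye. The sketch below assumes you already have the response JSON; the function name and the list of uploaded filenames are invented here for the example:

```python
import json

def missing_thumbnails(uploaded_files, listing):
    """Return the uploads whose 'resized-' counterpart is absent from a
    list-objects-v2 response for the destination bucket."""
    found = {obj["Key"] for obj in listing.get("Contents", [])}
    return [f for f in uploaded_files if "resized-" + f not in found]

# A sample response in the shape shown above (fields trimmed for brevity)
listing = json.loads("""
{
  "Contents": [
    {"Key": "resized-HappyFace.jpg", "Size": 2763, "StorageClass": "STANDARD"},
    {"Key": "resized-SmileyFace.jpg", "Size": 1245, "StorageClass": "STANDARD"}
  ]
}
""")

missing = missing_thumbnails(["HappyFace.jpg", "SmileyFace.jpg"], listing)
print(missing)  # an empty list means every upload got a thumbnail
```

In practice you would feed this the output of the aws s3api list-objects-v2 command, so a non-empty result immediately names which upload the function failed to process.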
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-dummy-test | Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda

AWS Lambda Documentation, Developer Guide

(Note from the source page: this translation was machine-generated; if the translated content conflicts with the original English version, the English version prevails.)

Tutorial: Using an Amazon S3 trigger to create thumbnail images

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

To complete this tutorial, you carry out the following steps:

1. Create source and destination Amazon S3 buckets and upload a sample image.
2. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket.
3. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket.
4. Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you'll learn how to use Lambda to run a file-processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
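Before diving into the steps, it can help to see the function's core bookkeeping as plain logic. The sketch below is illustrative only (no AWS calls; the helper names are invented here): it derives the destination bucket and object key the same way the tutorial's function code does, and computes a thumbnail height for the fixed 200-pixel width used later in the tutorial, preserving aspect ratio (height rounds down in this sketch):

```python
from urllib.parse import unquote_plus

THUMBNAIL_WIDTH = 200  # the tutorial's function resizes to a fixed width of 200

def destination_for(src_bucket, raw_key):
    """Map a source bucket/key from an S3 event to the thumbnail's location.
    Object keys arrive URL-encoded in S3 events, so decode them first."""
    key = unquote_plus(raw_key)
    return src_bucket + "-resized", "resized-" + key

def thumbnail_size(width, height, target_width=THUMBNAIL_WIDTH):
    """Scale an image down to target_width, keeping the aspect ratio."""
    return target_width, (height * target_width) // width

dst_bucket, dst_key = destination_for("amzn-s3-demo-source-bucket", "HappyFace.jpg")
print(dst_bucket, dst_key)       # amzn-s3-demo-source-bucket-resized resized-HappyFace.jpg
print(thumbnail_size(400, 300))  # (200, 150)
```

The actual resizing is done by an image library (sharp for Node.js, Pillow for Python) in the function code shown later; this sketch only captures the naming and sizing conventions the rest of the tutorial relies on.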
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function.

Topics: Prerequisites | Create two Amazon S3 buckets | Upload a test image to your source bucket | Create a permissions policy | Create an execution role | Create the function deployment package | Create the Lambda function | Configure Amazon S3 to invoke the function | Test your Lambda function with a dummy event | Test your function using the Amazon S3 trigger | Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. If you haven't installed it yet, follow the steps in Installing or updating the latest version of the AWS CLI.

For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language you want to use.

This tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager.

Note: On Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First, create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to save the resized thumbnail when you invoke your function.

AWS Management Console

To create the Amazon S3 buckets (console)

1. Open the Amazon S3 console and select the General purpose buckets page.
2. Choose the AWS Region closest to your geographical location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
3. Choose Create bucket.
4. Under General configuration, do the following:
   a. For Bucket type, ensure General purpose is selected.
   b. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-).
5. Leave all other options set to their default values and choose Create bucket.
6. Repeat steps 1 to 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI

To create the Amazon S3 buckets (AWS CLI)

1. Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the Region you chose.

2. Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console.
To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be any JPG or PNG file you choose.

AWS Management Console

To upload a test image to your source bucket (console)

1. Open the Buckets page of the Amazon S3 console.
2. Choose the source bucket you created in the previous step.
3. Choose Upload.
4. Choose Add files and use the file selector to choose the object you want to upload.
5. Choose Open, then choose Upload.

AWS CLI

To upload a test image to your source bucket (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

AWS Management Console

To create the policy (console)

1. Open the Policies page of the AWS Identity and Access Management (IAM) console.
2. Choose Create policy.
3. Choose the JSON tab, and then paste the following custom policy into the JSON editor.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:GetObject" ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:PutObject" ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

4. Choose Next.
5. Under Policy details, for Policy name, enter LambdaS3Policy.
6. Choose Create policy.
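As an optional aside (not part of the official steps), a policy document like the one above can be generated and sanity-checked with a short script before you upload it, which helps avoid JSON typos. The statement contents below mirror the tutorial's policy; the helper function is invented here for illustration:

```python
import json

# The permissions the tutorial's policy grants: CloudWatch Logs writes,
# plus S3 object reads and writes.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
            "Resource": "arn:aws:logs:*:*:*",
        },
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"},
    ],
}

def granted_actions(doc):
    """Collect every action allowed by a policy document."""
    actions = set()
    for stmt in doc["Statement"]:
        if stmt["Effect"] == "Allow":
            actions.update(stmt["Action"])
    return actions

# Write the document to policy.json for use with the aws iam create-policy command
with open("policy.json", "w") as f:
    json.dump(policy, f, indent=4)

print(sorted(granted_actions(policy)))
```

Serializing the document with json.dump guarantees valid JSON, and the quick action check confirms the function will be able to read source objects, write thumbnails, and emit logs.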
AWS CLI

To create the policy (AWS CLI)

1. Save the following JSON in a file named policy.json.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:PutLogEvents",
                "logs:CreateLogGroup",
                "logs:CreateLogStream"
            ],
            "Resource": "arn:aws:logs:*:*:*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:GetObject" ],
            "Resource": "arn:aws:s3:::*/*"
        },
        {
            "Effect": "Allow",
            "Action": [ "s3:PutObject" ],
            "Resource": "arn:aws:s3:::*/*"
        }
    ]
}

2. From the directory where you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to an Amazon S3 bucket, you attach the permissions policy you created in the previous step.

AWS Management Console

To create an execution role and attach your permissions policy (console)

1. Open the Roles page of the IAM console.
2. Choose Create role.
3. For Trusted entity type, select AWS service, and for Use case, select Lambda.
4. Choose Next.
5. Add the permissions policy you created in the previous step by doing the following:
   a. In the policy search box, enter LambdaS3Policy.
   b. In the search results, select the check box for LambdaS3Policy.
   c. Choose Next.
6. Under Role details, for Role name, enter LambdaS3Role.
7. Choose Create role.

AWS CLI

To create an execution role and attach your permissions policy (AWS CLI)

1. Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Principal": { "Service": "lambda.amazonaws.com" },
            "Action": "sts:AssumeRole"
        }
    ]
}

2. From the directory where you saved the JSON trust policy document, run the following CLI command to create the execution role.

aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json

3. To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number.

aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy

Create the function deployment package

To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

Node.js

To create the deployment package (Node.js)

1. Create a directory named lambda-s3 for your function code and dependencies and navigate into it.

mkdir lambda-s3
cd lambda-s3

2. Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter.

npm init

3. Save the following function code in a file named index.mjs. Make sure to replace us-east-1 with the AWS Region where you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;

    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

4. In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

5. Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command.

zip -r function.zip .

On Windows, use your preferred zip utility to create a .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file.

Python

To create the deployment package (Python)

1. Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

2. In the same directory where you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they're included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip downloads and installs a version of Pillow containing pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package works in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

3. Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

On Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file.

You can also create the deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using either the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function.

AWS Management Console

To create the function (console)

To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step.

1. Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 buckets in. You can change your Region using the drop-down list at the top of the screen.
2. Choose Create function.
3. Choose Author from scratch.
4. Under Basic information, do the following:
   a. For Function name, enter CreateThumbnail.
   b. For Runtime, choose either Node.js 22.x or Python 3.12, according to the language you chose for your function.
   c. For Architecture, choose x86_64.
5. In the Change default execution role tab, do the following:
   a. Expand the tab, then choose Use an existing role.
   b. Select the LambdaS3Role you created earlier.
6. Choose Create function.

To upload the function code (console)

1. In the Code source pane, choose Upload from.
2. Choose .zip file.
3. Choose Upload. In the file selector, select your .zip file and choose Open.
4. Choose Save.

AWS CLI

To create the function (AWS CLI)

Run the CLI command for your chosen language. For the role parameter, make sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the AWS Region where you created your Amazon S3 buckets.

For Node.js, run the following command from the directory containing your function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.14 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to invoke the function

For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using the console or the AWS CLI.

Important: This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account.

AWS Management Console

To configure the Amazon S3 trigger (console)

1. Open the Functions page of the Lambda console and choose your function (CreateThumbnail).
2. Choose Add trigger.
3. Choose S3.
4. Under Bucket, select your source bucket.
5. Under Event types, select All object create events.
6. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. You can learn more about recursive invocation patterns in Lambda by reading Recursive patterns that cause run-away Lambda functions on Serverless Land.
7. Choose Add.

When you create the trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you selected permission to invoke your function.

AWS CLI

To configure the Amazon S3 trigger (AWS CLI)

For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name.

aws lambda add-permission --function-name CreateThumbnail \
--principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \
--source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \
--source-account 123456789012

The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket.

Note: Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN).

Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and Region.
{ "LambdaFunctionConfigurations": [ { "Id": "CreateThumbnailEventConfiguration", "LambdaFunctionArn": "arn:aws:lambda: us-east-1:123456789012 :function:CreateThumbnail", "Events": [ "s3:ObjectCreated:Put" ] } ] } Jalankan perintah CLI berikut untuk menerapkan pengaturan notifikasi dalam file JSON yang Anda buat ke bucket sumber Anda. Ganti amzn-s3-demo-source-bucket dengan nama bucket sumber Anda sendiri. aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \ --notification-configuration file://notification.json Untuk mempelajari lebih lanjut tentang put-bucket-notification-configuration perintah dan notification-configuration opsi, lihat put-bucket-notification-configuration di Referensi Perintah AWS CLI . Uji fungsi Lambda Anda dengan acara dummy Sebelum menguji seluruh penyiapan dengan menambahkan file gambar ke bucket sumber Amazon S3, Anda menguji apakah fungsi Lambda berfungsi dengan benar dengan memanggilnya dengan acara dummy. Peristiwa di Lambda adalah dokumen berformat JSON yang berisi data untuk diproses fungsi Anda. Saat fungsi Anda dipanggil oleh Amazon S3, peristiwa yang dikirim ke fungsi berisi informasi seperti nama bucket, ARN bucket, dan kunci objek. Konsol Manajemen AWS Untuk menguji fungsi Lambda Anda dengan acara dummy (konsol) Buka halaman Fungsi konsol Lambda dan pilih fungsi Anda () CreateThumbnail . Pilih tab Uji . Untuk membuat acara pengujian, di panel acara Uji , lakukan hal berikut: Di bawah Uji tindakan peristiwa , pilih Buat acara baru . Untuk Nama peristiwa , masukkan myTestEvent . Untuk Template , pilih S3 Put . Ganti nilai untuk parameter berikut dengan nilai Anda sendiri. Untuk awsRegion , ganti us-east-1 dengan bucket Amazon S3 yang Wilayah AWS Anda buat. Untuk name , ganti amzn-s3-demo-bucket dengan nama bucket sumber Amazon S3 Anda sendiri. Untuk key , ganti test%2Fkey dengan nama file objek pengujian yang Anda unggah ke bucket sumber di langkah tersebut. 
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] }

Choose Save. In the Test event pane, choose Test. To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console. Choose your target bucket and confirm that the resized file is listed in the Objects pane.

AWS CLI

To test your Lambda function with a dummy event (AWS CLI)

Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values: For awsRegion, replace us-east-1 with the AWS Region where you created your Amazon S3 buckets. For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket. For key, replace test%2Fkey with the filename of the test object you uploaded to your source bucket in the step Upload a test image to your source bucket.
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] }

From the directory where you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking a Lambda function.

aws lambda invoke --function-name CreateThumbnail \
--invocation-type RequestResponse --cli-binary-format raw-in-base64-out \
--payload file://dummyS3Event.json outputfile.txt

The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options.

Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

You should see output like the following.
The Key parameter shows the filename of your resized image file.

{ "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-06T21:40:07+00:00", "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"", "Size": 2633, "StorageClass": "STANDARD" } ] }

Test your function using the Amazon S3 trigger

Now that you've confirmed your Lambda function is operating correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add an image to the source bucket, your Lambda function is automatically invoked. Your function creates a resized version of the file and stores it in your target bucket.

AWS Management Console

To test your Lambda function using the Amazon S3 trigger (console)

To upload an image to your Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console and choose your source bucket. Choose Upload. Choose Add files and use the file selector to choose the image file you want to upload. Your image object can be a .jpg or .png file. Choose Open, then choose Upload.

Verify that Lambda has saved a resized version of your image file in your target bucket by doing the following: Navigate back to the Buckets page of the Amazon S3 console and choose your destination bucket. In the Objects pane, you should now see two resized image files, one from each test of your Lambda function. To download your resized image, choose the file, then choose Download.

AWS CLI

To test your Lambda function using the Amazon S3 trigger (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. Your test image can be a .jpg or .png file.
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg

Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket.

aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized

If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files.

{ "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-07T00:15:50+00:00", "ETag": "\"7781a43e765a8301713f533d70968a1e\"", "Size": 2763, "StorageClass": "STANDARD" }, { "Key": "resized-SmileyFace.jpg", "LastModified": "2023-06-07T00:13:18+00:00", "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"", "Size": 1245, "StorageClass": "STANDARD" } ] }

Clean up your resources

You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function: Open the Functions page of the Lambda console. Choose the function that you created. Choose Actions, Delete. Type confirm in the text input field, then choose Delete.

To delete the policy that you created: Open the Policies page of the IAM console. Choose the policy that you created (LambdaS3Policy). Choose Policy actions, Delete. Choose Delete.

To delete the execution role: Open the Roles page of the IAM console. Choose the execution role that you created. Choose Delete. Enter the name of the role in the text input field, then choose Delete.

To delete the S3 buckets: Open the Amazon S3 console. Choose the bucket that you created. Choose Delete. Enter the name of the bucket in the text input field. Choose Delete bucket.
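If you prefer to clean up from the command line, the console steps above have AWS CLI equivalents. The sketch below only assembles and prints the command strings so you can review them before running anything yourself; the resource names (CreateThumbnail, LambdaS3Role, LambdaS3Policy, amzn-s3-demo-source-bucket) are the ones used throughout this tutorial, and the account ID is a placeholder.

```python
# Sketch: assemble the AWS CLI cleanup commands for this tutorial.
# Nothing is executed here; review the printed commands, substitute your
# own account ID and bucket names, then run them yourself.
ACCOUNT_ID = "123456789012"  # placeholder
SOURCE_BUCKET = "amzn-s3-demo-source-bucket"
POLICY_ARN = f"arn:aws:iam::{ACCOUNT_ID}:policy/LambdaS3Policy"

def cleanup_commands():
    """Return the cleanup commands in a safe deletion order."""
    return [
        "aws lambda delete-function --function-name CreateThumbnail",
        f"aws iam detach-role-policy --role-name LambdaS3Role --policy-arn {POLICY_ARN}",
        "aws iam delete-role --role-name LambdaS3Role",
        f"aws iam delete-policy --policy-arn {POLICY_ARN}",
        # Buckets must be emptied before they can be removed
        f"aws s3 rm s3://{SOURCE_BUCKET} --recursive",
        f"aws s3 rb s3://{SOURCE_BUCKET}",
        f"aws s3 rm s3://{SOURCE_BUCKET}-resized --recursive",
        f"aws s3 rb s3://{SOURCE_BUCKET}-resized",
    ]

for cmd in cleanup_commands():
    print(cmd)
```

Printing rather than executing keeps the deletions reviewable; these commands are permanent once run.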
| 2026-01-13T09:30:35 |
https://young-programmers.blogspot.com/2009/07/twitters-doug-williams-visits-my.html#sidebar | Young Programmers Podcast: Twitter's Doug Williams Visits My Programming Class

Young Programmers Podcast: a video podcast for computer programmers in grades 3 and up. We learn about Scratch, Tynker, Alice, Python, Pygame, and Scala, and interview interesting programmers. From professional software developer and teacher Dave Briccetti, and many special guests.

Sunday, July 19, 2009. Twitter's Doug Williams Visits My Programming Class. Twitter's Doug Williams describes how he got started programming. See Twitter's Doug Williams Visits My Programming Class: http://briccetti.blogspot.com/2009/07/twitters-doug-williams-visits-my.html at 9:13 PM. Labels: guest, interview, twitter
| 2026-01-13T09:30:35 |
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#with-s3-tutorial-create-function-package | Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda

AWS Lambda Documentation, Developer Guide

(Translation provided by machine translation. If the translated content conflicts with the original English version, the English version takes precedence.)

Tutorial: Using an Amazon S3 trigger to create thumbnail images

In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to your bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket.

To complete this tutorial, you carry out the following steps: Create source and destination Amazon S3 buckets and upload a sample image. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket. Test your function, first with a dummy event, and then by uploading an image to your source bucket.

By completing these steps, you'll learn how to use Lambda to carry out a file processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function.

Topics
Prerequisites
Create two Amazon S3 buckets
Upload a test image to your source bucket
Create a permissions policy
Create an execution role
Create the function deployment package
Create the Lambda function
Configure Amazon S3 to invoke the function
Test your Lambda function with a dummy event
Test your function using the Amazon S3 trigger
Clean up your resources

Prerequisites

If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface by following the steps in Installing or updating the latest version of the AWS CLI. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language you want to use.

This tutorial requires a command line terminal or shell to run commands. In Linux and macOS, use your preferred shell and package manager.

Note: In Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux.

Create two Amazon S3 buckets

First, create two Amazon S3 buckets. The first bucket is the source bucket you will upload your images to. The second bucket is used by Lambda to store the resized thumbnail when you invoke your function.

AWS Management Console

To create the Amazon S3 buckets (console)

Open the Amazon S3 console and select the General purpose buckets page. Choose the AWS Region closest to your geographical location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket. Under General configuration, do the following: For Bucket type, ensure General purpose is selected. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket.

Repeat steps 1 through 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created.

AWS CLI

To create the Amazon S3 buckets (AWS CLI)

Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the Region you chose.

Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region you used to create your source bucket.

aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \
--create-bucket-configuration LocationConstraint=us-east-1

Upload a test image to your source bucket

Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console.
To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be a JPG or PNG file of your choice.

AWS Management Console

To upload a test image to your source bucket (console)

Open the Buckets page of the Amazon S3 console. Choose the source bucket you created in the previous step. Choose Upload. Choose Add files and use the file selector to choose the object you want to upload. Choose Open, then choose Upload.

AWS CLI

To upload a test image to your source bucket (AWS CLI)

From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image.

aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg

Create a permissions policy

The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs.

AWS Management Console

To create the policy (console)

Open the Policies page of the AWS Identity and Access Management (IAM) console. Choose Create policy. Choose the JSON tab, then paste the following custom policy into the JSON editor.

{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] }

Choose Next. Under Policy details, for Policy name, enter LambdaS3Policy. Choose Create policy.
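Before creating a policy like the one above, it can help to sanity-check locally which actions it actually allows. The snippet below is a sketch using only Python's standard library; the `actions_granted` helper is our own illustration, not part of the tutorial or any AWS SDK.

```python
import json

# The permissions policy from this tutorial: CloudWatch Logs write access
# plus S3 object read (GetObject) and write (PutObject).
POLICY = """
{
  "Version": "2012-10-17",
  "Statement": [
    {"Effect": "Allow",
     "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
     "Resource": "arn:aws:logs:*:*:*"},
    {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
    {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"}
  ]
}
"""

def actions_granted(policy_json):
    """Collect every Action that the policy's Allow statements grant."""
    policy = json.loads(policy_json)
    actions = set()
    for statement in policy["Statement"]:
        if statement["Effect"] == "Allow":
            actions.update(statement["Action"])
    return actions

print(sorted(actions_granted(POLICY)))
# ['logs:CreateLogGroup', 'logs:CreateLogStream', 'logs:PutLogEvents', 's3:GetObject', 's3:PutObject']
```

Parsing the document also catches JSON syntax errors (a stray comma, a missing quote) before you paste the policy into the IAM console.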
AWS CLI

To create the policy (AWS CLI)

Save the following JSON in a file named policy.json.

{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] }

From the directory where you saved the JSON policy document, run the following CLI command.

aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json

Create an execution role

An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to your Amazon S3 buckets, you attach the permissions policy you created in the previous step.

AWS Management Console

To create the execution role and attach your permissions policy (console)

Open the Roles page of the IAM console. Choose Create role. For Trusted entity type, choose AWS service, and for Use case, choose Lambda. Choose Next. Add the permissions policy you created in the previous step by doing the following: In the policy search box, enter LambdaS3Policy. In the search results, select the check box for LambdaS3Policy. Choose Next. Under Role details, for Role name, enter LambdaS3Role. Choose Create role.

AWS CLI

To create the execution role and attach your permissions policy (AWS CLI)

Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.
{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] }

From the directory where you saved the JSON trust policy document, run the following CLI command to create the execution role.

aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json

To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number.

aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy

Create the function deployment package

To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library.

Node.js

To create the deployment package (Node.js)

Create a directory named lambda-s3 for your function code and dependencies and navigate into it.

mkdir lambda-s3
cd lambda-s3

Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter.

npm init

Save the following function code in a file named index.mjs. Be sure to replace us-east-1 with the AWS Region in which you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;
    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial.

npm install sharp@0.32.6

The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following.

lambda-s3
|- index.mjs
|- node_modules
|  |- base64js
|  |- bl
|  |- buffer
...
|- package-lock.json
|- package.json

Create a .zip deployment package containing your function code and its dependencies. In macOS and Linux, run the following command.

zip -r function.zip .

In Windows, use your preferred zip utility to create a .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file.

Python

To create the deployment package (Python)

Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python.

mkdir package
pip install \
--platform manylinux2014_x86_64 \
--target=package \
--implementation cp \
--python-version 3.12 \
--only-binary=:all: --upgrade \
pillow boto3

The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip will download and install a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package will work in the Lambda execution environment, regardless of the operating system and architecture of your local build machine.

Create a .zip file containing your application code and the Pillow and Boto3 libraries. In Linux or macOS, run the following commands from your command line interface.
cd package
zip -r ../lambda_function.zip .
cd ..
zip lambda_function.zip lambda_function.py

In Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file.

You can also build your deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions.

Create the Lambda function

You can create your Lambda function using either the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function.

AWS Management Console

To create the function (console)

To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step.

Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 buckets. You can change your Region using the drop-down list at the top of the screen. Choose Create function. Choose Author from scratch. Under Basic information, do the following: For Function name, enter CreateThumbnail. For Runtime, choose either Node.js 22.x or Python 3.12, according to the language you chose for your function. For Architecture, choose x86_64. In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role. Select the LambdaS3Role you created earlier. Choose Create function.

To upload the function code (console)

In the Code source pane, choose Upload from. Choose .zip file. Choose Upload. In the file selector, select your .zip file and choose Open. Choose Save.

AWS CLI

To create the function (AWS CLI)

Run the CLI command for the language you chose. For the role parameter, be sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the Region in which you created your Amazon S3 buckets.

For Node.js, run the following command from the directory containing your function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \
--timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

For Python, run the following command from the directory containing your lambda_function.zip file.

aws lambda create-function --function-name CreateThumbnail \
--zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \
--runtime python3.14 --timeout 10 --memory-size 1024 \
--role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1

Configure Amazon S3 to invoke the function

For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using the console or the AWS CLI.

Important: This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account.

AWS Management Console

To configure the Amazon S3 trigger (console)

Open the Functions page of the Lambda console and choose your function (CreateThumbnail). Choose Add trigger. Choose S3. Under Bucket, select your source bucket. Under Event types, select All object create events. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
Anda dapat mempelajari lebih lanjut tentang pola pemanggilan rekursif di Lambda dengan membaca pola rekursif yang menyebabkan fungsi Lambda yang tidak terkendali di Tanah Tanpa Server. Pilih Tambahkan . Saat Anda membuat pemicu menggunakan konsol Lambda, Lambda secara otomatis membuat kebijakan berbasis sumber daya untuk memberikan layanan yang Anda pilih izin untuk menjalankan fungsi Anda. AWS CLI Untuk mengonfigurasi pemicu Amazon S3 ()AWS CLI Agar bucket sumber Amazon S3 menjalankan fungsi saat menambahkan file gambar, pertama-tama Anda harus mengonfigurasi izin untuk fungsi menggunakan kebijakan berbasis sumber daya. Pernyataan kebijakan berbasis sumber daya memberikan Layanan AWS izin lain untuk menjalankan fungsi Anda. Untuk memberikan izin Amazon S3 untuk menjalankan fungsi Anda, jalankan perintah CLI berikut. Pastikan untuk mengganti source-account parameter dengan Akun AWS ID Anda sendiri dan menggunakan nama bucket sumber Anda sendiri. aws lambda add-permission --function-name CreateThumbnail \ --principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \ --source-arn arn:aws:s3::: amzn-s3-demo-source-bucket \ --source-account 123456789012 Kebijakan yang Anda tetapkan dengan perintah ini memungkinkan Amazon S3 untuk menjalankan fungsi Anda hanya ketika tindakan dilakukan di bucket sumber Anda. catatan Meskipun nama bucket Amazon S3 unik secara global, saat menggunakan kebijakan berbasis sumber daya, praktik terbaik adalah menentukan bahwa bucket harus menjadi milik akun Anda. Ini karena jika Anda menghapus bucket, Anda dapat membuat bucket dengan Amazon Resource Name (ARN) yang sama. Akun AWS Simpan JSON berikut dalam file bernama notification.json . Saat diterapkan ke bucket sumber Anda, JSON ini mengonfigurasi bucket untuk mengirim notifikasi ke fungsi Lambda Anda setiap kali objek baru ditambahkan. Ganti Akun AWS nomor dan Wilayah AWS dalam fungsi Lambda ARN dengan nomor akun dan wilayah Anda sendiri. 
{ "LambdaFunctionConfigurations": [ { "Id": "CreateThumbnailEventConfiguration", "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:CreateThumbnail", "Events": [ "s3:ObjectCreated:Put" ] } ] } Run the following CLI command to apply the notification settings in the JSON file you just created to your source bucket. Replace amzn-s3-demo-source-bucket with the name of your own source bucket. aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \ --notification-configuration file://notification.json To learn more about the put-bucket-notification-configuration command and the notification-configuration option, see put-bucket-notification-configuration in the AWS CLI Command Reference. Test your Lambda function with a dummy event Before you test the whole setup by adding an image file to your Amazon S3 source bucket, test that your Lambda function works correctly by invoking it with a dummy event. An event in Lambda is a JSON-formatted document that contains data for your function to process. When your function is invoked by Amazon S3, the event sent to the function contains information such as the bucket name, bucket ARN, and object key. AWS Management Console To test your Lambda function with a dummy event (console) Open the Functions page of the Lambda console and choose your function (CreateThumbnail). Choose the Test tab. To create a test event, in the Test event pane, do the following: Under Test event action, choose Create new event. For Event name, enter myTestEvent. For Template, choose S3 Put. Replace the values for the following parameters with your own values. For awsRegion, replace us-east-1 with the AWS Region you created your Amazon S3 bucket in. For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket. For key, replace test%2Fkey with the file name of the test object you uploaded to your source bucket earlier in the tutorial.
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save. In the Test event pane, choose Test. To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console. Choose your target bucket and confirm that the resized file is listed in the Objects pane. AWS CLI To test your Lambda function with a dummy event (AWS CLI) Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values: For awsRegion, replace us-east-1 with the AWS Region you created your Amazon S3 bucket in. For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket. For key, replace test%2Fkey with the file name of the test object you uploaded to your source bucket earlier in the tutorial.
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } From the directory in which you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking Lambda functions. aws lambda invoke --function-name CreateThumbnail \ --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \ --payload file://dummyS3Event.json outputfile.txt The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized You should see output similar to the following.
The Key parameter shows the file name of your resized image file. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-06T21:40:07+00:00", "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"", "Size": 2633, "StorageClass": "STANDARD" } ] } Test your function using the Amazon S3 trigger Now that you've confirmed that your Lambda function operates correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add an image to the source bucket, your Lambda function is invoked automatically. Your function creates a resized version of the file and stores it in your target bucket. AWS Management Console To test your Lambda function using the Amazon S3 trigger (console) To upload an image to your Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console and choose your source bucket. Choose Upload. Choose Add files and use the file selector to choose the image file you want to upload. Your image object can be a .jpg or .png file. Choose Open, then choose Upload. Verify that Lambda has saved a resized version of your image file in your target bucket by doing the following: Navigate back to the Buckets page of the Amazon S3 console and choose your destination bucket. In the Objects pane, you should now see two resized image files, one from each test of your Lambda function. To download a resized image, choose the file, then choose Download. AWS CLI To test your Lambda function using the Amazon S3 trigger (AWS CLI) From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the file name of your test image. Your test image can be a .jpg or .png file.
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-07T00:15:50+00:00", "ETag": "\"7781a43e765a8301713f533d70968a1e\"", "Size": 2763, "StorageClass": "STANDARD" }, { "Key": "resized-SmileyFace.jpg", "LastModified": "2023-06-07T00:13:18+00:00", "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"", "Size": 1245, "StorageClass": "STANDARD" } ] } Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions, Delete. Type confirm in the text input field and choose Delete. To delete the policy that you created Open the Policies page of the IAM console. Select the policy that you created (AWSLambdaS3Policy). Choose Policy actions, Delete. Choose Delete. To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete. Enter the name of the role in the text input field and choose Delete. To delete the S3 buckets Open the Amazon S3 console. Select the bucket you created. Choose Delete. Enter the name of the bucket in the text input field. Choose Delete bucket.
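The dummy-event technique used above also works entirely offline: a Lambda event is just a JSON document, so you can exercise your handler's event-parsing logic locally before invoking anything in AWS. Here is a minimal sketch of that idea; the extract_source helper and the trimmed sample event are illustrative assumptions, not part of the tutorial's code, but the bucket/key extraction mirrors what the tutorial's Python handler does.

```python
import urllib.parse

def extract_source(event):
    """Pull the bucket name and URL-decoded object key out of the first
    record of an S3 put-event, the same way the tutorial's handler does."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # S3 URL-encodes object keys in event payloads ("test%2Fkey" -> "test/key")
    key = urllib.parse.unquote_plus(record["s3"]["object"]["key"], encoding="utf-8")
    return bucket, key

# A trimmed-down version of the dummyS3Event.json used in the tutorial:
sample_event = {
    "Records": [
        {"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"}}}
    ]
}

print(extract_source(sample_event))  # ('amzn-s3-demo-bucket', 'test/key')
```

Running this kind of check locally catches malformed test events before you pay for a single Lambda invocation.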
Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Developer Guide In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected. For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules .
Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . 
Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); try { const { ContentType } = await client.send(new HeadObjectCommand({ Bucket: bucket, Key: key, })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consuming an S3 event with Lambda using TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client({ region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP.
<?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ . '/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): # print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object {} from bucket {}. Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: #{JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: #{response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3::Client; use lambda_runtime::{run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from S3"); if evt.payload.records.is_empty() { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for {} and object {}", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code. Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger . Select S3 . Under Bucket , select the bucket you created earlier in the tutorial. Under Event types , be sure that All object create events is selected.
Under Recursive invocation , select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add . Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name , enter MyTestEvent . In the Event JSON , paste the following test event. Be sure to replace these values: Replace us-east-1 with the region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg ). 
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": " us-east-1 ", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": " amzn-s3-demo-bucket ", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3::: amzn-s3-demo-bucket " }, "object": { "key": " test%2Fkey ", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save . Choose Test . If your function runs successfully, you’ll see output similar to the following in the Execution results tab. Response "image/jpeg" Function Logs START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST 2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' } 2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier. Choose Upload . 
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open , then choose Upload . To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs , then choose Log groups . Choose the log group for your function ( /aws/lambda/s3-trigger-tutorial ). Under Log streams , choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions , Delete . Type confirm in the text input field and choose Delete . To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete . Enter the name of the role in the text input field and choose Delete . To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete . Enter the name of the bucket in the text input field. Choose Delete bucket . Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images , the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. 
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies. | 2026-01-13T09:30:35 |
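The sample event and logs above show the function receiving a bucket name and a URL-encoded object key (for example, `test%2Fkey`). As an illustration of how a handler recovers those values, here is a minimal Python sketch; the helper name is hypothetical, and the tutorial's own function may be written in another runtime.

```python
from urllib.parse import unquote_plus

def extract_bucket_and_key(event):
    """Pull the bucket name and decoded object key from an S3 Put event record."""
    record = event["Records"][0]
    bucket = record["s3"]["bucket"]["name"]
    # Keys arrive URL-encoded: '+' stands for a space and %2F for '/'
    key = unquote_plus(record["s3"]["object"]["key"])
    return bucket, key

# A trimmed-down version of the console's "S3 Put" test event
event = {"Records": [{"s3": {"bucket": {"name": "amzn-s3-demo-bucket"},
                             "object": {"key": "test%2Fkey"}}}]}
print(extract_bucket_and_key(event))  # → ('amzn-s3-demo-bucket', 'test/key')
```

Decoding the key before use matters because object names with spaces or slashes would otherwise produce "no such key" errors when the function reads the object back from S3.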
https://docs.aws.amazon.com/id_id/lambda/latest/dg/with-s3-tutorial.html#s3-tutorial-cleanup | Tutorial: Using an Amazon S3 trigger to create thumbnail images - AWS Lambda Documentation AWS Lambda Developer Guide Prerequisites Create two Amazon S3 buckets Upload a test image to your source bucket Create a permissions policy Create an execution role Create the function deployment package Create the Lambda function Configure Amazon S3 to invoke the function Test your Lambda function with a dummy event Test your function using the Amazon S3 trigger Clean up your resources This page is a machine translation; if the translated content conflicts with the original English version, the English version takes precedence. Tutorial: Using an Amazon S3 trigger to create thumbnail images In this tutorial, you create and configure a Lambda function that resizes images added to an Amazon Simple Storage Service (Amazon S3) bucket. When you add an image file to the bucket, Amazon S3 invokes your Lambda function. The function then creates a thumbnail version of the image and outputs it to a different Amazon S3 bucket. To complete this tutorial, you carry out the following steps: Create source and destination Amazon S3 buckets and upload a sample image. Create a Lambda function that resizes an image and outputs a thumbnail to an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your source bucket. Test your function, first with a dummy event, and then by uploading an image to your source bucket. By completing these steps, you'll learn how to use Lambda to run a file-processing task on objects added to an Amazon S3 bucket. You can complete this tutorial using the AWS Command Line Interface (AWS CLI) or the AWS Management Console.
If you're looking for a simpler example to learn how to configure an Amazon S3 trigger for Lambda, you can try Tutorial: Using an Amazon S3 trigger to invoke a Lambda function. Topics Prerequisites Create two Amazon S3 buckets Upload a test image to your source bucket Create a permissions policy Create an execution role Create the function deployment package Create the Lambda function Configure Amazon S3 to invoke the function Test your Lambda function with a dummy event Test your function using the Amazon S3 trigger Clean up your resources Prerequisites If you want to use the AWS CLI to complete the tutorial, install the latest version of the AWS Command Line Interface. For your Lambda function code, you can use Python or Node.js. Install the language support tools and a package manager for the language that you want to use. If you haven't installed the AWS Command Line Interface yet, follow the steps at Installing or updating the latest version of the AWS CLI to install it. The tutorial requires a command line terminal or shell to run commands. On Linux and macOS, use your preferred shell and package manager. Note On Windows, some Bash CLI commands that you commonly use with Lambda (such as zip) are not supported by the operating system's built-in terminals. To get a Windows-integrated version of Ubuntu and Bash, install the Windows Subsystem for Linux. Create two Amazon S3 buckets First, create two Amazon S3 buckets. The first bucket is the source bucket that you will upload your images to. Lambda uses the second bucket to save the resized thumbnail when you invoke your function. AWS Management Console To create the Amazon S3 buckets (console) Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your Region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region.
Choose Create bucket. Under General configuration, do the following: For Bucket type, ensure General purpose is selected. For Bucket name, enter a globally unique name that meets the Amazon S3 bucket naming rules. Bucket names can contain only lowercase letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket. Repeat steps 1 to 5 to create your destination bucket. For Bucket name, enter amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you just created. AWS CLI To create the Amazon S3 buckets (AWS CLI) Run the following CLI command to create your source bucket. The name you choose for your bucket must be globally unique and follow the Amazon S3 bucket naming rules. Names can contain only lowercase letters, numbers, dots (.), and hyphens (-). For region and LocationConstraint, choose the AWS Region closest to your geographical location. aws s3api create-bucket --bucket amzn-s3-demo-source-bucket --region us-east-1 \ --create-bucket-configuration LocationConstraint=us-east-1 Later in the tutorial, you must create your Lambda function in the same AWS Region as your source bucket, so make a note of the Region you chose. Run the following command to create your destination bucket. For the bucket name, you must use amzn-s3-demo-source-bucket-resized, where amzn-s3-demo-source-bucket is the name of the source bucket you created in step 1. For region and LocationConstraint, choose the same AWS Region that you used to create your source bucket. aws s3api create-bucket --bucket amzn-s3-demo-source-bucket-resized --region us-east-1 \ --create-bucket-configuration LocationConstraint=us-east-1 Upload a test image to your source bucket Later in the tutorial, you'll test your Lambda function by invoking it using the AWS CLI or the Lambda console.
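The bucket naming rules quoted above (globally unique; lowercase letters, numbers, dots, and hyphens only) can be pre-checked locally before calling create-bucket. The sketch below is a simplified check covering only those stated rules plus the standard 3-63 character length limit; the full Amazon S3 rules include further restrictions (for example, names must not be formatted like IP addresses), so treat this as a first filter, not a complete validator.

```python
import re

def is_plausible_bucket_name(name):
    """Simplified check: 3-63 chars; lowercase letters, digits, dots, and
    hyphens only; must start and end with a letter or digit."""
    if not 3 <= len(name) <= 63:
        return False
    return bool(re.fullmatch(r"[a-z0-9][a-z0-9.-]*[a-z0-9]", name))

print(is_plausible_bucket_name("amzn-s3-demo-source-bucket"))          # True
print(is_plausible_bucket_name("amzn-s3-demo-source-bucket-resized"))  # True
print(is_plausible_bucket_name("MyBucket"))                            # False (uppercase)
```

Checking names up front saves a round trip, since an invalid name makes the create-bucket call fail with a client error.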
To confirm that your function is operating correctly, your source bucket needs to contain a test image. This image can be a JPG or PNG file of your choice. AWS Management Console To upload a test image to your source bucket (console) Open the Buckets page of the Amazon S3 console. Choose the source bucket you created in the previous step. Choose Upload. Choose Add files and use the file selector to choose the object you want to upload. Choose Open, then choose Upload. AWS CLI To upload a test image to your source bucket (AWS CLI) From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. aws s3api put-object --bucket amzn-s3-demo-source-bucket --key HappyFace.jpg --body ./HappyFace.jpg Create a permissions policy The first step in creating your Lambda function is to create a permissions policy. This policy gives your function the permissions it needs to access other AWS resources. For this tutorial, the policy gives Lambda read and write permissions for Amazon S3 buckets and allows it to write to Amazon CloudWatch Logs. AWS Management Console To create the policy (console) Open the Policies page of the AWS Identity and Access Management (IAM) console. Choose Create policy. Choose the JSON tab, then paste the following custom policy into the JSON editor. { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next. Under Policy details, for Policy name, enter LambdaS3Policy. Choose Create policy.
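Because the policy document is plain JSON, you can also assemble it in a short script instead of editing it by hand, which helps if you later template the account or bucket scope. The sketch below simply reproduces the tutorial's policy as a Python dict and verifies it round-trips as valid JSON; it is an illustration, not a substitute for reviewing the policy yourself.

```python
import json

# The permissions policy from this tutorial, assembled as a Python dict
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {"Effect": "Allow",
         "Action": ["logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream"],
         "Resource": "arn:aws:logs:*:*:*"},
        {"Effect": "Allow", "Action": ["s3:GetObject"], "Resource": "arn:aws:s3:::*/*"},
        {"Effect": "Allow", "Action": ["s3:PutObject"], "Resource": "arn:aws:s3:::*/*"},
    ],
}

# Serialize exactly as you would write policy.json for `aws iam create-policy`
policy_json = json.dumps(policy, indent=2)

# Sanity check: the document round-trips and grants the five expected actions
parsed = json.loads(policy_json)
actions = sorted(a for stmt in parsed["Statement"] for a in stmt["Action"])
print(actions)
```

Writing `policy_json` to a file named policy.json gives you the same input the CLI steps in this tutorial pass to `aws iam create-policy`.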
AWS CLI To create the policy (AWS CLI) Save the following JSON in a file named policy.json. { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" }, { "Effect": "Allow", "Action": [ "s3:PutObject" ], "Resource": "arn:aws:s3:::*/*" } ] } From the directory where you saved the JSON policy document, run the following CLI command. aws iam create-policy --policy-name LambdaS3Policy --policy-document file://policy.json Create an execution role An execution role is an IAM role that grants a Lambda function permission to access AWS services and resources. To give your function read and write access to the Amazon S3 buckets, you attach the permissions policy you created in the previous step. AWS Management Console To create an execution role and attach your permissions policy (console) Open the Roles page of the IAM console. Choose Create role. For Trusted entity type, select AWS service, and for Use case, select Lambda. Choose Next. Add the permissions policy you created in the previous step by doing the following: In the policy search box, enter LambdaS3Policy. In the search results, select the check box for LambdaS3Policy. Choose Next. Under Role details, for Role name, enter LambdaS3Role. Choose Create role. AWS CLI To create an execution role and attach your permissions policy (AWS CLI) Save the following JSON in a file named trust-policy.json. This trust policy allows Lambda to use the role's permissions by giving the service principal lambda.amazonaws.com permission to call the AWS Security Token Service (AWS STS) AssumeRole action.
{ "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Principal": { "Service": "lambda.amazonaws.com" }, "Action": "sts:AssumeRole" } ] } From the directory where you saved the JSON trust policy document, run the following CLI command to create the execution role. aws iam create-role --role-name LambdaS3Role --assume-role-policy-document file://trust-policy.json To attach the permissions policy you created in the previous step, run the following CLI command. Replace the AWS account number in the policy ARN with your own account number. aws iam attach-role-policy --role-name LambdaS3Role --policy-arn arn:aws:iam::123456789012:policy/LambdaS3Policy Create the function deployment package To create your function, you create a deployment package containing your function code and its dependencies. For this CreateThumbnail function, your function code uses a separate library for the image resizing. Follow the instructions for your chosen language to create a deployment package containing the required library. Node.js To create the deployment package (Node.js) Create a directory named lambda-s3 for your function code and dependencies and navigate into it. mkdir lambda-s3 cd lambda-s3 Create a new Node.js project with npm. To accept the default options provided in the interactive experience, press Enter. npm init Save the following function code in a file named index.mjs. Be sure to replace us-east-1 with the AWS Region in which you created your own source and destination buckets.
// dependencies
import { S3Client, GetObjectCommand, PutObjectCommand } from '@aws-sdk/client-s3';
import { Readable } from 'stream';
import sharp from 'sharp';
import util from 'util';

// create S3 client
const s3 = new S3Client({ region: 'us-east-1' });

// define the handler function
export const handler = async (event, context) => {
  // Read options from the event parameter and get the source bucket
  console.log("Reading options from event:\n", util.inspect(event, { depth: 5 }));
  const srcBucket = event.Records[0].s3.bucket.name;
  // Object key may have spaces or unicode non-ASCII characters
  const srcKey = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, " "));
  const dstBucket = srcBucket + "-resized";
  const dstKey = "resized-" + srcKey;

  // Infer the image type from the file suffix
  const typeMatch = srcKey.match(/\.([^.]*)$/);
  if (!typeMatch) {
    console.log("Could not determine the image type.");
    return;
  }

  // Check that the image type is supported
  const imageType = typeMatch[1].toLowerCase();
  if (imageType != "jpg" && imageType != "png") {
    console.log(`Unsupported image type: ${imageType}`);
    return;
  }

  // Get the image from the source bucket. GetObjectCommand returns a stream.
  try {
    const params = {
      Bucket: srcBucket,
      Key: srcKey
    };
    var response = await s3.send(new GetObjectCommand(params));
    var stream = response.Body;

    // Convert stream to buffer to pass to sharp resize function.
    if (stream instanceof Readable) {
      var content_buffer = Buffer.concat(await stream.toArray());
    } else {
      throw new Error('Unknown object stream type');
    }
  } catch (error) {
    console.log(error);
    return;
  }

  // set thumbnail width. Resize will set the height automatically to maintain aspect ratio.
  const width = 200;

  // Use the sharp module to resize the image and save in a buffer.
  try {
    var output_buffer = await sharp(content_buffer).resize(width).toBuffer();
  } catch (error) {
    console.log(error);
    return;
  }

  // Upload the thumbnail image to the destination bucket
  try {
    const destparams = {
      Bucket: dstBucket,
      Key: dstKey,
      Body: output_buffer,
      ContentType: "image"
    };
    const putResult = await s3.send(new PutObjectCommand(destparams));
  } catch (error) {
    console.log(error);
    return;
  }

  console.log('Successfully resized ' + srcBucket + '/' + srcKey +
    ' and uploaded to ' + dstBucket + '/' + dstKey);
};

In your lambda-s3 directory, install the sharp library using npm. Note that the latest version of sharp (0.33) is not compatible with Lambda. Install version 0.32.6 to complete this tutorial. npm install sharp@0.32.6 The npm install command creates a node_modules directory for your modules. After this step, your directory structure should look like the following. lambda-s3 |- index.mjs |- node_modules | |- base64js | |- bl | |- buffer ... |- package-lock.json |- package.json Create a .zip deployment package containing your function code and its dependencies. On macOS and Linux, run the following command. zip -r function.zip . On Windows, use your preferred zip utility to create the .zip file. Make sure that your index.mjs, package.json, and package-lock.json files and your node_modules directory are all at the root of your .zip file. Python To create the deployment package (Python) Save the example code as a file named lambda_function.py.
import boto3
import os
import sys
import uuid
from urllib.parse import unquote_plus
from PIL import Image
import PIL.Image

s3_client = boto3.client('s3')

def resize_image(image_path, resized_path):
    with Image.open(image_path) as image:
        image.thumbnail(tuple(x / 2 for x in image.size))
        image.save(resized_path)

def lambda_handler(event, context):
    for record in event['Records']:
        bucket = record['s3']['bucket']['name']
        key = unquote_plus(record['s3']['object']['key'])
        tmpkey = key.replace('/', '')
        download_path = '/tmp/{}{}'.format(uuid.uuid4(), tmpkey)
        upload_path = '/tmp/resized-{}'.format(tmpkey)
        s3_client.download_file(bucket, key, download_path)
        resize_image(download_path, upload_path)
        s3_client.upload_file(upload_path, '{}-resized'.format(bucket), 'resized-{}'.format(key))

In the same directory in which you created your lambda_function.py file, create a new directory named package and install the Pillow (PIL) library and the AWS SDK for Python (Boto3). Although the Lambda Python runtime includes a version of the Boto3 SDK, we recommend that you add all of your function's dependencies to your deployment package, even if they are included in the runtime. For more information, see Runtime dependencies in Python. mkdir package pip install \ --platform manylinux2014_x86_64 \ --target=package \ --implementation cp \ --python-version 3.12 \ --only-binary=:all: --upgrade \ pillow boto3 The Pillow library contains C/C++ code. By using the --platform manylinux2014_x86_64 and --only-binary=:all: options, pip will download and install a version of Pillow that contains pre-compiled binaries compatible with the Amazon Linux 2 operating system. This ensures that your deployment package will work in the Lambda execution environment, regardless of the operating system and architecture of your local build machine. Create a .zip file containing your application code and the Pillow and Boto3 libraries. On Linux or macOS, run the following commands from your command line interface.
cd package zip -r ../lambda_function.zip . cd .. zip lambda_function.zip lambda_function.py On Windows, use your preferred zip tool to create the lambda_function.zip file. Make sure that your lambda_function.py file and the folders containing your dependencies are all at the root of the .zip file. You can also create your deployment package using a Python virtual environment. See Working with .zip file archives for Python Lambda functions. Create the Lambda function You can create your Lambda function using either the Lambda console or the AWS CLI. Follow the instructions for your chosen language to create the function. AWS Management Console To create the function (console) To create your Lambda function using the console, you first create a basic function containing some 'Hello world' code. You then replace this code with your own function code by uploading the .zip or JAR file you created in the previous step. Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region in which you created your Amazon S3 buckets. You can change your Region using the drop-down list at the top of the screen. Choose Create function. Choose Author from scratch. In the Basic information section, do the following: For Function name, enter CreateThumbnail. For Runtime, select either Node.js 22.x or Python 3.12 according to the language you chose for your function. For Architecture, select x86_64. In the Change default execution role tab, do the following: Expand the tab, then select Use an existing role. Select the LambdaS3Role you created earlier. Choose Create function. To upload the function code (console) In the Code source pane, choose Upload from. Choose .zip file. Choose Upload. In the file selector, select your .zip file and choose Open. Choose Save. AWS CLI To create the function (AWS CLI) Run the CLI command for your chosen language. For the role parameter, be sure to replace 123456789012 with your own AWS account ID.
For the region parameter, replace us-east-1 with the Region in which you created your Amazon S3 buckets. For Node.js, run the following command from the directory containing your function.zip file. aws lambda create-function --function-name CreateThumbnail \ --zip-file fileb://function.zip --handler index.handler --runtime nodejs24.x \ --timeout 10 --memory-size 1024 \ --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1 For Python, run the following command from the directory containing your lambda_function.zip file. aws lambda create-function --function-name CreateThumbnail \ --zip-file fileb://lambda_function.zip --handler lambda_function.lambda_handler \ --runtime python3.14 --timeout 10 --memory-size 1024 \ --role arn:aws:iam::123456789012:role/LambdaS3Role --region us-east-1 Configure Amazon S3 to invoke the function For your Lambda function to run when you upload an image to your source bucket, you need to configure a trigger for your function. You can configure the Amazon S3 trigger using either the console or the AWS CLI. Important This procedure configures the Amazon S3 bucket to invoke your function every time an object is created in the bucket. Be sure to configure this only on the source bucket. If your Lambda function creates objects in the same bucket that invokes it, your function can be invoked continuously in a loop. This can result in unexpected charges being billed to your AWS account. AWS Management Console To configure the Amazon S3 trigger (console) Open the Functions page of the Lambda console and choose your function (CreateThumbnail). Choose Add trigger. Select S3. Under Bucket, select your source bucket. Under Event types, select All object create events. Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended.
You can learn more about recursive invocation patterns in Lambda by reading Recursive patterns that cause run-away Lambda functions in Serverless Land. Choose Add. When you create the trigger using the Lambda console, Lambda automatically creates a resource-based policy to give the service you select permission to invoke your function. AWS CLI To configure the Amazon S3 trigger (AWS CLI) For your Amazon S3 source bucket to invoke your function when you add an image file, you first need to configure permissions for your function using a resource-based policy. A resource-based policy statement gives other AWS services permission to invoke your function. To give Amazon S3 permission to invoke your function, run the following CLI command. Be sure to replace the source-account parameter with your own AWS account ID and to use your own source bucket name. aws lambda add-permission --function-name CreateThumbnail \ --principal s3.amazonaws.com --statement-id s3invoke --action "lambda:InvokeFunction" \ --source-arn arn:aws:s3:::amzn-s3-demo-source-bucket \ --source-account 123456789012 The policy you define with this command allows Amazon S3 to invoke your function only when an action takes place on your source bucket. Note Although Amazon S3 bucket names are globally unique, when using resource-based policies it is best practice to specify that the bucket must belong to your account. This is because if you delete a bucket, it is possible for another AWS account to create a bucket with the same Amazon Resource Name (ARN). Save the following JSON in a file named notification.json. When applied to your source bucket, this JSON configures the bucket to send a notification to your Lambda function every time a new object is added. Replace the AWS account number and AWS Region in the Lambda function ARN with your own account number and Region.
{ "LambdaFunctionConfigurations": [ { "Id": "CreateThumbnailEventConfiguration", "LambdaFunctionArn": "arn:aws:lambda: us-east-1:123456789012 :function:CreateThumbnail", "Events": [ "s3:ObjectCreated:Put" ] } ] } Jalankan perintah CLI berikut untuk menerapkan pengaturan notifikasi dalam file JSON yang Anda buat ke bucket sumber Anda. Ganti amzn-s3-demo-source-bucket dengan nama bucket sumber Anda sendiri. aws s3api put-bucket-notification-configuration --bucket amzn-s3-demo-source-bucket \ --notification-configuration file://notification.json Untuk mempelajari lebih lanjut tentang put-bucket-notification-configuration perintah dan notification-configuration opsi, lihat put-bucket-notification-configuration di Referensi Perintah AWS CLI . Uji fungsi Lambda Anda dengan acara dummy Sebelum menguji seluruh penyiapan dengan menambahkan file gambar ke bucket sumber Amazon S3, Anda menguji apakah fungsi Lambda berfungsi dengan benar dengan memanggilnya dengan acara dummy. Peristiwa di Lambda adalah dokumen berformat JSON yang berisi data untuk diproses fungsi Anda. Saat fungsi Anda dipanggil oleh Amazon S3, peristiwa yang dikirim ke fungsi berisi informasi seperti nama bucket, ARN bucket, dan kunci objek. Konsol Manajemen AWS Untuk menguji fungsi Lambda Anda dengan acara dummy (konsol) Buka halaman Fungsi konsol Lambda dan pilih fungsi Anda () CreateThumbnail . Pilih tab Uji . Untuk membuat acara pengujian, di panel acara Uji , lakukan hal berikut: Di bawah Uji tindakan peristiwa , pilih Buat acara baru . Untuk Nama peristiwa , masukkan myTestEvent . Untuk Template , pilih S3 Put . Ganti nilai untuk parameter berikut dengan nilai Anda sendiri. Untuk awsRegion , ganti us-east-1 dengan bucket Amazon S3 yang Wilayah AWS Anda buat. Untuk name , ganti amzn-s3-demo-bucket dengan nama bucket sumber Amazon S3 Anda sendiri. Untuk key , ganti test%2Fkey dengan nama file objek pengujian yang Anda unggah ke bucket sumber di langkah tersebut. 
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save. In the Test event pane, choose Test. To check that your function has created a resized version of your image and stored it in your target Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console. Choose your target bucket and confirm that the resized file is listed in the Objects pane. AWS CLI To test your Lambda function with a dummy event (AWS CLI) Save the following JSON in a file named dummyS3Event.json. Replace the values for the following parameters with your own values: For awsRegion, replace us-east-1 with the AWS Region in which you created your Amazon S3 buckets. For name, replace amzn-s3-demo-bucket with the name of your own Amazon S3 source bucket. For key, replace test%2Fkey with the filename of the test object you uploaded to your source bucket in the step Upload a test image to your source bucket.
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": "us-east-1", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": "amzn-s3-demo-bucket", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3:::amzn-s3-demo-bucket" }, "object": { "key": "test%2Fkey", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } From the directory in which you saved your dummyS3Event.json file, invoke the function by running the following CLI command. This command invokes your Lambda function synchronously by specifying RequestResponse as the value of the invocation-type parameter. To learn more about synchronous and asynchronous invocation, see Invoking Lambda functions. aws lambda invoke --function-name CreateThumbnail \ --invocation-type RequestResponse --cli-binary-format raw-in-base64-out \ --payload file://dummyS3Event.json outputfile.txt The cli-binary-format option is required if you're using version 2 of the AWS CLI. To make this the default setting, run aws configure set cli-binary-format raw-in-base64-out. For more information, see AWS CLI supported global command line options. Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized You should see output similar to the following.
The Key parameter shows the filename of your resized image file. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-06T21:40:07+00:00", "ETag": "\"d8ca652ffe83ba6b721ffc20d9d7174a\"", "Size": 2633, "StorageClass": "STANDARD" } ] } Test your function using the Amazon S3 trigger Now that you've confirmed that your Lambda function is operating correctly, you're ready to test your complete setup by adding an image file to your Amazon S3 source bucket. When you add your image to the source bucket, your Lambda function is invoked automatically. Your function creates a resized version of the file and stores it in your target bucket. AWS Management Console To test your Lambda function using the Amazon S3 trigger (console) To upload an image to your Amazon S3 bucket, do the following: Open the Buckets page of the Amazon S3 console and choose your source bucket. Choose Upload. Choose Add files and use the file selector to choose the image file you want to upload. Your image object can be a .jpg or .png file. Choose Open, then choose Upload. Verify that Lambda has stored a resized version of your image file in your target bucket by doing the following: Navigate back to the Buckets page of the Amazon S3 console and choose your destination bucket. In the Objects pane, you should now see two resized image files, one from each test of your Lambda function. To download your resized image, select the file, then choose Download. AWS CLI To test your Lambda function using the Amazon S3 trigger (AWS CLI) From the directory containing the image you want to upload, run the following CLI command. Replace the --bucket parameter with the name of your source bucket. For the --key and --body parameters, use the filename of your test image. Your test image can be a .jpg or .png file.
aws s3api put-object --bucket amzn-s3-demo-source-bucket --key SmileyFace.jpg --body ./SmileyFace.jpg Verify that your function has created a thumbnail version of your image and saved it to your target Amazon S3 bucket. Run the following CLI command, replacing amzn-s3-demo-source-bucket-resized with the name of your own destination bucket. aws s3api list-objects-v2 --bucket amzn-s3-demo-source-bucket-resized If your function runs successfully, you'll see output similar to the following. Your target bucket should now contain two resized files. { "Contents": [ { "Key": "resized-HappyFace.jpg", "LastModified": "2023-06-07T00:15:50+00:00", "ETag": "\"7781a43e765a8301713f533d70968a1e\"", "Size": 2763, "StorageClass": "STANDARD" }, { "Key": "resized-SmileyFace.jpg", "LastModified": "2023-06-07T00:13:18+00:00", "ETag": "\"ca536e5a1b9e32b22cd549e18792cdbc\"", "Size": 1245, "StorageClass": "STANDARD" } ] } Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions, Delete. Type confirm in the text input field and choose Delete. To delete the policy that you created Open the Policies page of the IAM console. Select the policy that you created (LambdaS3Policy). Choose Policy actions, Delete. Choose Delete. To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete. Enter the name of the role in the text input field and choose Delete. To delete the S3 buckets Open the Amazon S3 console. Select the bucket you created. Choose Delete. Enter the name of the bucket in the text input field. Choose Delete bucket.
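As a closing note: across both runtimes, the routing logic of the tutorial's function (decode the object key, check the jpg/png suffix, derive the -resized bucket and resized- key names) is pure string manipulation, so you can unit test it offline without any AWS resources. The sketch below mirrors that convention and also checks list-objects-v2 output; the helper names are hypothetical.

```python
import re
from urllib.parse import unquote_plus

SUPPORTED_TYPES = {"jpg", "png"}

def plan_thumbnail(src_bucket, raw_key):
    """Decode the object key, reject unsupported suffixes, and derive the
    destination bucket and key, mirroring the tutorial's handler logic."""
    src_key = unquote_plus(raw_key)            # keys arrive URL-encoded
    match = re.search(r"\.([^.]*)$", src_key)
    if not match or match.group(1).lower() not in SUPPORTED_TYPES:
        return None                            # handler logs and skips the object
    return src_bucket + "-resized", "resized-" + src_key

def resized_keys(list_objects_response):
    """Keys in a list-objects-v2 response that carry the 'resized-' prefix."""
    return [obj["Key"] for obj in list_objects_response.get("Contents", [])
            if obj["Key"].startswith("resized-")]

print(plan_thumbnail("amzn-s3-demo-source-bucket", "HappyFace.jpg"))
# → ('amzn-s3-demo-source-bucket-resized', 'resized-HappyFace.jpg')
print(plan_thumbnail("amzn-s3-demo-source-bucket", "notes.txt"))   # → None

sample = {"Contents": [{"Key": "resized-HappyFace.jpg", "Size": 2633}]}
print(resized_keys(sample))  # → ['resized-HappyFace.jpg']
```

Keeping this logic in small pure functions is also what makes the recursive-invocation warning easy to honor: the derived destination bucket always differs from the source bucket.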
https://docs.aws.amazon.com/lambda/latest/dg/with-s3-example.html#with-s3-example-deploy-code | Tutorial: Using an Amazon S3 trigger to invoke a Lambda function - AWS Lambda Documentation AWS Lambda Developer Guide Create an Amazon S3 bucket Upload a test object to your bucket Create a permissions policy Create an execution role Create the Lambda function Deploy the function code Create the Amazon S3 trigger Test the Lambda function Clean up your resources Next steps Tutorial: Using an Amazon S3 trigger to invoke a Lambda function In this tutorial, you use the console to create a Lambda function and configure a trigger for an Amazon Simple Storage Service (Amazon S3) bucket. Every time that you add an object to your Amazon S3 bucket, your function runs and outputs the object type to Amazon CloudWatch Logs. This tutorial demonstrates how to: Create an Amazon S3 bucket. Create a Lambda function that returns the object type of objects in an Amazon S3 bucket. Configure a Lambda trigger that invokes your function when objects are uploaded to your bucket. Test your function, first with a dummy event, and then using the trigger. By completing these steps, you’ll learn how to configure a Lambda function to run whenever objects are added to or deleted from an Amazon S3 bucket. You can complete this tutorial using only the AWS Management Console. Create an Amazon S3 bucket To create an Amazon S3 bucket Open the Amazon S3 console and select the General purpose buckets page. Select the AWS Region closest to your geographical location. You can change your region using the drop-down list at the top of the screen. Later in the tutorial, you must create your Lambda function in the same Region. Choose Create bucket . Under General configuration , do the following: For Bucket type , ensure General purpose is selected.
For Bucket name , enter a globally unique name that meets the Amazon S3 Bucket naming rules . Bucket names can contain only lower case letters, numbers, dots (.), and hyphens (-). Leave all other options set to their default values and choose Create bucket . Upload a test object to your bucket To upload a test object Open the Buckets page of the Amazon S3 console and choose the bucket you created during the previous step. Choose Upload . Choose Add files and select the object that you want to upload. You can select any file (for example, HappyFace.jpg ). Choose Open , then choose Upload . Later in the tutorial, you’ll test your Lambda function using this object. Create a permissions policy Create a permissions policy that allows Lambda to get objects from an Amazon S3 bucket and to write to Amazon CloudWatch Logs. To create the policy Open the Policies page of the IAM console. Choose Create Policy . Choose the JSON tab, and then paste the following custom policy into the JSON editor. JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": "Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } Choose Next: Tags . Choose Next: Review . Under Review policy , for the policy Name , enter s3-trigger-tutorial . Choose Create policy . Create an execution role An execution role is an AWS Identity and Access Management (IAM) role that grants a Lambda function permission to access AWS services and resources. In this step, create an execution role using the permissions policy that you created in the previous step. To create an execution role and attach your custom permissions policy Open the Roles page of the IAM console. Choose Create role . For the type of trusted entity, choose AWS service , then for the use case, choose Lambda . Choose Next . In the policy search box, enter s3-trigger-tutorial . 
In the search results, select the policy that you created ( s3-trigger-tutorial ), and then choose Next . Under Role details , for the Role name , enter lambda-s3-trigger-role , then choose Create role . Create the Lambda function Create a Lambda function in the console using the Python 3.14 runtime. To create the Lambda function Open the Functions page of the Lambda console. Make sure you're working in the same AWS Region you created your Amazon S3 bucket in. You can change your Region using the drop-down list at the top of the screen. Choose Create function . Choose Author from scratch Under Basic information , do the following: For Function name , enter s3-trigger-tutorial For Runtime , choose Python 3.14 . For Architecture , choose x86_64 . In the Change default execution role tab, do the following: Expand the tab, then choose Use an existing role . Select the lambda-s3-trigger-role you created earlier. Choose Create function . Deploy the function code This tutorial uses the Python 3.14 runtime, but we’ve also provided example code files for other runtimes. You can select the tab in the following box to see the code for the runtime you’re interested in. The Lambda function retrieves the key name of the uploaded object and the name of the bucket from the event parameter it receives from Amazon S3. The function then uses the get_object method from the AWS SDK for Python (Boto3) to retrieve the object's metadata, including the content type (MIME type) of the uploaded object. To deploy the function code Choose the Python tab in the following box and copy the code. .NET SDK for .NET Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using .NET. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. [assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go SDK for Go V2 Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Go. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package main import ( "context" "log" "github.com/aws/aws-lambda-go/events" "github.com/aws/aws-lambda-go/lambda" "github.com/aws/aws-sdk-go-v2/config" "github.com/aws/aws-sdk-go-v2/service/s3" ) func handler(ctx context.Context, s3Event events.S3Event) error { sdkConfig, err := config.LoadDefaultConfig(ctx) if err != nil { log.Printf("failed to load default config: %s", err) return err } s3Client := s3.NewFromConfig(sdkConfig) for _, record := range s3Event.Records { bucket := record.S3.Bucket.Name key := record.S3.Object.URLDecodedKey headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput { Bucket: &bucket, Key: &key, }) if err != nil { log.Printf("error getting head of object %s/%s: %s", bucket, key, err) return err } log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType) } return nil } func main() { lambda.Start(handler) } Java SDK for Java 2.x Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Java. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 package example; import software.amazon.awssdk.services.s3.model.HeadObjectRequest; import software.amazon.awssdk.services.s3.model.HeadObjectResponse; import software.amazon.awssdk.services.s3.S3Client; import com.amazonaws.services.lambda.runtime.Context; import com.amazonaws.services.lambda.runtime.RequestHandler; import com.amazonaws.services.lambda.runtime.events.S3Event; import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord; import org.slf4j.Logger; import org.slf4j.LoggerFactory; public class Handler implements RequestHandler<S3Event, String> { private static final Logger logger = LoggerFactory.getLogger(Handler.class); @Override public String handleRequest(S3Event s3event, Context context) { try { S3EventNotificationRecord record = s3event.getRecords().get(0); String srcBucket = record.getS3().getBucket().getName(); String srcKey = record.getS3().getObject().getUrlDecodedKey(); S3Client s3Client = S3Client.builder().build(); HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey); logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType()); return "Ok"; } catch (Exception e) { throw new RuntimeException(e); } } private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) { HeadObjectRequest headObjectRequest = HeadObjectRequest.builder() .bucket(bucket) .key(key) .build(); return s3Client.headObject(headObjectRequest); } } JavaScript SDK for JavaScript (v3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using JavaScript. 
import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3"; const client = new S3Client(); export const handler = async (event, context) => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); try { const { ContentType } = await client.send(new HeadObjectCommand( { Bucket: bucket, Key: key, })); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; Consuming an S3 event with Lambda using TypeScript. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 import { S3Event } from 'aws-lambda'; import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3'; const s3 = new S3Client( { region: process.env.AWS_REGION }); export const handler = async (event: S3Event): Promise<string | undefined> => { // Get the object from the event and show its content type const bucket = event.Records[0].s3.bucket.name; const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' ')); const params = { Bucket: bucket, Key: key, }; try { const { ContentType } = await s3.send(new HeadObjectCommand(params)); console.log('CONTENT TYPE:', ContentType); return ContentType; } catch (err) { console.log(err); const message = `Error getting object $ { key} from bucket $ { bucket}. Make sure they exist and your bucket is in the same region as this function.`; console.log(message); throw new Error(message); } }; PHP SDK for PHP Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using PHP. 
<?php use Bref\Context\Context; use Bref\Event\S3\S3Event; use Bref\Event\S3\S3Handler; use Bref\Logger\StderrLogger; require __DIR__ . '/vendor/autoload.php'; class Handler extends S3Handler { private StderrLogger $logger; public function __construct(StderrLogger $logger) { $this->logger = $logger; } public function handleS3(S3Event $event, Context $context) : void { $this->logger->info("Processing S3 records"); // Get the object from the event and show its content type $records = $event->getRecords(); foreach ($records as $record) { $bucket = $record->getBucket()->getName(); $key = urldecode($record->getObject()->getKey()); try { $fileSize = urldecode($record->getObject()->getSize()); echo "File Size: " . $fileSize . "\n"; // TODO: Implement your custom processing logic here } catch (Exception $e) { echo $e->getMessage() . "\n"; echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n"; throw $e; } } } } $logger = new StderrLogger(); return new Handler($logger); Python SDK for Python (Boto3) Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Python. # Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. # SPDX-License-Identifier: Apache-2.0 import json import urllib.parse import boto3 print('Loading function') s3 = boto3.client('s3') def lambda_handler(event, context): #print("Received event: " + json.dumps(event, indent=2)) # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8') try: response = s3.get_object(Bucket=bucket, Key=key) print("CONTENT TYPE: " + response['ContentType']) return response['ContentType'] except Exception as e: print(e) print('Error getting object { } from bucket { }. 
Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket)) raise e Ruby SDK for Ruby Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Ruby. require 'json' require 'uri' require 'aws-sdk' puts 'Loading function' def lambda_handler(event:, context:) s3 = Aws::S3::Client.new(region: 'region') # Your AWS region # puts "Received event: # { JSON.dump(event)}" # Get the object from the event and show its content type bucket = event['Records'][0]['s3']['bucket']['name'] key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8) begin response = s3.get_object(bucket: bucket, key: key) puts "CONTENT TYPE: # { response.content_type}" return response.content_type rescue StandardError => e puts e.message puts "Error getting object # { key} from bucket # { bucket}. Make sure they exist and your bucket is in the same region as this function." raise e end end Rust SDK for Rust Note There's more on GitHub. Find the complete example and learn how to set up and run in the Serverless examples repository. Consuming an S3 event with Lambda using Rust. // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0 use aws_lambda_events::event::s3::S3Event; use aws_sdk_s3:: { Client}; use lambda_runtime:: { run, service_fn, Error, LambdaEvent}; /// Main function #[tokio::main] async fn main() -> Result<(), Error> { tracing_subscriber::fmt() .with_max_level(tracing::Level::INFO) .with_target(false) .without_time() .init(); // Initialize the AWS SDK for Rust let config = aws_config::load_from_env().await; let s3_client = Client::new(&config); let res = run(service_fn(|request: LambdaEvent<S3Event>| { function_handler(&s3_client, request) })).await; res } async fn function_handler( s3_client: &Client, evt: LambdaEvent<S3Event> ) -> Result<(), Error> { tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS"); if evt.payload.records.len() == 0 { tracing::info!("Empty S3 event received"); } let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist"); let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist"); tracing::info!("Request is for { } and object { }", bucket, key); let s3_get_object_result = s3_client .get_object() .bucket(bucket) .key(key) .send() .await; match s3_get_object_result { Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"), Err(_) => tracing::info!("Failure with S3 Get Object request") } Ok(()) } In the Code source pane on the Lambda console, paste the code into the code editor, replacing the code that Lambda created. In the DEPLOY section, choose Deploy to update your function's code: Create the Amazon S3 trigger To create the Amazon S3 trigger In the Function overview pane, choose Add trigger . Select S3 . Under Bucket , select the bucket you created earlier in the tutorial. Under Event types , be sure that All object create events is selected. 
Under Recursive invocation , select the check box to acknowledge that using the same Amazon S3 bucket for input and output is not recommended. Choose Add . Note When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket. Because of this check, if the bucket has previously configured event destinations for resources that no longer exist, or for resources that don't have the required permissions policies, Amazon S3 won't be able to create the new event notification. You'll see the following error message indicating that your trigger couldn't be created: An error occurred when creating the trigger: Unable to validate the following destination configurations. You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies. Test your Lambda function with a dummy event To test the Lambda function with a dummy event In the Lambda console page for your function, choose the Test tab. For Event name , enter MyTestEvent . In the Event JSON , paste the following test event. Be sure to replace these values: Replace us-east-1 with the region you created your Amazon S3 bucket in. Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket. Replace test%2Fkey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg ).
{ "Records": [ { "eventVersion": "2.0", "eventSource": "aws:s3", "awsRegion": " us-east-1 ", "eventTime": "1970-01-01T00:00:00.000Z", "eventName": "ObjectCreated:Put", "userIdentity": { "principalId": "EXAMPLE" }, "requestParameters": { "sourceIPAddress": "127.0.0.1" }, "responseElements": { "x-amz-request-id": "EXAMPLE123456789", "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH" }, "s3": { "s3SchemaVersion": "1.0", "configurationId": "testConfigRule", "bucket": { "name": " amzn-s3-demo-bucket ", "ownerIdentity": { "principalId": "EXAMPLE" }, "arn": "arn:aws:s3::: amzn-s3-demo-bucket " }, "object": { "key": " test%2Fkey ", "size": 1024, "eTag": "0123456789abcdef0123456789abcdef", "sequencer": "0A1B2C3D4E5F678901" } } } ] } Choose Save . Choose Test . If your function runs successfully, you’ll see output similar to the following in the Execution results tab. Response "image/jpeg" Function Logs START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST 2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' } 2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms Request ID 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Test the Lambda function with the Amazon S3 trigger To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function’s output. To upload an object to your Amazon S3 bucket Open the Buckets page of the Amazon S3 console and choose the bucket that you created earlier. Choose Upload . 
Choose Add files and use the file selector to choose an object you want to upload. This object can be any file you choose. Choose Open , then choose Upload . To verify the function invocation using CloudWatch Logs Open the CloudWatch console. Make sure you're working in the same AWS Region you created your Lambda function in. You can change your Region using the drop-down list at the top of the screen. Choose Logs , then choose Log groups . Choose the log group for your function ( /aws/lambda/s3-trigger-tutorial ). Under Log streams , choose the most recent log stream. If your function was invoked correctly in response to your Amazon S3 trigger, you’ll see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket. 2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg Clean up your resources You can now delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account. To delete the Lambda function Open the Functions page of the Lambda console. Select the function that you created. Choose Actions , Delete . Type confirm in the text input field and choose Delete . To delete the execution role Open the Roles page of the IAM console. Select the execution role that you created. Choose Delete . Enter the name of the role in the text input field and choose Delete . To delete the S3 bucket Open the Amazon S3 console. Select the bucket you created. Choose Delete . Enter the name of the bucket in the text input field. Choose Delete bucket . Next steps In Tutorial: Using an Amazon S3 trigger to create thumbnail images , the Amazon S3 trigger invokes a function that creates a thumbnail image for each image file that is uploaded to a bucket. This tutorial requires a moderate level of AWS and Lambda domain knowledge. 
It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.
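The tutorial's Python handler logic can also be exercised entirely locally, before any trigger is configured, by substituting a stub for the boto3 S3 client. This is a sketch, not the tutorial's code: the stub class and its fixed `image/jpeg` content type are assumptions for illustration, while the event-parsing lines mirror the handler shown above (including the `urllib.parse.unquote_plus` decoding of the URL-encoded key from the dummy event).

```python
import urllib.parse

# A stand-in for boto3.client("s3"), so the handler logic runs without AWS
# credentials. The fixed ContentType is an assumption for illustration.
class FakeS3Client:
    def get_object(self, Bucket, Key):
        return {"ContentType": "image/jpeg"}

s3 = FakeS3Client()

def lambda_handler(event, context):
    # Same extraction and decoding logic as the tutorial's Python handler.
    bucket = event["Records"][0]["s3"]["bucket"]["name"]
    key = urllib.parse.unquote_plus(
        event["Records"][0]["s3"]["object"]["key"], encoding="utf-8"
    )
    response = s3.get_object(Bucket=bucket, Key=key)
    print("CONTENT TYPE: " + response["ContentType"])
    return response["ContentType"]

# A trimmed-down S3 event containing only the fields the handler reads.
# Note the URL-encoded key: %2F decodes to a forward slash.
event = {
    "Records": [
        {
            "s3": {
                "bucket": {"name": "amzn-s3-demo-bucket"},
                "object": {"key": "test%2Fkey"},
            }
        }
    ]
}

print(lambda_handler(event, None))  # → image/jpeg
```

Once this passes locally, the console's dummy-event test should produce the same response shape, with the real content type of your uploaded object.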
https://young-programmers.blogspot.com/search/label/Swing#main | Young Programmers Podcast: Swing. A video podcast for computer programmers in grades 3 and up. We learn about Scratch, Tynker, Alice, Python, Pygame, and Scala, and interview interesting programmers. From professional software developer and teacher Dave Briccetti, and many special guests. Showing posts with label Swing. Thursday, December 24, 2009: Jython/Swing Game of Life Version 2. A very quick look at version 2 of our Jython implementation of Conway’s Game of Life. Source code available. Posted at 12:25 AM. Labels: Jython, python, Swing. Saturday, September 19, 2009: Jython/Swing Conway’s Game of Life. A brief overview of a Jython (Python) implementation of Conway’s Game of Life using Java's Swing for the GUI. Source code: http://davebsoft.com/cfkfiles/python/Life/. Posted at 4:33 PM. Labels: Java, Jython, python, Swing. Friday, September 18, 2009: A Graphical User Interface with Jython and Swing. Dave Briccetti shows how to create a simple Python program with a graphical user interface using Jython (Python running on Java) and Swing.
Posted at 4:59 PM. Labels: Java, Jython, python, Swing.
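The Game of Life rule those podcast episodes implement can be sketched in a few lines of plain Python. This is not the podcast's Jython/Swing code, just the core update rule (a live cell with 2 or 3 live neighbors survives; a dead cell with exactly 3 live neighbors is born), with the GUI left out:

```python
from collections import Counter

# Minimal sketch of Conway's Game of Life update rule (no GUI).
# The board is a set of (x, y) coordinates of live cells.
def step(live: set[tuple[int, int]]) -> set[tuple[int, int]]:
    # Count how many live neighbors each candidate cell has.
    counts = Counter(
        (x + dx, y + dy)
        for (x, y) in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on 3 neighbors; survival on 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in live)}

# A "blinker" oscillates between a horizontal and a vertical bar of three.
blinker = {(0, 1), (1, 1), (2, 1)}
print(sorted(step(blinker)))  # → [(1, 0), (1, 1), (1, 2)]
```

Rendering each generation with Swing (as the podcast does via Jython) amounts to repainting this set of live cells on a timer.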
https://docs.aws.amazon.com/ja_jp/lambda/latest/dg/with-s3-example.html | チュートリアル: Amazon S3 トリガーを使用して Lambda 関数を呼び出す - AWS Lambda チュートリアル: Amazon S3 トリガーを使用して Lambda 関数を呼び出す - AWS Lambda ドキュメント AWS Lambda デベロッパーガイド Amazon S3 バケットを作成する テストオブジェクトをバケットにアップロードする 許可ポリシーを作成する 実行ロールを作成する Lambda 関数を作成する 関数コードをデプロイする Amazon S3 トリガーを作成する Lambda 関数をテストする リソースのクリーンアップ 次のステップ チュートリアル: Amazon S3 トリガーを使用して Lambda 関数を呼び出す このチュートリアルでは、コンソールを使用して Lambda 関数を作成し、Amazon Simple Storage Service (Amazon S3) バケットのトリガーを設定します。Amazon S3 バケットにオブジェクトを追加するたびに関数を実行し、Amazon CloudWatch Logs にオブジェクトタイプを出力します。 このチュートリアルでは、次の方法を示します。 Amazon S3 バケットを作成する。 Amazon S3 バケット内のオブジェクトのオブジェクトタイプを返す Lambda 関数を作成します。 オブジェクトがバケットにアップロードされたときに関数を呼び出す Lambda トリガーを設定します。 最初にダミーイベントを使用して関数をテストし、次にトリガーを使用してテストします。 これらのステップを完了することにより、Amazon S3 バケットにオブジェクトが追加されたり、Amazon S3 バケットから削除されたりするたびに実行されるように Lambda 関数を設定する方法を学びます。AWS マネジメントコンソール のみを使って、このチュートリアルを完了できます。 Amazon S3 バケットを作成する Amazon S3 バケットを作成するには [Amazon S3 コンソール] を開き、 [汎用バケット] ページを選択します。 住まいの地域に最も近い AWS リージョン を選択してください。画面上部にあるドロップダウンリストを使用して、リージョンを変更できます。チュートリアルの後半では、同じリージョンで Lambda 関数を作成する必要があります。 [バケットを作成する] を選択します。 [全般設定] で、次の操作を行います。 [バケットタイプ] で、 [汎用] が選択されていることを確認してください。 [バケット名] には、Amazon S3 バケットの命名規則 を満たすグローバルに一意な名前を入力します。バケット名は、小文字、数字、ドット (.)、およびハイフン (-) のみで構成できます。 他のすべてのオプションはデフォルト設定値のままにしておき、 [バケットの作成] を選択します。 テストオブジェクトをバケットにアップロードする テストオブジェクトをアップロードするには Amazon S3 コンソールの バケット ページを開き、前のステップで作成したバケットを選択します。 [アップロード] を選択します。 [ファイルを追加] を選択し、アップロードするファイルを選択します。任意のファイルを選択できます (例えば、 HappyFace.jpg )。 [開く] 、 [アップロード] の順に選択します。 チュートリアルの後半では、このオブジェクトを使用して Lambda 関数をテストします。 許可ポリシーを作成する Lambda が Amazon S3 バケットからオブジェクトを取得し、Amazon CloudWatch Logs に書き込めるようにする許可ポリシーを作成します。 ポリシーを作成するには IAM コンソールの ポリシー ページを開きます。 [ポリシーの作成] を選択します。 [JSON] タブを選択して、次のカスタムポリシーを JSON エディタに貼り付けます。 JSON { "Version":"2012-10-17", "Statement": [ { "Effect": "Allow", "Action": [ "logs:PutLogEvents", "logs:CreateLogGroup", "logs:CreateLogStream" ], "Resource": "arn:aws:logs:*:*:*" }, { "Effect": 
"Allow", "Action": [ "s3:GetObject" ], "Resource": "arn:aws:s3:::*/*" } ] } [次へ: タグ] を選択します。 [次へ: レビュー] を選択します。 [ポリシーの確認] でポリシーの [名前] に「 s3-trigger-tutorial 」と入力します。 [ポリシーの作成] を選択します。 実行ロールを作成する 実行ロール とは、AWS のサービス およびリソースに対するアクセス許可を Lambda 関数に付与する AWS Identity and Access Management (IAM) のロールです。この手順では、前のステップで作成したアクセス権限ポリシーを使用して実行ロールを作成します。 実行ロールを作成して、カスタム許可ポリシーをアタッチするには IAM コンソールの ロールページ を開きます。 [ロールの作成] を選択します。 信頼されたエンティティには、 [AWS サービス] を選択し、ユースケースには [Lambda] を選択します。 [次へ] をクリックします。 ポリシー検索ボックスに、「 s3-trigger-tutorial 」と入力します。 検索結果で作成したポリシー ( s3-trigger-tutorial ) を選択し、 [次へ] を選択します。 [ロールの詳細] で [ロール名] に lambda-s3-trigger-role を入力してから、 [ロールの作成] を選択します。 Lambda 関数を作成する Python 3.13 ランタイムを使用してコンソールで Lambda 関数を作成します。 Lambda 関数を作成するには Lambda コンソールの 関数 ページを開きます。 Amazon S3 バケットを作成したときと同じ AWS リージョン で操作していることを確認してください。画面上部にあるドロップダウンリストを使用して、リージョンを変更できます。 [関数の作成] を選択します。 [一から作成] を選択します。 [基本的な情報] で、以下を実行します。 [関数名] に s3-trigger-tutorial と入力します。 [ランタイム] で [Python 3.13] を選択します。 [アーキテクチャ] で [x86_64] を選択します。 [デフォルトの実行ロールの変更] タブで、次の操作を行います。 タブを展開し、 [既存のロールを使用する] を選択します。 先ほど作成した lambda-s3-trigger-role を選択します。 [ 関数の作成 ] を選択してください。 関数コードをデプロイする このチュートリアルでは Python 3.13 ランタイムを使用しますが、他のランタイム用のコードファイルの例も用意しています。次のボックスでタブを選択すると、関心のあるランタイムのコードが表示されます。 Lambda 関数は、Amazon S3 から受信する event パラメータから、アップロードされたオブジェクトのキー名およびバケットの名前を取得します。次に、関数は AWS SDK for Python (Boto3) から「 get_object 」メソッドを使用し、アップロードされたオブジェクトのコンテンツタイプ (MIME タイプ) を含むオブジェクトのメタデータを取得します。 関数コードをデプロイするには 次のボックスで [Python] タブを選択し、コードをコピーします。 .NET SDK for .NET 注記 GitHub には、その他のリソースもあります。 サーバーレスサンプル リポジトリで完全な例を検索し、設定および実行の方法を確認してください。 .NET を使用して Lambda で S3 イベントを消費します。 // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. // SPDX-License-Identifier: Apache-2.0 using System.Threading.Tasks; using Amazon.Lambda.Core; using Amazon.S3; using System; using Amazon.Lambda.S3Events; using System.Web; // Assembly attribute to enable the Lambda function's JSON input to be converted into a .NET class. 
[assembly: LambdaSerializer(typeof(Amazon.Lambda.Serialization.SystemTextJson.DefaultLambdaJsonSerializer))] namespace S3Integration { public class Function { private static AmazonS3Client _s3Client; public Function() : this(null) { } internal Function(AmazonS3Client s3Client) { _s3Client = s3Client ?? new AmazonS3Client(); } public async Task<string> Handler(S3Event evt, ILambdaContext context) { try { if (evt.Records.Count <= 0) { context.Logger.LogLine("Empty S3 Event received"); return string.Empty; } var bucket = evt.Records[0].S3.Bucket.Name; var key = HttpUtility.UrlDecode(evt.Records[0].S3.Object.Key); context.Logger.LogLine($"Request is for { bucket} and { key}"); var objectResult = await _s3Client.GetObjectAsync(bucket, key); context.Logger.LogLine($"Returning { objectResult.Key}"); return objectResult.Key; } catch (Exception e) { context.Logger.LogLine($"Error processing request - { e.Message}"); return string.Empty; } } } } Go SDK for Go V2 注記 GitHub には、その他のリソースもあります。 サーバーレスサンプル リポジトリで完全な例を検索し、設定および実行の方法を確認してください。 Go を使用して Lambda で S3 イベントを消費します。 // Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved. 
// SPDX-License-Identifier: Apache-2.0
package main

import (
	"context"
	"log"

	"github.com/aws/aws-lambda-go/events"
	"github.com/aws/aws-lambda-go/lambda"
	"github.com/aws/aws-sdk-go-v2/config"
	"github.com/aws/aws-sdk-go-v2/service/s3"
)

func handler(ctx context.Context, s3Event events.S3Event) error {
	sdkConfig, err := config.LoadDefaultConfig(ctx)
	if err != nil {
		log.Printf("failed to load default config: %s", err)
		return err
	}
	s3Client := s3.NewFromConfig(sdkConfig)

	for _, record := range s3Event.Records {
		bucket := record.S3.Bucket.Name
		key := record.S3.Object.URLDecodedKey
		headOutput, err := s3Client.HeadObject(ctx, &s3.HeadObjectInput{
			Bucket: &bucket,
			Key:    &key,
		})
		if err != nil {
			log.Printf("error getting head of object %s/%s: %s", bucket, key, err)
			return err
		}
		log.Printf("successfully retrieved %s/%s of type %s", bucket, key, *headOutput.ContentType)
	}

	return nil
}

func main() {
	lambda.Start(handler)
}

Java

SDK for Java 2.x

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Java.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
package example;

import software.amazon.awssdk.services.s3.model.HeadObjectRequest;
import software.amazon.awssdk.services.s3.model.HeadObjectResponse;
import software.amazon.awssdk.services.s3.S3Client;
import com.amazonaws.services.lambda.runtime.Context;
import com.amazonaws.services.lambda.runtime.RequestHandler;
import com.amazonaws.services.lambda.runtime.events.S3Event;
import com.amazonaws.services.lambda.runtime.events.models.s3.S3EventNotification.S3EventNotificationRecord;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class Handler implements RequestHandler<S3Event, String> {
    private static final Logger logger = LoggerFactory.getLogger(Handler.class);

    @Override
    public String handleRequest(S3Event s3event, Context context) {
        try {
            S3EventNotificationRecord record = s3event.getRecords().get(0);
            String srcBucket = record.getS3().getBucket().getName();
            String srcKey = record.getS3().getObject().getUrlDecodedKey();

            S3Client s3Client = S3Client.builder().build();
            HeadObjectResponse headObject = getHeadObject(s3Client, srcBucket, srcKey);

            logger.info("Successfully retrieved " + srcBucket + "/" + srcKey + " of type " + headObject.contentType());
            return "Ok";
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private HeadObjectResponse getHeadObject(S3Client s3Client, String bucket, String key) {
        HeadObjectRequest headObjectRequest = HeadObjectRequest.builder()
                .bucket(bucket)
                .key(key)
                .build();
        return s3Client.headObject(headObjectRequest);
    }
}

JavaScript

SDK for JavaScript (v3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using JavaScript.

import { S3Client, HeadObjectCommand } from "@aws-sdk/client-s3";

const client = new S3Client();

export const handler = async (event, context) => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key =
    decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  try {
    const { ContentType } = await client.send(new HeadObjectCommand({
      Bucket: bucket,
      Key: key,
    }));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

Consuming an S3 event with Lambda using TypeScript.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
import { S3Event } from 'aws-lambda';
import { S3Client, HeadObjectCommand } from '@aws-sdk/client-s3';

const s3 = new S3Client({ region: process.env.AWS_REGION });

export const handler = async (event: S3Event): Promise<string | undefined> => {
  // Get the object from the event and show its content type
  const bucket = event.Records[0].s3.bucket.name;
  const key = decodeURIComponent(event.Records[0].s3.object.key.replace(/\+/g, ' '));
  const params = {
    Bucket: bucket,
    Key: key,
  };

  try {
    const { ContentType } = await s3.send(new HeadObjectCommand(params));
    console.log('CONTENT TYPE:', ContentType);
    return ContentType;
  } catch (err) {
    console.log(err);
    const message = `Error getting object ${key} from bucket ${bucket}. Make sure they exist and your bucket is in the same region as this function.`;
    console.log(message);
    throw new Error(message);
  }
};

PHP

SDK for PHP

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using PHP.

<?php

use Bref\Context\Context;
use Bref\Event\S3\S3Event;
use Bref\Event\S3\S3Handler;
use Bref\Logger\StderrLogger;

require __DIR__ .
'/vendor/autoload.php';

class Handler extends S3Handler
{
    private StderrLogger $logger;

    public function __construct(StderrLogger $logger)
    {
        $this->logger = $logger;
    }

    public function handleS3(S3Event $event, Context $context): void
    {
        $this->logger->info("Processing S3 records");

        // Get the object from the event and show its content type
        $records = $event->getRecords();
        foreach ($records as $record) {
            $bucket = $record->getBucket()->getName();
            $key = urldecode($record->getObject()->getKey());

            try {
                $fileSize = urldecode($record->getObject()->getSize());
                echo "File Size: " . $fileSize . "\n";
                // TODO: Implement your custom processing logic here
            } catch (Exception $e) {
                echo $e->getMessage() . "\n";
                echo 'Error getting object ' . $key . ' from bucket ' . $bucket . '. Make sure they exist and your bucket is in the same region as this function.' . "\n";
                throw $e;
            }
        }
    }
}

$logger = new StderrLogger();
return new Handler($logger);

Python

SDK for Python (Boto3)

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Python.

# Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
# SPDX-License-Identifier: Apache-2.0
import json
import urllib.parse
import boto3

print('Loading function')

s3 = boto3.client('s3')

def lambda_handler(event, context):
    # print("Received event: " + json.dumps(event, indent=2))

    # Get the object from the event and show its content type
    bucket = event['Records'][0]['s3']['bucket']['name']
    key = urllib.parse.unquote_plus(event['Records'][0]['s3']['object']['key'], encoding='utf-8')
    try:
        response = s3.get_object(Bucket=bucket, Key=key)
        print("CONTENT TYPE: " + response['ContentType'])
        return response['ContentType']
    except Exception as e:
        print(e)
        print('Error getting object {} from bucket {}.
Make sure they exist and your bucket is in the same region as this function.'.format(key, bucket))
        raise e

Ruby

SDK for Ruby

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Ruby.

require 'json'
require 'uri'
require 'aws-sdk'

puts 'Loading function'

def lambda_handler(event:, context:)
  s3 = Aws::S3::Client.new(region: 'region') # Your AWS region
  # puts "Received event: #{JSON.dump(event)}"

  # Get the object from the event and show its content type
  bucket = event['Records'][0]['s3']['bucket']['name']
  key = URI.decode_www_form_component(event['Records'][0]['s3']['object']['key'], Encoding::UTF_8)
  begin
    response = s3.get_object(bucket: bucket, key: key)
    puts "CONTENT TYPE: #{response.content_type}"
    return response.content_type
  rescue StandardError => e
    puts e.message
    puts "Error getting object #{key} from bucket #{bucket}. Make sure they exist and your bucket is in the same region as this function."
    raise e
  end
end

Rust

SDK for Rust

Note: There's more on GitHub. Find the complete example and learn how to set up and run it in the Serverless examples repository.

Consuming an S3 event with Lambda using Rust.

// Copyright Amazon.com, Inc. or its affiliates. All Rights Reserved.
// SPDX-License-Identifier: Apache-2.0
use aws_lambda_events::event::s3::S3Event;
use aws_sdk_s3::{Client};
use lambda_runtime::{run, service_fn, Error, LambdaEvent};

/// Main function
#[tokio::main]
async fn main() -> Result<(), Error> {
    tracing_subscriber::fmt()
        .with_max_level(tracing::Level::INFO)
        .with_target(false)
        .without_time()
        .init();

    // Initialize the AWS SDK for Rust
    let config = aws_config::load_from_env().await;
    let s3_client = Client::new(&config);

    let res = run(service_fn(|request: LambdaEvent<S3Event>| {
        function_handler(&s3_client, request)
    })).await;

    res
}

async fn function_handler(
    s3_client: &Client,
    evt: LambdaEvent<S3Event>
) -> Result<(), Error> {
    tracing::info!(records = ?evt.payload.records.len(), "Received request from SQS");

    if evt.payload.records.len() == 0 {
        tracing::info!("Empty S3 event received");
    }

    let bucket = evt.payload.records[0].s3.bucket.name.as_ref().expect("Bucket name to exist");
    let key = evt.payload.records[0].s3.object.key.as_ref().expect("Object key to exist");

    tracing::info!("Request is for {} and object {}", bucket, key);

    let s3_get_object_result = s3_client
        .get_object()
        .bucket(bucket)
        .key(key)
        .send()
        .await;

    match s3_get_object_result {
        Ok(_) => tracing::info!("S3 Get Object success, the s3GetObjectResult contains a 'body' property of type ByteStream"),
        Err(_) => tracing::info!("Failure with S3 Get Object request")
    }

    Ok(())
}

In the Code source pane of the Lambda console, paste the code into the code editor, replacing the code that Lambda created.

In the DEPLOY section, choose Deploy to update your function's code.

Create the Amazon S3 trigger

To create the Amazon S3 trigger

In the Function overview pane, choose Add trigger.

Choose S3.

For Bucket, select the bucket you created earlier in the tutorial.

For Event types, select All object create events.

Under Recursive invocation, select the check box to acknowledge that using the same Amazon S3 bucket for both input and output is not recommended.

Choose Add.

Note: When you create an Amazon S3 trigger for a Lambda function using the Lambda console, Amazon S3 configures an event notification on the bucket that you specify. Before configuring this event notification, Amazon S3 performs a series of checks to confirm that the event destination exists and has the required IAM policies. Amazon S3 also performs these tests on any other event notifications configured for that bucket.
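Before adding a new trigger, it can help to look at the event notifications already configured on the bucket, since stale destinations are what trip these validation checks. The following is a minimal, illustrative sketch (the sample response dict and its ARN are placeholders; with live credentials you would instead fetch the real configuration with Boto3's get_bucket_notification_configuration, shown in the comment):

```python
def lambda_destinations(notification_config):
    """Return the Lambda function ARNs configured as event destinations
    in an S3 bucket notification configuration dict."""
    return [c["LambdaFunctionArn"]
            for c in notification_config.get("LambdaFunctionConfigurations", [])]

# With live credentials, you would fetch the real configuration like this:
#   import boto3
#   config = boto3.client("s3").get_bucket_notification_configuration(
#       Bucket="amzn-s3-demo-bucket")
#
# Here we use a sample dict shaped like that response, for illustration only:
sample = {
    "LambdaFunctionConfigurations": [
        {
            "Id": "example-notification",
            "LambdaFunctionArn": "arn:aws:lambda:us-east-1:123456789012:function:s3-trigger-tutorial",
            "Events": ["s3:ObjectCreated:*"],
        }
    ]
}

print(lambda_destinations(sample))
# Prints: ['arn:aws:lambda:us-east-1:123456789012:function:s3-trigger-tutorial']
```

If any ARN in this list points at a function that no longer exists, removing that notification from the bucket's configuration is one way to clear the validation error described in this note.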
Because these checks are performed, Amazon S3 can't create a new event notification if the bucket is already configured with an event destination for a resource that no longer exists or that lacks the required permissions policies. In that case, you see the following error message indicating that the trigger couldn't be created:

An error occurred when creating the trigger: Unable to validate the following destination configurations.

You can see this error if you previously configured a trigger for another Lambda function using the same bucket, and you have since deleted the function or modified its permissions policies.

Test your Lambda function with a dummy event

To test the Lambda function with a dummy event

On the Lambda console page for your function, choose the Test tab.

For Event name, enter MyTestEvent.

In the Event JSON, paste the following test event. Be sure to replace the following values:

Replace us-east-1 with the Region where you created your Amazon S3 bucket.

Replace both instances of amzn-s3-demo-bucket with the name of your own Amazon S3 bucket.

Replace test%2FKey with the name of the test object you uploaded to your bucket earlier (for example, HappyFace.jpg).

{
  "Records": [
    {
      "eventVersion": "2.0",
      "eventSource": "aws:s3",
      "awsRegion": "us-east-1",
      "eventTime": "1970-01-01T00:00:00.000Z",
      "eventName": "ObjectCreated:Put",
      "userIdentity": {
        "principalId": "EXAMPLE"
      },
      "requestParameters": {
        "sourceIPAddress": "127.0.0.1"
      },
      "responseElements": {
        "x-amz-request-id": "EXAMPLE123456789",
        "x-amz-id-2": "EXAMPLE123/5678abcdefghijklambdaisawesome/mnopqrstuvwxyzABCDEFGH"
      },
      "s3": {
        "s3SchemaVersion": "1.0",
        "configurationId": "testConfigRule",
        "bucket": {
          "name": "amzn-s3-demo-bucket",
          "ownerIdentity": {
            "principalId": "EXAMPLE"
          },
          "arn": "arn:aws:s3:::amzn-s3-demo-bucket"
        },
        "object": {
          "key": "test%2Fkey",
          "size": 1024,
          "eTag": "0123456789abcdef0123456789abcdef",
          "sequencer": "0A1B2C3D4E5F678901"
        }
      }
    }
  ]
}

Choose Save.

Choose Test.

If your function runs successfully, you see output similar to the following in the Execution results tab:

Response
"image/jpeg"

Function Logs
START RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Version: $LATEST
2021-02-18T21:40:59.280Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO INPUT BUCKET AND KEY: { Bucket: 'amzn-s3-demo-bucket', Key: 'HappyFace.jpg' }
2021-02-18T21:41:00.215Z 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 INFO CONTENT TYPE: image/jpeg
END RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6
REPORT RequestId: 12b3cae7-5f4e-415e-93e6-416b8f8b66e6 Duration: 976.25 ms Billed
Duration: 977 ms Memory Size: 128 MB Max Memory Used: 90 MB Init Duration: 430.47 ms

Request ID
12b3cae7-5f4e-415e-93e6-416b8f8b66e6

Test the Lambda function with the Amazon S3 trigger

To test your function with the configured trigger, upload an object to your Amazon S3 bucket using the console. To verify that your Lambda function ran as expected, use CloudWatch Logs to view your function's output.

To upload an object to your Amazon S3 bucket

Open the Buckets page of the Amazon S3 console and choose the bucket you created earlier.

Choose Upload.

Choose Add files and use the file selector to choose the object you want to upload. This object can be any file you choose.

Choose Open, then choose Upload.

To verify the function invocation using CloudWatch Logs

Open the CloudWatch console.

Make sure that you're working in the same AWS Region where you created your Lambda function. You can change your Region using the drop-down list at the top of the screen.

Choose Logs, then choose Log groups.

Choose the name of your function's log group (/aws/lambda/s3-trigger-tutorial).

Under Log streams, choose the most recent log stream.

If your function was invoked correctly in response to your Amazon S3 trigger, you see output similar to the following. The CONTENT TYPE you see depends on the type of file you uploaded to your bucket.

2022-05-09T23:17:28.702Z 0cae7f5a-b0af-4c73-8563-a3430333cc10 INFO CONTENT TYPE: image/jpeg

Clean up your resources

You can delete the resources that you created for this tutorial, unless you want to retain them. By deleting AWS resources that you're no longer using, you prevent unnecessary charges to your AWS account.

To delete the Lambda function

Open the Functions page of the Lambda console.

Choose the function that you created.

Under Actions, choose Delete.

Enter confirm in the text input field, then choose Delete.

To delete the execution role

Open the Roles page of the IAM console.

Select the execution role that you created.

Choose Delete.

Enter the name of the role in the text input field, then choose Delete.

To delete the S3 bucket

Open the Amazon S3 console.

Choose the bucket you created.

Choose Delete.

Enter the name of the bucket in the text input field.

Choose Delete bucket.

Next steps

In Tutorial: Using an Amazon S3 trigger to create thumbnail images, an Amazon S3 trigger invokes a function that creates a thumbnail image for each image file uploaded to a bucket. That tutorial requires intermediate-level knowledge of AWS and Lambda. It demonstrates how to create resources using the AWS Command Line Interface (AWS CLI) and how to create a .zip file archive deployment package for the function and its dependencies.