Similarly, what is ^M in text files? It's what's known as a carriage return. Unix editors prefer to use just the line feed, but DOS/Windows editors prefer to use both a carriage return and a line feed. Some editors, such as Geany and TextPad, can identify it and hide the ^M, while others let you do a Save As with options like "Unix style" or "CRLF style". Also, it is asked, what is ^M in Unix? Control-M (^M) characters appear when text is sent from a Windows computer to a Linux or Unix workstation. Transferring a file directly from a Windows system, or submitting form data copied and pasted from a Windows workstation, are the most prevalent causes. Secondly, what is ^M in bash? A carriage return (^M) is a character that appears often when files are transferred from Windows. Use the command od -xc filename; that should give you a high-level overview of your file. If your file does not originate from Windows, another possibility is that your terminal settings are not translating appropriately. Also, what is the Control-M character? Carriage return is the term for it. If you're using vim, enter insert mode and press CTRL-V CTRL-M. ^M is the keyboard equivalent of \r. People also ask, how do I remove ^M from a text file? In UNIX, the most straightforward method to delete the ^M characters is the stream editor sed. Type % sed -e "s/^M//" filename > newfilename at the command prompt (entering the ^M as CTRL-V CTRL-M). It's also possible to accomplish it with vi: open the file with % vi filename and type :%s/^M//g [in ESC mode]. It's also possible to do it with Emacs. Related Questions and Answers: How do I remove ^M on Windows? Find \r (which is ^M) and replace it with nothing (unless you want a newline, then replace it with \n). If you don't want all of the ^M's to be changed, don't choose Replace All; instead, select the ones you wish to replace.
The ^M's will appear as boxes if you use ordinary Windows Notepad. What is ^M in git? Thank you, Frank. A "carriage return," or CR, is represented by ^M. A single line feed (LF) terminates a line in Linux/Unix/Mac OS X. At the end of a line, Windows normally uses CRLF. "git diff" detects the end of a line using the LF, leaving the CR alone. How do I type ^M in Linux? To type Control-M characters in UNIX, hold down the Control key and press v and then m. What is ^M in vim? Vim shows 0x0D as ^M (0x0D = 13, and M is the thirteenth letter of the English alphabet). By using the command :%s/^M//g, you can get rid of all the ^M characters. What does -m mean on the command line? At its most basic level, the -m switch allows you to run Python programs from the command line by using module names rather than filenames. What is the ^M character in Linux? When viewing certificate files under Linux, each line may have ^M characters tacked onto it. The problematic file was produced on Windows and then transferred to Linux. In vim, ^M is the keyboard equivalent of \r, entered as CTRL-V CTRL-M. How do I find ^M in vi? Vi doesn't need a special option; it will display ^M by default. :%s stands for search-and-replace; here we replace ^M with nothing. You must enter Control-V Control-M in that order; you may keep Control pressed all the while or release it and press it again; both methods work. How do I find Control-M characters in a file? Use find in conjunction with sed. The ^M in the sed parameter is a literal Control-M, not a caret followed by an M. To enter it, use Control-V and then Control-M. In a sed expression, you may use any characters. How do you type ^M in vim? If you're using vim, enter insert mode and press CTRL-V CTRL-M. ^M is the keyboard equivalent of \r. Typing 0x0D in a hex editor may also do the job. What does the dos2unix command do?
The dos2unix program is the easiest method to convert line breaks in a text file; it converts the file in place rather than saving a copy in the original format. How do I add ^M in Linux? Use Ctrl-V x## in Vim's insert mode, where ## is the character's hex code; for Ctrl-M the combination is Ctrl-V x0d. If you type :as (short for :ascii), you can see the hex value of the character under the cursor. How do I add Ctrl-M characters in UNIX? While editing the file with vi, go to the spot where you want to insert (usually end-of-line), enter insert mode, and press CTRL-V CTRL-M. How do I get rid of ^M in Perl? Solution 1: ^M stands for carriage return. Solution 2: perl -p -i -e 's/\r\n$/\n/g' file1.txt file2.txt. Solution 3: you can also strip it with tr: $line =~ tr/\015//d;. How do you replace ^M with a newline in UNIX? If the carriage return is still there but everything wraps onto a single line, type :%s/<Ctrl-V><Ctrl-M>/\r/g (where \r replaces the ^M with a newline). How do I get rid of ^M in Emacs? Press Control-Q, then Control-M, then Enter at the prompt for what to replace; simply press Enter when asked what to replace it with (replace it with nothing). How do I remove carriage returns in Windows? Open the Replace dialog, search for \r (with "Regular expression" checked), replace it with nothing (or with \n if you want a newline), and select Replace or Replace All. What is M in VS Code?
Modified (M): an existing file has been changed. Deleted (D): a file has been deleted. U stands for "untracked": the file is new or has been changed but has not been added to the repository yet. C stands for conflict: there is a conflict in the file. R stands for renamed: the file has been renamed. What is M in git checkout? This is the result of git status; git is telling you that your working copy still has uncommitted changes after checking out master (one modified file and one deleted file). Check git-status using man git-status: M stands for modified, A for added, D for deleted, R for renamed, C for copied; U indicates that the file has been updated but not yet merged. What does git branch -M do? According to the documentation page on git branch, -M is a flag (shortcut) for --move --force. It renames the branch to main (since the default branch name for repositories created through the command line is master, but those created via GitHub [beginning in October] are named main). What does a carriage return do? CR = carriage return (\r, 0x0D in hexadecimal, 13 in decimal); it returns the cursor to the start of the line without moving on to the next. How do I remove special characters in Notepad++? You can move through the text to each non-ASCII character via the menu Search > Find characters in range > Non-ASCII Characters (128-255) in Notepad++. To remove them, select Regular Expression as the search mode, put [^\x20-\x7E]+ in "Find what", leave "Replace with" empty, and press the Replace All button. What does CRLF stand for? Carriage Return (ASCII 13, \r) Line Feed (ASCII 10, \n). They're used to indicate a line's end, although they're handled differently in today's major operating systems. How do I show special characters in vi?
Replacing and searching with vi: with vi, you may locate your position in a file by searching for a certain string of characters. A character string is a sequence of one or more characters. To locate a character string, type / followed by the string you're looking for, then hit Return. What is the -m option in Python? If you wish to launch a Python module, use the command python -m <module-name>. The -m option looks for the module name in sys.path and executes it as __main__: $ python3 -m hello Hello, Universe! It's important to note that <module-name> must be the name of a module object, not a string. How do I find control characters in Unix? Both grep and sed can look for lines that include any character that isn't a 'printable' (graphic or space) ASCII character.
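All of the removal recipes above (sed, tr, dos2unix, vim's :%s) amount to the same operation: deleting the carriage-return byte 0x0D. A minimal Python sketch of that operation (the function name is my own, not from any of the tools mentioned):

```python
def strip_carriage_returns(text: str) -> str:
    # ^M is the carriage return, "\r" (0x0D, decimal 13).
    # Convert DOS/Windows CRLF endings to Unix LF first, then turn any
    # stray bare CRs into newlines, mirroring what dos2unix does.
    return text.replace("\r\n", "\n").replace("\r", "\n")
```

For example, `strip_carriage_returns("line1\r\nline2\r\n")` yields `"line1\nline2\n"`.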
Posted on 1st Nov 2018 in Delphi VCL There has been a trend for cloud-based applications to show Delete and Edit buttons in the grid. A quick search on the net revealed that very little information on how to create a styled working button in a standard VCL DBGrid was to be found. So here is a development prototype that could be implemented in your application. So how should this styled button appear and function in the DBGrid:- Take a look at how the buttons are seamlessly integrated in the DBGrid below, then read on for how to quickly implement this in your DBGrid. This has been created with Delphi 10.2 and it is backwards compatible to XE3. You will need to reload the specific styles and set the ClientDataSet to re-open the client.cds file. In this example, the client.cds file is used as the dataset and a ClientDataSet is dropped onto the form. Right-click the ClientDataSet component and click on "Load from MyBase" to select a CDS or XML db. We are mimicking a DB by using the client.cds database. Once selected, the data is live. Add a DataSource and a DBGrid. Link DataSource1 to the ClientDataSet and set DBGrid1's DataSource to the DataSource1 component to show the data in DBGrid1. As the button will not be connecting to a DataSource, it will require an inactive data column to be added (no connected data field). Double-click on DBGrid1 to show the editor for DBGrid1.Columns. Populate the fields with the AddFields button at the top (3rd from left). Click on AddNew Column (1st from left) to add the new field-less column. Now, in the column's properties in the Object Inspector, change the Title to 'Edit' and set the Alignment to 'taCenter'. Drop a button on the DBGrid and set the button's Visible property to False in the Object Inspector. Now that we have completed this stage, we can continue with the coding of the project. On the form's Object Inspector add the event OnCreate, and 'procedure FormCreate' is created.
The steps that need to be followed:-

//set up the button
button1.Parent := dbgrid1;
button1.Caption := 'Edit';
button1.ControlStyle := button1.ControlStyle + [csClickEvents];

This would be repeated if another button were added. To correctly position the button on the DBGrid we will need the exact x,y coordinates of the cell in the column. The standard DBGrid does not have this functionality straight out, but fortunately TCustomGrid does. To access the class we need to add it to the unit as below.

type
  TModDbGrid = class(TCustomGrid)
  end;

To keep this modular in nature we will create a separate drawing procedure for the button. If you add more buttons, this procedure can be called for as many buttons as you wish to add to your grid. Create the following procedure and declare it in the form's Private declarations section.

procedure Buttondrawcolumncell(Sender: TObject; Button: TButton; Btncaption: string;
  Datacol, YourCol: Integer; Column: TColumn; Grid: TDBGrid;
  const Rect: TRect; State: TGridDrawState);

This is done in two main steps:- Step 1: add the inactive button style. Step 2: add the active styled button.

Adding the Inactive Button For the first step we draw an inactive button on each row. To do this we use the style services and the procedure DrawFrameControl. We draw the button as inactive by using 'DFCS_INACTIVE or DFCS_ADJUSTRECT'. The text is also drawn as sfButtonTextDisabled, which ensures that the text color matches the style. The call TDBGrid(Sender).DefaultDrawColumnCell(R, DataCol, Column, State) ensures that the grid is redrawn within the Rect named R, placed in the coordinate position for each row determined by the selected DataCol. Here is the code for this routine.
var
  R, DataRect: TRect;
  style: DWORD;
  FButtonCol: integer;
  FCellDown: TGridCoord;
begin
  R := Rect;
  InflateRect(R, -1, -1);
  //Set up Button
  if (not (gdFixed in State)) and (DataCol = YourCol) then
  begin
    if StyleServices.Enabled then
      TDBGrid(Sender).Canvas.Brush.Color := StyleServices.GetStyleColor(scButtonDisabled)
    else
      TDBGrid(Sender).Canvas.Brush.Color := clBtnFace;
    style := DFCS_INACTIVE or DFCS_ADJUSTRECT;
    DrawFrameControl(Grid.Canvas.Handle, R, DFC_BUTTON, style);
    TDBGrid(Sender).DefaultDrawColumnCell(R, DataCol, Column, State);
    TDBGrid(Sender).Canvas.Brush.Style := bsClear;
    if StyleServices.Enabled then
      TDBGrid(Sender).Canvas.Font.Color := StyleServices.GetStyleFontColor(sfButtonTextDisabled)
    else
      TDBGrid(Sender).Canvas.Font.Color := clBlack;
    DrawText(Grid.Canvas.Handle, PChar(BtnCaption), -1, R, DT_CENTER);
    TDBGrid(Sender).DefaultDrawColumnCell(R, DataCol, Column, State);
  end;

Adding the Real Button Here we make the button visible on the DBGrid and position it in the active row. To do this we need to identify the DBGrid1 dataset's RecNo and RecordCount. We then make use of the class TCustomGrid's function CellRect to get the exact (x,y) coordinates of the cell. The button's position is set from the TRect named DataRect, and finally the button is made visible.

  if Grid.DataSource.DataSet.RecNo <= Grid.DataSource.DataSet.RecordCount then
  begin
    if (not (gdFixed in State)) and (DataCol = YourCol) then
    begin
      DataRect := TModDbGrid(Grid).CellRect(YourCol + 1, TModDbGrid(Grid).Row);
      InflateRect(DataRect, -1, -1);
      Button.Width := DataRect.Width;
      Button.Left := DataRect.Right - Button.Width;
      Button.Top := DataRect.Top;
      Button.Height := DataRect.Bottom - DataRect.Top;
      Button.Visible := True;
    end;
  end;

Now that the procedure has been completed, create the DBGrid1's OnDrawColumnCell event and add the button's draw procedure, ButtondrawColumnCell. As we have the class TCustomGrid exposed, we can re-color the background of the unused space in the grid.
Add the following code here.

//Change the color of the white space below the grid to the same as the background

Back on the form, click on the button to add the Button1Click event and add the following code.

procedure TForm4.Button1Click(Sender: TObject);
begin
  showmessage('Edit this row');
end;

Now you are ready to compile and enjoy your new refreshed DBGrid with a shiny working button. Feel free to download a working example from the Bayesean-Blog Github Repository…
package evaluators;

import java.util.Map;
import java.util.TreeMap;

/**
 * Created by Matthew on 11/16/2017.
 */
public class Integrator implements Evaluator {

    private Evaluator fx;
    private double start, dx;
    private static final double DEFAULT_DX = 0.00001;
    private TreeMap<Double, Double> cachedValues = new TreeMap<>();

    /**
     * Defines the definite integral of f from start to x.
     *
     * @param fx    the integrand
     * @param start the lower limit of integration
     * @param dx    the step size (non-positive values fall back to DEFAULT_DX)
     */
    public Integrator(Evaluator fx, double start, double dx) {
        this.fx = fx;
        this.start = start;
        if (dx <= 0) {
            this.dx = DEFAULT_DX;
        } else {
            this.dx = dx;
        }
    }

    public Integrator(Evaluator fx, double start) {
        this(fx, start, DEFAULT_DX);
    }

    /**
     * The default lower limit of integration is 0.
     */
    public Integrator(Evaluator fx) {
        this(fx, 0, DEFAULT_DX);
    }

    /**
     * @param x the upper limit of integration
     * @return the definite integral of f from start to x
     */
    @Override
    public double eval(double x) {
        double result = simpsons(x);
        cachedValues.put(x, result); // cache for the range eval below
        return result;
    }

    public double simpsons(double x) {
        // don't manually compute the integral if it is from a to a
        if (start == x) return 0;
        double trapezoidIntegral = 0;
        double midpointIntegral = 0;
        trapezoidIntegral += fx.eval(start) / 2;
        double i;
        for (i = start + dx; i < x - dx; i += dx) {
            trapezoidIntegral += fx.eval(i);
            midpointIntegral += fx.eval(i - (dx / 2));
        }
        trapezoidIntegral += fx.eval(i) / 2;
        trapezoidIntegral *= dx;
        midpointIntegral += fx.eval(i - (dx / 2));
        midpointIntegral *= dx;
        // Simpson's rule as a weighted average of midpoint and trapezoid sums
        return (2 * midpointIntegral + trapezoidIntegral) / 3;
    }

    /**
     * Evaluates the running integral over [start, end].
     *
     * @param start    first x value evaluated
     * @param end      last x value evaluated
     * @param stepSize changes steps in the output, doesn't change dx
     * @return a map from each step to the integral up to that point
     */
    @Override
    public TreeMap<Double, Double> eval(double start, double end, double stepSize) {
        TreeMap<Double, Double> integrals = new TreeMap<>();
        eval(start);
        eval(end);
        for (double i = start; i <= end; i += stepSize) {
            Map.Entry<Double, Double> entry = cachedValues.floorEntry(i);
            double x0 = entry.getKey();
            double integral = entry.getValue();
            integral += trapezoidSum(x0, i);
            integrals.put(i, integral);
        }
        return integrals; // was "return null", which discarded the computed map
    }

    private double trapezoidSum(double a, double b) {
        return (b - a) * (fx.eval(a) + fx.eval(b)) / 2;
    }
}
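The `simpsons` method above relies on the identity that composite Simpson's rule equals (2·midpoint + trapezoid)/3. A standalone Python sketch of the same identity (names are mine, not from the original class):

```python
def simpson(f, a, b, n=1000):
    # Composite Simpson's rule via S = (2*M + T) / 3, where T is the
    # trapezoid sum and M the midpoint sum over n equal panels.
    h = (b - a) / n
    trapezoid = (f(a) + f(b)) / 2 + sum(f(a + i * h) for i in range(1, n))
    trapezoid *= h
    midpoint = sum(f(a + (i + 0.5) * h) for i in range(n)) * h
    return (2 * midpoint + trapezoid) / 3
```

Since Simpson's rule is exact for polynomials up to degree three, `simpson(lambda x: x * x, 0, 1)` should return 1/3 up to floating-point error.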
UPDATE: Transcoding the buffer key to base64 and back shows that the integrity of the authorization key remains intact throughout the encoding process, and that Basic [base64keyhash] is valid. Auth Flow: The application-only auth flow follows these steps: an application encodes its consumer key and secret into a specially encoded set of credentials. From a previous message in this forum I also confirmed that NSLookup can see all AD and GC servers fine. debysandra, June 8, 2012 at 10:07 am: Thanks for your comment. May 15th, 2013 10:05pm: Check if "Prepare Active Directory" is still showing Completed or not on the Lync Deployment Wizard. Please look for other events that can give some specific information. You can place your own key/secret key in there to try it, but I am unwilling to provide my own. Invalid requests to obtain or revoke bearer tokens: attempts to obtain a bearer token with an invalid request (for example, leaving out grant_type=client_credentials) will be rejected. After we lost power to our VM host, our Lync server came back up with its control panel completely blocked.
Unauthorized: Authorization Failed, Lync 2013. Not sure why they were behaving the way they were, but since the reboots all has been fine. Answered Mar 11 '14 at 3:00 by emcanes: Do I declare this block in my controller? When you launch the Lync Server Control Panel Silverlight application from the Start menu or open https://lyncserver.domain.com/cscp in Internet Explorer, you must enter a username and password for an account that is authorized. I am testing enterprise voice with all kinds of SIP trunks (and not supported ATAs). A (non-)solution is to reboot the production domain controllers; it does indeed always solve the problem. Click Add and Close to add the https://lyncserver.domain.com site to the Trusted Sites zone. Remote PowerShell cannot read the RBAC roles information from the store. I checked the services and the status was indeed not started, so I ran start and rechecked all services connected with Microsoft Lync Server 2010; everything started and I can ping. Anonymous, June 7, 2012 at 4:43 pm: Your solution worked.
hrdlnk, 2014-10-18 09:53:49 UTC, #4: Could you post a solved version of your request? I'm stuck with the same error 99 message. My app's name is hrdlnktestapp01 and here's my POST request. I've corrected the misconfiguration and rebooted the server. Jason, marked as answer by Jason Trimble, Wednesday, May 22, 2013 1:18 PM: We resolved this issue by rebooting all AD servers and the Lync servers. It looks like you're sending a few HTTP headers that aren't necessary for this request. Reinstalling Silverlight on the client had no effect. The SQL Browser service was unable to process a client request. Resolution: follow the resolution on the corresponding failure events. The request must include an Authorization header whose value is Basic followed by the encoded credentials. Below are example values showing the result of this algorithm. Thank you, Paolo (beppegiuseppe). Proposed as answer by Jay Brummett, Saturday, November 27, 2010 10:44 PM. Monday, October 18, 2010 10:23 AM: I have no errors or warnings about SQL. The application cannot verify your credentials. Here's how to fix it: launch Internet Explorer and double-click the Security Options zone information at the bottom of the IE window; this will open the Internet Security Properties window.
About application-only auth. Tokens are passwords: keep in mind that the consumer key & secret, the bearer token credentials, and the bearer token itself grant access to make requests on behalf of the application. I used your resolution. After checking the Event Viewer, I see some errors from the SQL Browser. I logged on as the new user, started the IE browser as administrator, pointed it to "https://server.domain.com/cscp", and logged on. It's an Enterprise pool with all the relevant A host records created for admin, sip, meet, and dialin (plus the pool name and hostname of the collocated machine), and _tlsinternal created. Thanks for your help. You may then receive the following IIS error: Unauthorized: Authorization failed. The user I tried to gain access to the Lync Control Panel with is a member of all CS* security groups and RTC* security groups.
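The application-only flow discussed above encodes the consumer key and secret into a single Basic credential. A sketch of that encoding step, assuming the usual OAuth 2 client-credentials recipe (the key and secret here are placeholders, not real credentials):

```python
import base64

def basic_credentials(consumer_key: str, consumer_secret: str) -> str:
    # Join key and secret with a colon, base64-encode the result, and
    # prefix it with "Basic " for the Authorization header.
    raw = f"{consumer_key}:{consumer_secret}".encode("utf-8")
    return "Basic " + base64.b64encode(raw).decode("ascii")
```

The resulting string goes into the Authorization header of the request that exchanges the credentials for a bearer token (grant_type=client_credentials).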
Raster Data Management with ArcSDE By James Neild, Esri Product Management Editor's note: With the release of ArcGIS 8.2 and ArcIMS 4.0, Esri has a complete solution for storing vector, raster, tabular, and metadata in a relational database management system (DBMS). The combination of ArcSDE, Esri's gateway for storing raster information in a DBMS, the ArcGIS Desktop applications, and ArcIMS provides a comprehensive approach for managing and distributing raster data. This article briefly describes how ArcSDE stores and manages raster information and the tools available for data management. Knowledge of basic raster concepts is assumed. Introducing Raster Data Management A raster data management system is a collection of scalable tools that allow users to store, access, analyze, and extract raster data on demand. ArcGIS provides tools for all aspects of raster data management, from the ArcGIS Desktop applications for access, management, and analysis, to ArcIMS for data distribution and extraction, to ArcSDE for raster management and access. Client applications that can access this raster data through ArcSDE include ArcGIS, ArcIMS, and custom applications built with the ArcSDE C API. Secure, fast, real-time multiuser access to seamless raster datasets that can range in size from several megabytes to many terabytes requires a DBMS. Although ArcSDE may not be necessary if raster storage and access requirements are less demanding, ArcSDE can act as a central metadata storage system that provides information on data location as well as reduced-resolution "quick look" images of the data. The requirements for different uses of ArcSDE for managing raster data vary depending on the type of organization and its users. ArcSDE is commonly used for managing raster data used in basemaps and as feature attributes (e.g., photographs of features such as buildings or valves) and in image repositories maintained by data providers.
The following are just a few examples of how ArcSDE improves raster management for these types of applications. A water company, with a lot of legacy data stored on paper and Mylar maps, needed to supply this data as background images that would be combined with vector data into a seamless hybrid map. Although these images will eventually be replaced by vector data, this process will take time to complete. The legacy data was scanned and 3,000 one-bit TIFF images were generated. The resulting files (about 40 GB of data) are centrally maintained using ArcSDE and supplied as background images to users running ArcMap. These users are inserting new vector data as well as updating existing vector data with new information. Assessor's offices, water districts, and other organizations often have photographs of assets, such as buildings, that need to be linked to spatial features to provide additional information on those features. ArcSDE can be used to manage the raster data for these types of inventory applications. An association of governments needed a central repository for imagery that could be easily accessed by members as well as citizens. To accomplish this goal, the images were converted to downloadable formats and the association set up a Web site that provided access to these image files. The association's data holdings (nearly a terabyte of data stored as eight-bit, three-band, one-meter DOQQs) are currently held by the association and managed using ArcSDE. Raster Storage Architecture in ArcSDE Rasters are stored in a series of business and user tables in ArcSDE. These system tables, listed in Figure 1, are maintained by ArcSDE and should not be directly modified. When storing raster information, the block table will grow the fastest and remain the largest because it stores binary large objects (BLOBs). The other tables will remain relatively small in size. ArcSDE's storage parameters allow the user to specify how the data will be stored.
The parameters for pyramids, tile size, and compression can affect storage requirements and client application performance. Determine baseline performance for ArcSDE before changing the default settings for these parameters so that any gains or losses in performance can be measured. Figure 2: When pyramids are created, the spatial extent remains the same. ArcSDE generates pyramids (reduced-resolution representations of data) to speed up display of raster data. Pyramids allow ArcSDE to fetch only the data at the specified resolution required for the display. Pyramid building is performed on the ArcSDE server side whenever the underlying raster is modified or updated. For large datasets, this can take a long period of time and should be considered when deciding whether to mosaic the data or use raster catalogs. If the original data is compressed, the server will first decompress the data, build the pyramids, and compress the data again to insert into the block table. The base layer of the pyramid has the highest resolution. Resampling the original data creates the pyramid layers. One of the three supported resampling methods is used to instruct the server how to resample the data. The type of data determines which of the three methods (nearest neighbor, bilinear interpolation, or cubic convolution) is most suitable for a specific dataset. - Nearest neighbor assignment should be used for nominal or ordinal data. For these types of data, each value represents a class, member, or classification (categorical data, such as a land use, soil, or forest type). - Bilinear interpolation interpolates four adjacent pixels and should be used for continuous data such as elevation, slope, intensity of noise from an airport, and salinity of the groundwater near an estuary. - Cubic convolution interpolates 16 adjacent pixels and should be used for continuous data such as satellite imagery or aerial photography.
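To make the pyramid idea concrete, here is a toy sketch of building one reduced-resolution level with nearest-neighbor resampling, using a plain list-of-lists raster (real ArcSDE pyramids are built server-side on tiled BLOBs, so this is an illustration only):

```python
def pyramid_level(raster, factor=2):
    # Nearest-neighbor downsampling: keep every `factor`-th cell in each
    # direction. The spatial extent is unchanged; only resolution drops.
    return [row[::factor] for row in raster[::factor]]
```

Each call halves the resolution, so repeated calls yield successive pyramid levels down to a single cell.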
The tile size controls the number of pixels stored in each database BLOB field and is specified in x and y pixels when loading the data. The default value of 128 pixels x 128 pixels should be satisfactory for most applications. The optimal tile size setting depends on factors such as data type (bit depth), database settings, and network settings. A smaller tile size, such as 100 x 100, will result in smaller BLOBs and more records in the raster block table, which will slow down queries. A larger tile size, such as 300 x 300, will result in larger BLOBs that require more memory to process although fewer records will be created in the block table. Experiment with tile size before changing the default setting. Data compression, optional but recommended, is performed on tiles as they are stored in the database. The two compression methods available are LZ77 and JPEG. The LZ77 algorithm, the same method used for PNG image format and ZIP compression, produces a lossless compression so that the unique values of cells in a raster dataset can be recovered. JPEG compression can have very high ratios but is lossy. Using this method, the values of cells in the raster dataset may be changed slightly. JPEG compression can only be applied to eight-bit data without a color map. The user can specify a quality setting for JPEG compression that ranges from 5 to 95 with 95 producing the best quality image. The default setting is 75. Compressed data requires less storage space and produces smaller files resulting in better display performance for client applications. The amount of compression depends upon the data. The fewer unique cell values, the higher the compression ratio. The ArcSDE client performs compression and decompression. The ArcSDE client sends compressed data to the server at loading, and the server always returns compressed data to the client at retrieval. If retention of pixel values is important (e.g., categorical data or data used for analysis) use LZ77 compression. 
If individual pixel values are not important, as in the case of simple background images, use JPEG compression.
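The effect of unique cell values on compression can be demonstrated with zlib's DEFLATE, an LZ77-family codec like the one described above. This is a rough sketch only; ArcSDE's actual codec and tile handling are not shown, and the two synthetic tiles are made-up examples.

```python
import random
import zlib

TILE = 128 * 128  # default tile size: 128 x 128 cells, one byte per cell

# A categorical tile with a single repeated value compresses extremely well.
uniform_tile = b"\x07" * TILE

# A tile of highly varied cell values barely compresses at all.
random.seed(0)
varied_tile = bytes(random.randrange(256) for _ in range(TILE))

def lz77_ratio(tile: bytes) -> float:
    """Uncompressed size divided by DEFLATE-compressed size."""
    return len(tile) / len(zlib.compress(tile))

# The compression is lossless: every original cell value is recovered exactly.
assert zlib.decompress(zlib.compress(uniform_tile)) == uniform_tile
```

As the article notes, the fewer unique cell values, the higher the ratio: the uniform tile compresses orders of magnitude better than the varied one.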
March 22nd, 2022 | Updated on July 4th, 2022

The AZ-104 certification exam measures the competence of an Azure administrator. The exam tests your knowledge of handling cloud services spanning security, storage, networking, and other capacities of Microsoft Azure. The older Microsoft certification exams AZ-100, AZ-101, AZ-102, and AZ-103 expired when Microsoft released the new role-based Microsoft Azure Certification AZ-104 exam. So, if you wish to become a Microsoft certified Azure administrator, you will have to prepare for this exam. However, before you dive into the preparation, you should know about the exam itself.

Overview Of AZ 104 Certification Exam

The complete name of the certification is Microsoft Azure Administrator Certification. The AZ-104 certification exam contains forty to sixty multiple-choice questions, and it costs 165 USD. There are no prerequisites for the examination. The certification is valid for two years. The exam is available in English, Korean, Japanese, and Chinese.

Objectives Of Microsoft Azure AZ 104 Examination

The AZ-104 certification exam combines the previous two Microsoft Azure certification exams, AZ-100 and AZ-101. Therefore, you have to cover more syllabus to prepare for the AZ-104 exam. You have to cover the following topics:

- Implement and manage storage. (15-20%)
- Handle Azure governance and identities. (15-20%)
- Configure and handle virtual networking. (25-30%)
- Deploy and manage Azure compute resources. (20-25%)
- Monitor and back up the resources of Azure. (10-15%)

How To Prepare For AZ 104 Certification Exam

- Check out the official page of the Microsoft AZ 104 exam: Before you start your preparation, you should check out the official page on the Microsoft website. You can find authentic, exact, and updated information on the detail page of the AZ-104 exam.
You can find out about the registration fee, eligibility criteria, objectives of the Microsoft cloud certification exam, scheduling options, etc.

- Comprehend the examination topics: As each new exam contains new topics, you have to understand the objectives of the AZ-104 certification exam. You can find the weight of the different modules on the official detail page of the exam. You can be well prepared for the Azure certification exam if you are thoroughly aware of each module and domain.
- Take training online: Among the most effortless and convenient ways to prepare for the exam is online training. You will learn the concepts of each module from industry experts via online training. In addition, Microsoft provides a free portal where you can learn at your own pace. Microsoft also offers personalized, instructor-led training, which you have to pay for.
- Reference books and online resources: Microsoft Press has published the Microsoft Azure Administrator Exam Reference (AZ-103) book as part of its reference series for the exam. Its writers are subject matter experts who can assist you in learning and passing the exam. The authors of this book are Scott Hoag, Michael Washam, and Jonathan Tuliani. You can find the book in the Microsoft Press store. The book covers the previous versions of the Azure certifications, so it contains all the modules of the AZ-104 exam.
- Online discussion forums and study groups: You can join online discussion forums and study groups to prepare for the exam. They allow you to connect with other candidates and with people who are already Azure administrators. You can get resolutions to your queries and answers to your questions in these forums. So, it is worth joining some forums during your preparation for the AZ-104 exam.
- Practice with mock exams: In the final step, you should boost your confidence with practice exams for the AZ-104.
When you think that you have covered all the steps of the preparation and gone through all the resources, you should focus your attention on good simulators. Simulators for the AZ-104 examination create the environment of an actual Microsoft Azure certification exam. You can find out about your strengths and weaknesses through these simulators, and then shore up your weak points. Many websites provide desktop mock exams and online simulators to prepare for the exam.

During the exam, you should follow some pointers that can help you excel, no matter how nerve-wracking the exam is for you. First, try to keep yourself as relaxed as you can during the exam. It is best to arrive early at the exam centre, as this will give you enough time to complete your formalities and concentrate on your examination. Furthermore, ensure that you have taken sufficient rest before sitting for the test.

If you desire to advance in your career as an Azure administrator, you should prepare yourself for the AZ-104 certification exam. The exam enhances your skills and presents them to your employers, which gives you the advantage to stand out.

Relevant Resources On Certifications & Exam Preparations

If this blog interested you, check out the following articles:
A tool for breaking a project down into manageable tasks and estimating the overall time needed to complete them.

To create a new project, go to the front page and click the "Create project" button. Your new project has a unique URL that you can bookmark or share. Anyone with access to this URL also has full rights to read and edit the project.

Let's say you are preparing your vacation. Type a suitable name for your project into the text field that appears, for example "Vacation". You can change it at any time. You should see a button with a plus symbol and a dotted outline. Click it to add a task at the top level of your project. A white box with an input field will appear. Type in a short title for your task and hit [enter] or click somewhere else. Do this as many times as needed. Top level tasks could be:

As you will have noticed, the tasks are listed top to bottom in a single column. You can drag & drop tasks to reorder them to indicate priority or dependencies. Each task has two buttons in the lower right corner. The button with a pen icon will open a dialog with more options, such as a color picker and a text area for entering a more detailed description. Click the pen button on "Find interesting destinations". In the dialog, click the "Description" text area. This is a great place to type in reminders and ideas, like "Ask Mike about that place he went last year.". Again, click the pen button on "Arrange cat sitter" and set the color to red so you don't forget like last time. I'm pretty sure your kids noticed Mittens had a new color.

The button with the plus icon will add a new subtask. Click the plus button on the task "Decide destination", and call it "Check prices". This new task is placed in a new column, next to the parent task it belongs to. The astute reader might notice that "Find interesting destinations" should actually be part of the task "Decide destination". Simply drag & drop it over to the second column, right over "Check prices".
As you drop it, the box labeled "Decide destination" will grow in height to accommodate its two subtasks. Any task can contain more subtasks this way, to any depth, or as many as will fit on your screen. You will continue breaking down tasks into ever smaller subtasks until they are so small they are trivial, or you just don't know enough about them yet.

In the edit dialog opened by the pen button, there are three fields for entering the time the task takes to complete. The first two are the min/max estimate. As you fill in the min estimate, the max will be suggested automatically; entering one manually is optional. The last one, labeled "Actual", is for how much time a task actually took. Entering it will automatically mark your task as "Done". This actual time taken will override the estimate if there is one. It is usually a good idea to only enter estimates for "leaf tasks" - tasks that have no subtasks, so that they are at the far right of their row.

The format for entering times is: number unit [, ...] where number is an integer and unit is any one of [year(s), month(s), week(s), day(s), hour(s), min] or their shorter variants [y, mn, w, d, h, m], respectively. Whatever you enter will be parsed, converted to seconds, and formatted in the most sensible way. For now, conversion between units assumes Swedish conditions where:

As soon as you enter the first estimate or actual time taken, Estimator will calculate projections for the entire project. At first they will most likely be outrageously improbable, but as you fill in more numbers, they will become more accurate. The projection of a task with subtasks in it will be the sum of its subtasks. If the supertask itself has an estimate, the larger of the two is used. The assumption is that a task is equal to the sum of its parts, or sometimes more. When a task has no estimates set on itself or any of its subtasks, it is given the average estimate of the ones that are actually set among its siblings.
This assumes that you have broken your project down into roughly equally large bites. If all else fails, time is projected down from the supertask and divided equally over the subtasks. Simply put, if ten tasks should take 30 minutes together, each one should take about 3 minutes.
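The time format and the first projection rule can be sketched together in a few lines of Python. This is a toy model for illustration only: the unit table guesses at the unspecified Swedish conditions, and none of this is the app's actual code.

```python
import re

# Hypothetical unit table: the app's actual "Swedish conditions" are not
# spelled out above, so the working day/week/month lengths are guesses.
SECONDS = {
    "m": 60, "min": 60,
    "h": 3600, "hour": 3600,
    "d": 8 * 3600, "day": 8 * 3600,            # assumed 8-hour working day
    "w": 5 * 8 * 3600, "week": 5 * 8 * 3600,   # assumed 5-day working week
    "mn": 20 * 8 * 3600, "month": 20 * 8 * 3600,
    "y": 12 * 20 * 8 * 3600, "year": 12 * 20 * 8 * 3600,
}

def parse_estimate(text):
    """Parse a 'number unit [, ...]' string such as '1d, 2h' into seconds."""
    total = 0
    for part in text.split(","):
        match = re.fullmatch(r"\s*(\d+)\s*([a-z]+?)s?\s*", part)
        if match is None:
            raise ValueError("bad time component: " + repr(part))
        number, unit = match.groups()
        total += int(number) * SECONDS[unit]
    return total

def project(estimate, subtasks):
    """Projected seconds for a task: the larger of its own estimate and the
    sum of its subtasks' projections ('the sum of its parts, or sometimes
    more')."""
    child_total = sum(project(e, s) for e, s in subtasks)
    return max(estimate or 0, child_total)
```

The remaining projection rules (sibling averages and the equal split of a supertask's estimate) would layer onto this in the same recursive style.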
Thanks for the reply guys, here is an update on my DVD burning problem.

I went out today and purchased a Fujifilm 10 pack of CD-Rs rated at 48x. When I did a Nero info check on the discs it said the manufacturer was TY and they were rated at 32x. Burnt a 50 minute Elvis CD in about 5 minutes (roughly 2.5 minutes of that was the actual burn part). The status screen in Nero Express 6 said it was burning at 32x, so I guess this problem is OK now. My 4 year old CD-Rs only supported 8x when I did a Nero info check on them.

I still have the problem of not being able to burn DVDs (video DVDs and plain data DVDs). I was originally trying with Maxell DVD+R 8x discs (Nero info shows Prodisc R03, supported speeds 12x/8x/6x/4x/2.4x). Today I bought some TDK -R 8x discs (Nero info shows TTG02, supported speeds 8x/6x/4x/2x). The problem is the same no matter if I use the +R discs or the -R discs; I tried at all supported speeds, and tried making a plain data file backup with Nero 6 OEM (220.127.116.11 latest version). Also tried making a movie DVD using Squeezit to remove the copy protection and reauthor it to fit on a single sided DVD5. I get the same results.

What I get is the Nero status screen shows the initial data caching while it processes the data/video files. Then it tells me to put in my blank media. After I put in the blank DVD it says it is starting to burn at 12x for +R, 8x for my -R discs. Then it just freezes there: the buffer status doesn't change, no additional burn status. I wait 5 minutes and still nothing changes. I try to cancel the job or quit the Nero app and Windows XP tells me the app is not responding. I even do a ctrl/alt and try to manually end the Nero app, and the app does not respond. I have to power cycle or hold the power button in for 4 seconds to force a reboot to have Windows reload at this point, because the Nero app is frozen and won't restart.

So it seems to me that it may be a software problem between Nero 6 and XP, or my ND3500 drivers.
Running XP Home SP2, stock 2.18 drivers for the NEC 3500, and the latest Nero 6 code, 18.104.22.168. I even tried to backup just a small 100M data file at 2x speed and the same thing happens: the Nero app freezes when it tries to start the DVD burn process. But the same Nero 6 app can burn CD-Rs at 32x fine on the same NEC 3500. I don't get it. This is very frustrating to me because the main reason I bought this PC was for gaming and backing up video DVDs.

As far as making sure my mobo BIOS is using UDMA, from what I can tell reading the Abit AV8 manual there is no way to change this in the BIOS; both IDE channels support UDMA. Can anyone that owns an AV8 mobo confirm this for me? Like I said, I really don't understand BIOS options that well, but when I snooped around in the BIOS I could not see any options to change it between DMA and UDMA. The NEC 3500 is on IDE ch 2, using an 80 pin cable, and using the master end of the cable. The slave (middle of the cable) is not connected to anything.

Anyone with any further suggestions would be appreciated.

- Can anyone suggest another DVD burning application that I could try (preferably freeware) so I could pin this down to either a Nero 6 software bug with my NEC 3500 or bad hardware? Has anyone seen this before, where bad hardware in a DVD drive would allow it to burn CD-R/RWs but not DVDs?

MY RIG -
A64 3000+ Winchester (90nm core, S939) @ 1.8GHz
Abit AV8 mobo (S939)
512MB (256x2) PC3200
80G Seagate SATA 150 Hard Drive
NEC 3500 DVD Burner (16x)
BFG 6800 OC @ 350/700
X-Infinity Gaming case w/ 420W PS
import CodableKit

/// Describes a relational join which brings columns of data from multiple
/// entities into one response.
///
///     A = (id, name, b_id)
///     B = (id, foo)
///
///     A join B = (id, b_id, name, foo)
///
///     joinedKey = A.b_id
///     baseKey = B.id
public struct QueryJoin {
    /// Join type. See `QueryJoinMethod`.
    public let method: QueryJoinMethod

    /// Table/collection that will be accepting the joined data.
    ///
    /// The key from the base table that will be compared to the key from
    /// the joined table during the join.
    ///
    ///     base      | joined
    ///     ----------+-------
    ///     <baseKey> | base_id
    public let base: QueryField

    /// Table/collection that will be joining the base data.
    ///
    /// The key from the joined table that will be compared to the key from
    /// the base table during the join.
    ///
    ///     base | joined
    ///     -----+-------
    ///     id   | <joined_key>
    public let joined: QueryField

    /// Create a new Join.
    public init(method: QueryJoinMethod, base: QueryField, joined: QueryField) {
        self.method = method
        self.base = base
        self.joined = joined
    }
}

/// An exhaustive list of possible join types.
public enum QueryJoinMethod {
    /// Returns only rows that appear in both sets.
    case inner
    /// Returns all matching rows from the queried table _and_ all rows that
    /// appear in both sets.
    case outer
}

// MARK: Support

public protocol JoinSupporting: Database { }

// MARK: Query

extension DatabaseQuery {
    /// Joined models.
    public var joins: [QueryJoin] {
        get { return extend["joins"] as? [QueryJoin] ?? [] }
        set { extend["joins"] = newValue }
    }
}

// MARK: Join on Model.ID

extension QueryBuilder where Model.Database: JoinSupporting {
    /// Join another model to this query builder.
    public func join<Joined>(
        _ joinedType: Joined.Type = Joined.self,
        field joinedKey: KeyPath<Joined, Model.ID>,
        to baseKey: KeyPath<Model, Model.ID?> = Model.idKey,
        method: QueryJoinMethod = .inner
    ) throws -> Self where Joined: Fluent.Model {
        let join = try QueryJoin(
            method: method,
            base: baseKey.makeQueryField(),
            joined: joinedKey.makeQueryField()
        )
        query.joins.append(join)
        return self
    }

    /// Join another model to this query builder.
    public func join<Joined>(
        _ joinedType: Joined.Type = Joined.self,
        field joinedKey: KeyPath<Joined, Model.ID?>,
        to baseKey: KeyPath<Model, Model.ID?> = Model.idKey,
        method: QueryJoinMethod = .inner
    ) throws -> Self where Joined: Fluent.Model {
        let join = try QueryJoin(
            method: method,
            base: baseKey.makeQueryField(),
            joined: joinedKey.makeQueryField()
        )
        query.joins.append(join)
        return self
    }

    /// Join another model to this query builder.
    public func join<Joined>(
        _ joinedType: Joined.Type = Joined.self,
        field joinedKey: KeyPath<Joined, Model.ID?>,
        to baseKey: KeyPath<Model, Model.ID>,
        method: QueryJoinMethod = .inner
    ) throws -> Self where Joined: Fluent.Model {
        let join = try QueryJoin(
            method: method,
            base: baseKey.makeQueryField(),
            joined: joinedKey.makeQueryField()
        )
        query.joins.append(join)
        return self
    }

    /// Join another model to this query builder.
    public func join<Joined>(
        _ joinedType: Joined.Type = Joined.self,
        field joinedKey: KeyPath<Joined, Model.ID>,
        to baseKey: KeyPath<Model, Model.ID>,
        method: QueryJoinMethod = .inner
    ) throws -> Self where Joined: Fluent.Model {
        let join = try QueryJoin(
            method: method,
            base: baseKey.makeQueryField(),
            joined: joinedKey.makeQueryField()
        )
        query.joins.append(join)
        return self
    }
}
Manage a Microsoft Teams Rooms console's settings remotely with an XML configuration file

This article discusses remote management of the default settings used by a Microsoft Teams Rooms device, including applying a custom theme. Updating a master XML file and sending copies to the consoles you manage makes it possible for you to change default settings for remotely managed devices. This article discusses how to create such a file, and links to discussions of how to place the files as needed on the remotely managed devices. Using this method, you can also implement custom themes on your Microsoft Teams Rooms consoles.

Creating an XML configuration file

The table below explains the elements shown in this sample SkypeSettings.xml (this is a required file name) configuration file.

<SkypeSettings>
    <AutoScreenShare>true</AutoScreenShare>
    <HideMeetingName>true</HideMeetingName>
    <UserAccount>
        <SkypeSignInAddress>RanierConf@contoso.com</SkypeSignInAddress>
        <ExchangeAddress>RanierConf@contoso.com</ExchangeAddress>
        <DomainUsername>Seattle\RanierConf</DomainUsername>
        <Password>password</Password>
        <ConfigureDomain>domain1, domain2</ConfigureDomain>
    </UserAccount>
    <IsTeamsDefaultClient>false</IsTeamsDefaultClient>
    <BluetoothAdvertisementEnabled>true</BluetoothAdvertisementEnabled>
    <SkypeMeetingsEnabled>false</SkypeMeetingsEnabled>
    <TeamsMeetingsEnabled>true</TeamsMeetingsEnabled>
    <DualScreenMode>true</DualScreenMode>
    <SendLogs>
        <EmailAddressForLogsAndFeedback>RanierConf@contoso.com</EmailAddressForLogsAndFeedback>
        <SendLogsAndFeedback>true</SendLogsAndFeedback>
    </SendLogs>
    <Devices>
        <MicrophoneForCommunication>Microsoft LifeChat LX-6000</MicrophoneForCommunication>
        <SpeakerForCommunication>Realtek High Definition Audio</SpeakerForCommunication>
        <DefaultSpeaker>Polycom CX5100</DefaultSpeaker>
    </Devices>
    <Theming>
        <ThemeName>Custom</ThemeName>
        <CustomThemeImageUrl>folder path</CustomThemeImageUrl>
        <CustomThemeColor>
            <RedComponent>100</RedComponent>
            <GreenComponent>100</GreenComponent>
            <BlueComponent>100</BlueComponent>
        </CustomThemeColor>
    </Theming>
</SkypeSettings>

If the XML file is badly formed (meaning a variable value is of the wrong type, elements are out of order, elements are unclosed, and so on), settings found up to the point where the error occurs are applied, then the rest of the file is ignored during processing. Any unknown elements in the XML are ignored. If a parameter is omitted, it remains unchanged on the device. If a parameter's value is invalid, its prior value remains unchanged.

| Element | Type | Level | Description |
|---------|------|-------|-------------|
| <SkypeSettings> | Container | | Container for all elements. Required. |
| <AutoScreenShare> | Boolean ❷ | First ❶ | If true, auto screen share is enabled. |
| <HideMeetingName> | Boolean ❷ | First ❶ | If true, meeting names are hidden. |
| <UserAccount> | Container | First ❶ | Container for credentials parameters. The sign-in address, Exchange address, or email address are usually the same, such as RanierConf@contoso.com. |
| <SkypeMeetingsEnabled> | Boolean ❷ | First ❶ | Enabled by default. |
| <SkypeSignInAddress> | String ❸ | | The sign-in name for the console's Skype for Business device account. |
| <ExchangeAddress> | String ❸ | | The sign-in name for the console's Exchange device account. If the ExchangeAddress is omitted, the SkypeSignInAddress will not automatically be re-used. |
| <DomainUsername> | String ❸ | | The domain and user name of the console device, for example Seattle\RanierConf. |
| <Password> | String ❸ | | The password parameter is the same password used for the Skype for Business device account sign-in. |
| <ConfigureDomain> | String ❸ | | You can list several domains, separated by commas. |
| <TeamsMeetingsEnabled> | Boolean ❷ | First ❶ | Disabled by default. The XML file is considered badly formed if both <SkypeMeetingsEnabled> and <TeamsMeetingsEnabled> are disabled, but it's acceptable to have both settings enabled at the same time. |
| <IsTeamsDefaultClient> | Boolean ❷ | First ❶ | Disabled by default. |
| <BluetoothAdvertisementEnabled> | Boolean ❷ | First ❶ | Enabled by default. |
| <DualScreenMode> | Boolean ❷ | First ❶ | If true, dual screen mode is enabled. Otherwise the device will use single screen mode. |
| <EmailAddressForLogsAndFeedback> | String ❸ | | Sets an optional email address that logs can be sent to when the "Give Feedback" window appears. |
| <SendLogsAndFeedback> | Boolean ❷ | | If true, logs are sent to the admin. If false, only feedback is sent to the admin (and not logs). |
| <Devices> | Container | First ❶ | The connected audio device names in the child elements are the same values listed in the Device Manager app. The configuration can contain a device that does not presently exist on the system, such as an A/V device not currently connected to the console. The configuration would be retained for the respective device. |
| <MicrophoneForCommunication> | String ❸ | | Sets the microphone that will be used as the recording device in a conference. |
| <SpeakerForCommunication> | String ❸ | | Device to be used as the speaker for the conference. This setting is used to set the speaker device that will be used to hear the audio in a call. |
| <DefaultSpeaker> | String ❸ | | Device to be used to play the audio from an HDMI ingest source. |
| <Theming> | Container | First ❶ | One of the features that can be applied using an XML file is a custom theme for your organization. You will be able to specify a theme name, background image, and color. |
| <ThemeName> | String ❸ | | Used to identify the theme on the client. The theme name options are Default, one of the provided preset themes, or Custom. Custom theme names should always use the name Custom. The client UI can be set at the console to the Default or one of the presets, but applying a custom theme must be set remotely by an Administrator. Preset themes include: To disable the current theme, use "No Theme" for the ThemeName. |
| <CustomThemeImageUrl> | String ❸ | | Required if using a custom theme, otherwise optional. See the Custom Theme Images section below for more details on the custom theme image. |
| <CustomThemeColor> | Container | | Container for the <RedComponent>, <GreenComponent>, and <BlueComponent> values. These values are required if using a custom theme. |
| <RedComponent> | Byte (0-255) | | Represents the red color component. |
| <GreenComponent> | Byte (0-255) | | Represents the green color component. |
| <BlueComponent> | Byte (0-255) | | Represents the blue color component. |

❶ All of the first-level elements are optional. If a first-level element is omitted, all of its child parameters remain unchanged on the device.

❷ A boolean flag can be any of the following: true, false, 0, or 1. Boolean or numeric values left empty might render the XML malformed, so there would be no changes to the settings.

❸ If a string parameter is present, empty, and empty is a valid value, the parameter is cleared on the device.

Manage console settings using an XML configuration file

At startup, if a Microsoft Teams Rooms console finds an XML file named SkypeSettings.xml at the location C:\Users\Skype\AppData\Local\Packages\Microsoft.SkypeRoomSystem_8wekyb3d8bbwe\LocalState, it applies the configuration settings indicated by the XML file and then deletes the XML file. Depending on how many Microsoft Teams Rooms devices your enterprise has and how you choose to configure them, there are a number of ways to place the XML configuration file. Once the file is pushed to the console, restart it to process the configuration changes. The XML configuration file is deleted after it is successfully processed. The management methods suggested for Microsoft Teams Rooms devices are discussed in: You are free to use any method you like so long as you can use it to transfer files and trigger a restart on the console device.
The file must be readable, writable, and deletable by the device's local user account (preferably, it should be owned by and have full privileges granted to that user). If the file permissions are not set correctly, the software may fail to apply the settings, may fail to delete the file after successful processing, and could even crash.

Custom Theme Images

The custom theme image file must be placed in C:\Users\Skype\AppData\Local\Packages\Microsoft.SkypeRoomSystem_8wekyb3d8bbwe\LocalState; just enter the file name and extension in the <CustomThemeImageUrl> variable. The image file should be exactly 3840 x 1080 pixels and must be in one of the following file formats: jpg, jpeg, png, or bmp. If your organization wants a custom image, a graphic designer will find our Custom Theme Photoshop Template useful. It contains further detail on where to place various elements in a theme image and which areas appear on consoles and displays. The XML configuration file must be updated at device startup for the theme image to be recognized. Once the new XML file is processed and deleted, the theme graphic file will also be deleted from the directory.
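As a sketch of the apply-what's-present behavior, a few lines of Python can read a minimal SkypeSettings.xml. The account address is the sample value from above, and this is not the console's actual parser, only an illustration of the file's shape.

```python
import xml.etree.ElementTree as ET

# A minimal, hypothetical SkypeSettings.xml containing only two settings.
MINIMAL = """<SkypeSettings>
  <AutoScreenShare>true</AutoScreenShare>
  <UserAccount>
    <SkypeSignInAddress>RanierConf@contoso.com</SkypeSignInAddress>
  </UserAccount>
</SkypeSettings>"""

def read_settings(xml_text):
    """Collect the leaf elements that are present in the file."""
    root = ET.fromstring(xml_text)
    return {el.tag: (el.text or "").strip() for el in root.iter() if len(el) == 0}
```

An element omitted from the file simply never appears in the resulting dictionary, matching the rule that omitted parameters remain unchanged on the device.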
Inspired by a recent post from Kasey Clark in which he plotted all his Runkeeper runs (tracked via GPS) on a single map, I thought I'd explore my own running from the last few years and see how it might be visualised in an interesting way.

Using his method, I exported all my runs as one big zip file of gpx files (found under your profile), then imported them all into Google Earth. Here is an image of all my runs around Sydney's inner west over the last few years. Most of the time I run along the Cooks River.

I also had a bit more fun with it, and for this you will need the Google Earth plugin for your browser - if you can see the following images you already have it, and if not there should be a link for you to get it. The city2surf is one of the world's biggest fun runs and I have done it for the last few years. By creating a Google Earth tour, you can create an animation of your runs. I tweaked the gpx code in a text editor (and Excel) to make my 2010 and 2011 runs start at the same time, and then by using the tour gadget you can embed the animation on your website. Perhaps over time I will add further years' runs to this animation. You'll need somewhere to host the exported kml files from Google Earth. There is a small lag at the start of the video, and if it doesn't work, see the video on YouTube. I'm looking to knock off that 2011 time this year in a few weeks!

Edit 1: I have added a friend from 2010 and 2011.

The next tour doesn't look so great, but it would look great in San Francisco or New York City. Google Earth has 3D buildings built in, and by turning these on you can visualise your runs in 3D. The following shows my Bridge Runs across the Sydney Harbour Bridge, finishing at the Opera House. Runkeeper doesn't quite get the elevation of the bridge correct, so it looks like I'm running across water. As mentioned, in cities with lots of rendered 3D buildings this would look great.
I haven't yet bothered to tweak the start times for each of the races to be exactly the same, as it's a bit fiddly, but you get the point. Again there is a small lag, and if it doesn't work, see the video on YouTube. If you can't see the above videos and the Google gadget seems really buggy, I have uploaded them to YouTube, where you can see the city2surf and bridge runs videos.
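For anyone who would rather not tweak the gpx timestamps by hand in a text editor and Excel, here is a small sketch of the start-time alignment. The timestamps are made up, and a real file would be rewritten with a proper GPX parser rather than on bare strings.

```python
from datetime import datetime

GPX_TIME = "%Y-%m-%dT%H:%M:%SZ"  # the <time> format used in gpx files

def shift_track(times, new_start):
    """Shift every timestamp in a track so the first one lands on new_start,
    preserving the gaps between trackpoints."""
    parsed = [datetime.strptime(t, GPX_TIME) for t in times]
    delta = datetime.strptime(new_start, GPX_TIME) - parsed[0]
    return [(t + delta).strftime(GPX_TIME) for t in parsed]

# Two (invented) trackpoints from a 2011 run, realigned to a 2010 start time:
run_2011 = ["2011-08-14T08:03:12Z", "2011-08-14T08:03:22Z"]
aligned = shift_track(run_2011, "2010-08-08T08:00:00Z")
```

With both years' tracks shifted to the same start, the Google Earth tour plays them side by side.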
OpenWallet for Android The goal of this project is to build the best free, libre, and open source light wallet for multiple cryptocurrencies (Bitcoin, Ethereum, Ripple, etc) on Android. Security and usability are, of course, the priorities. For this reason, your private keys never leave the device. Luckily, this wallet is compliant with BIP32, BIP39, and BIP44. A single 24-word mnemonic phrase may be used to recover each and every one of your cryptocurrency wallets. Contributions aren't just welcome, they're financially encouraged! (Well, they will be soon. Hang tight!) Eventually, we hope to set up a cryptocurrency-based (likely BTC or ETH) rewards system for contributions, be they new features, new translations, or new coins. As always, all you've gotta do is fork and pull! By the way, if you'd like to add new coins, check out this document provided by Coinomi: document. You should find that a lot of Coinomi's documentation applies to OpenWallet as well. This is because OpenWallet was forked from Coinomi before it ditched the open source model in favor of a more proprietary, "source-available" model. OpenWallet is, and forever will be, free, libre, and open source! Anyway, back to the coins. Generally you'll need: - Electrum-server support - OpenWallet core support - A beautiful vector icon - BIP32, BIP39, and BIP44 compliance How to Go About Building an Independent Fork of This App First off, ensure that your client device is running Android Lollipop or later. Second, ensure that your client device is running an ARM processor as this project is currently incompatible with x86_64/amd64. Start up Android Studio and import this repository (openwallet-android) in its entirety (click on settings.gradle). When that's done, install Version 21 of the SDK. Note that this project must be built with JDK 7 as it is currently incompatible with JDK 8. Once built, enable developer options on your Android smartphone as well as USB debugging. 
Plug your smartphone into your computer, and install your shiny new app through Android Studio.

How to Go About Releasing an Independent Fork of This App

- Change the following:
  - in strings.xml the app_name string to "OpenWallet" and app_package to com.openwallet.wallet
  - in build.gradle the package from "com.openwallet.wallet.dev" to "com.openwallet.wallet"
  - in AndroidManifest.xml the android:icon to "ic_launcher" and all "com.openwallet.wallet.dev." to "com.openwallet.wallet."
  - remove all ic_launcher_dev icons with
  - setup ACRA and ShapeShift
- Then perform the following in Android Studio:
  - Build -> Clean Project and without waiting...
  - Build -> Generate Signed APK and generate a signed APK. ... and now you can grab yourself a nice cup of tea.
- Test the APK. Install the APK with adb install -r wallet/wallet-release.apk
- Upload everything to the Play Store, and continue checking for any errors. If all goes well, you're good to go!
- Create a git release commit:
  - Create a commit with a detailed description
  - Create a tag with the version of the released APK using git tag vX.Y.Z <commit-hash> or something similar

All previous history is available at the following repository, which is an unmodified fork of Coinomi prior to its license change: https://github.com/CosmoJG/open-coinomi-android
It has to be this way, since unnamed parameters are identified by position. We can state that a function should take a smart pointer only if it needs to participate in the widget's lifetime management. Otherwise it should accept a widget*, if the argument may be nullptr. Otherwise, and ideally, the function should accept a widget&.

Use algorithms that are designed for parallelism, not algorithms with unnecessary dependency on linear evaluation.

In particular, we'd really like to have some of our rules backed up with measurements or better examples.

The C programming language uses libraries as its primary method of extension. In C, a library is a set of functions contained within a single "archive" file. Each library typically has a header file, which contains the prototypes of the functions contained in the library that may be used by a program, and declarations of special data types and macro symbols used with these functions.

Where possible, automatic or static allocation is usually simplest because the storage is managed by the compiler, freeing the programmer of the potentially error-prone chore of manually allocating and releasing storage. However, many data structures can change in size at runtime, and since static allocations (and automatic allocations before C99) must have a fixed size at compile time, there are many situations in which dynamic allocation is necessary.

Some rules are hard to check mechanically, but they all meet the minimal criteria that an expert programmer can spot many violations without too much trouble.
This section looks at passing messages so that a programmer doesn't need to do explicit synchronization.

When deep copies of objects need to be made, exception safety should be taken into consideration. One way to achieve this when resource deallocation never fails is:

Other rules articulate general principles. For these more general rules, more detailed and specific rules provide partial checking.

In appropriate contexts in source code, such as for assigning to a pointer variable, a null pointer constant may be written as 0, with or without explicit casting to a pointer type, or as the NULL macro defined by several standard headers. In conditional contexts, null pointer values evaluate to false, while all other pointer values evaluate to true.

What looks to a human like a variable without a name is to the compiler a statement consisting of a temporary that immediately goes out of scope. By reusing s (passed by reference), we allocate new memory only when we must extend s's capacity.
OPCFW_CODE
Working on new methods and tools to identify browser exploits, I recently came across a common question again in a forum: "Is it possible to detect what browser extensions I have installed?" That information would be of value to various people for several reasons. Online attackers and snoops stand to gain most from it. Examples: Besides the usual suspects, who else would benefit from knowing which browser extensions are installed on a given client? Take online advertising firms, for example. Addon insights can help them build target profiles based on user interests and preferences. Online advertisers and malicious actors alike need as much data from the client as possible, and the local browser stands at attention to do their bidding. Detecting extensions is only one of many ways to distinguish one machine's environment from another. Ideally, traditional web browsers and their extensions should be built with your security and privacy in mind and shouldn't be detected by any external service. Since that is not the case, we better face the harsh reality together and ask: How much - or little - does it take to query what extensions are installed on a browser and get a list of them? Let's dive in. Google Chrome and Mozilla Firefox are the two browsers we're going to run tests against. They have the most extensive collection of various browser plugins you can install from their respective web stores. The way Google Chrome handles extension queries is through its chrome-extension:// URI scheme. This URI operator handles everything related to a browser extension. It works the same way with all extensions present on Chrome: chrome-extension://<UNIQUE EXTENSION ID>/<RESOURCE BEING REQUESTED> Just like any ordinary URL operator - file:///, https://, http://, et al. - we can load its resources directly in the browser. But before we do any of that, we need the extension ID for the extension we're trying to detect. 
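The resource-URL pattern above is easy to construct programmatically. Here is a minimal sketch; the extension ID shown is a placeholder, not a claim about any real extension:

```javascript
// Sketch: build the probe URL for a Chrome extension resource, following the
// chrome-extension://<UNIQUE EXTENSION ID>/<RESOURCE BEING REQUESTED> pattern.
function extensionResourceUrl (extensionId, resourcePath) {
  // Strip any leading slash so we don't emit a double slash after the ID.
  return `chrome-extension://${extensionId}/${resourcePath.replace(/^\//, '')}`
}

// Placeholder ID, for illustration only:
console.log(extensionResourceUrl('aaaabbbbccccddddeeeeffffgggghhhh', '/manifest.json'))
// chrome-extension://aaaabbbbccccddddeeeeffffgggghhhh/manifest.json
```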
Here's how we obtain it: We can look up any extension on Chrome in the Google Chrome web store and find the extension's unique ID in the URL, as in the screenshot above. This is the same ID we're going to use with the chrome-extension:// operator. Now that we've obtained the ID, we need a file/web-resource to request that sits inside the extension, to see if it's available to interact with. I found out that every Chrome extension has a file called manifest.json within its root directory. This file contains information like the version of the installed Chrome extension, some file paths, and more. Here's what the manifest JSON file looks like when you render it inside the browser. This example is taken from a popular Chrome extension, Google Translate. The request to render this page looks like this: If you have Diffeo installed, and you load that URL from within your Chrome browser, it loads the manifest.json file along with all the attributes defined within it. This includes the extension's version. Now the question becomes: if we were to collect enough Chrome extension unique IDs, going by plugin popularity, would we be able to request the manifest.json of each installed extension and find out, at the bare minimum, what version it is? I will then install all extensions and see if I can programmatically detect each one after flipping them on. Well, it works - kind of. Here's what it looked like from my console: All extensions denied the request except one. From the one extension that didn't deny my request for manifest.json, I was able to retrieve and query the entire JSON object stored. Note the many variables defined within. They include the extension description, the version number, and different operators to use, such as the blob URI scheme. Interestingly enough, also included: the web resources (*.html, *.js, *.css, et al.) used by the extension itself. So why did all the other extensions deny my client-side request?
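The manifest probe can be sketched like this. This is an assumption-laden sketch, not the author's exact script: it must run inside a page loaded in Chrome, and it only succeeds for extensions that actually expose manifest.json to web pages.

```javascript
// Pure helper: pull the version string out of a parsed manifest object.
function versionFromManifest (manifest) {
  return (manifest && manifest.version) || null
}

// Sketch: request an extension's manifest.json and report its version.
// Only meaningful inside Chrome, and only for extensions whose resources
// are reachable from ordinary pages.
async function getExtensionVersion (extensionId) {
  try {
    const res = await fetch(`chrome-extension://${extensionId}/manifest.json`)
    if (!res.ok) return null // blocked, disabled, or not installed
    return versionFromManifest(await res.json())
  } catch (e) {
    return null // request denied outright
  }
}
```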
It turns out the folks at Google know about this method being used for tracking, and they have limited the scope in which clients can communicate with extensions. In the extension that allowed the request, what stands out is the content of the JSON variable "web_accessible_resources". Let's see how it differs: Web-accessible resources are in charge of limiting which file resources the client is allowed to access from within the browser. This extension happened to define this interaction by using "/*" as one of the variables. The problem with this particular definition is that the wildcard "/*" includes every resource within the extension, including manifest.json (which is located at /manifest.json). So you can see why it responded to my query. We now know that, thanks to Google's foresight, we can't just request any web resource from an extension. What we can do is request whatever is available to us. Once we go into each extension and visit manifest.json, we can read the web_accessible_resources attribute. That way, we can find web resources and request them to see if they're available. Here I've modified the request sent to each extension. It contains a web resource accessible by the client. If the web resource list allows manifest.json, then I use fetch to retrieve it for processing and finding out what the version number is. If it doesn't allow manifest.json, then I request something else that it does allow. When making the request, if the status returned is 200, then the extension is installed, and it exists within the browser environment. If it returns a 404, then the extension is either off or does not exist. What if it returns neither? In that case, Chrome is messing with our request, possibly leaving it pending indefinitely, so that we cannot determine whether or not the user has installed the extension. As you can see, it works like a charm. My script was able to detect all my installed extensions by side-loading available resources.
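The resource-selection and status-mapping logic just described can be sketched as two small helpers. This is a hedged sketch of the approach, not the author's code; the status semantics are exactly those described above.

```javascript
// Sketch: given the web_accessible_resources list from an extension's
// manifest, pick a path the client is actually allowed to request.
function chooseProbePath (webAccessibleResources) {
  const list = webAccessibleResources || []
  // A "/*" (or "*") wildcard exposes everything, including manifest.json.
  if (list.includes('/*') || list.includes('*')) return 'manifest.json'
  // Otherwise fall back to the first concrete (non-wildcard) resource.
  const concrete = list.find(p => !p.includes('*'))
  return concrete ? concrete.replace(/^\//, '') : null
}

// Sketch: map the probe's status code onto the states described above:
// 200 = installed and on, 404 = off or absent, anything else = unknown.
function classifyProbeStatus (status) {
  if (status === 200) return 'installed'
  if (status === 404) return 'off-or-absent'
  return 'unknown'
}
```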
Since everything seemed to be working as intended, I exported the script to an external server. When I tested the same script on an external server, I got mixed test results. For each extension that was turned on, I would receive a blank result (as if the array index didn't exist). For each extension that was turned off, I would receive an error (neither status 404 nor 200). This was odd, given that when I loaded the script locally, it worked perfectly fine. Then I realized what could be happening: it is possible that Chrome handles specific requests differently based on whether they're loaded using file:/// versus https:// or http://. This could pose an issue. Still, if I find that errors are handled in a consistent manner when the client makes an external request from a script loaded remotely, I should be able to determine if the extension exists. This approach would require processing errors as "extension not existing" and blank responses as "extension existing" using try-and-catch methods, and then transforming the findings to their expected result (received error == "extension does not exist", no error == "extension does exist"). For Chrome in particular, Google supports direct calls from/to extensions - as long as the extension ID is present. Google outlines how the API "includes support for exchanging messages between an extension and its content scripts or between extensions." How to make such requests is explained here. var myPort = chrome.extension.connect('Extension_ID', Object_to_send); So what's the takeaway? Extensions can be powerful tools, yet many are developed without giving much thought to security and privacy. In some cases, extensions allow access to their resources in any context, and this can pose a risk to individual users and, through them, to the whole network. This post outlines only one method of detecting extensions. There are other methods for achieving the same results.
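The try-and-catch transformation described above could look like the following sketch. It assumes the error/blank behaviour holds consistently for remotely served scripts, which the post itself only conjectures:

```javascript
// Sketch: when the detection script is served over https:// rather than
// file:///, a missing extension reportedly surfaces as a thrown error and a
// present one as a blank result. Map both outcomes to a boolean.
async function detectRemotely (probe) {
  try {
    await probe() // e.g. () => fetch(<chrome-extension resource URL>)
    return true   // no error thrown: treat as "extension does exist"
  } catch (e) {
    return false  // error thrown: treat as "extension does not exist"
  }
}
```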
Maybe you are fed up with ad networks and marketers that are violating your privacy, using the browser and its extensions as their gateway to your data? Or perhaps you're in IT and worried about local browsers and their extensions putting your organization at constant risk of attacks and de-anonymization? Then you want to minimize attribution based on extension fingerprinting. This measure is especially vital for users conducting sensitive OSINT research and investigations online. On the web, appearing unique in the eyes of adversaries who know where to look puts a bull's-eye on your back. To prevent attribution and de-anonymization, beware of - and drop - those chatty browser extensions. And if such considerations are essential in your line of work (as a cybersecurity threat hunter, for example, or as a fraud investigator) - consider switching to a secure OSINT research platform with managed attribution altogether.
OPCFW_CODE
[00:09] <OvenWerks> Eickmeyer: I am also having trouble with 1.9.19 in 21.10 [00:13] <OvenWerks> Eickmeyer: autojack seems to hang because I get no errors. I just stops after creating it's own jack client [00:17] <OvenWerks> hmm, autojack does not show the jack client as being actually created successfully, but the jack log does [00:18] <OvenWerks> Carla crash when starting the "audio engine" [00:23] <OvenWerks> OK, lets reboot [00:29] <OvenWerks> Eickmeyer: ok, I first stopped jack in controls. Then rebooted. (this made sure autojack did nothing with jack after reboot) [00:29] <OvenWerks> Then I started jack with jack_control start [00:32] <OvenWerks> Then I used jack_lsp to list jack's ports... $ jack_lsp [00:32] <OvenWerks> Segmentation fault (core dumped) [00:34] <OvenWerks> Ok so look at the jack log: everything looks normal till: New client 'PulseAudio JACK Sink' with PID 1764 [00:34] <OvenWerks> Thu Jul 22 17:27:29 2021: Client 'PulseAudio JACK Sink' with PID 1764 is out [00:36] <OvenWerks> Then pulse tries again... and starts spewing xruns, jack then fails to find pulse's port [00:37] <OvenWerks> finally pulse gives up... [00:39] <OvenWerks> OK, so I will reboot again. and remove the jackdbus detect module before starting jackdbus next time. [00:44] <OvenWerks> Eickmeyer: jackd 1.9.19 as built in 21.10 is thge problem. [00:45] <OvenWerks> another fresh reboot, pactl unload-module module-jackdbus-detect, jack_control start and pulse jack sink does not try to load. run jack_lsp and it crashes. [00:46] <OvenWerks> Eickmeyer: note also that the python jacklib also fails when creating a client. [00:56] <OvenWerks> Eickmeyer: my guess would be that one of the deps is a different version... does 1.1.17 use all the same deps? That would leave libdb as the only extra thing. What is the difference in versions for that lib from 20.04 to 20.10? [01:00] <OvenWerks> but that package is the same version all the way through. 
[01:06] <OvenWerks> Eickmeyer: using jackd instead of jackdbus also fails [01:07] <OvenWerks> Eickmeyer: that was incorrectjackd does does not fail, but jack clients also fail on jackd [01:08] <OvenWerks> 1.1.17 was working fine [01:18] <OvenWerks> Eickmeyer: https://bugs.launchpad.net/ubuntu/+source/jackd2/+bug/1937325 [01:19] <OvenWerks> Eickmeyer: you may wish to hit the "this bug affects me" link. [02:29] <Eickmeyer> OvenWerks: If you check the maintainer field on that, you'll find it's me, so the only thing we can do is report upstream to falktx. [02:30] <Eickmeyer> Also, it's 1.9.19 not 1.1.19. [02:35] <OvenWerks> oops [02:36] <OvenWerks> probably is was the 1 in 19 I saw. [02:41] <Eickmeyer> Probably. [02:41] <Eickmeyer> Either way... https://github.com/jackaudio/jack2/issues/776 [02:41] <Eickmeyer> Just filed this. [02:43] <Eickmeyer> OvenWerks: But yeah, I did the upload of 1.9.19, which makes me automatically the maintainer, which means the buck stops here for that bug. Hence, filed upstream. [02:44] <OvenWerks> so back to 1.9.17? [02:45] <Eickmeyer> That can't be done without some versioning gymnastics and a bunch of egg on my face, so I hope falktx might be able to fix it. [02:45] <OvenWerks> did he change the access to memlock etc? [02:46] <Eickmeyer> Not 100% sure, especially since it is only problematic against kernel 5.11 and not kernel 5.8. [02:46] <Eickmeyer> Might even be a dbus difference. [02:48] <Eickmeyer> If it doesn't get fixed before feature freeze, we'll downgrade it back to 1.9.17 with said versioning gymnastics. [02:51] <Eickmeyer> OvenWerks: Also, I'm out in Idaho today through Sunday due to a death in the family. [02:53] <OvenWerks> no worries, take care. I will keep my release schedule for next monday, I think. [02:54] <Eickmeyer> OvenWerks: Ok, sounds good. Shoot me an email if you need anything because I probably won't respond to IRC.
UBUNTU_IRC
On the accumulation of technical debt Working in a team that consistently ignores and accumulates tech-debt is like living in a share-house where no-one ever cleans up. What is tech-debt? “It is not the business of the botanist to eradicate the weeds. Enough for him if he can tell us just how fast they grow.” — C. Northcote Parkinson in Parkinson’s Law When developing software, especially in a commercial setting, programmers, and more often those who seek to manage programmers, will insist on cutting corners rather than doing a job properly. The time-savings get chalked up to what has become known as “technical debt”. The term itself was first coined by Ward Cunningham, inventor of the wiki, as an analogy to fiscal debt; the short-term benefits are like a loan that must be repaid with interest in the form of additional work developers need to do later to work around the issues created. Because brevity matters, the term “technical debt” quickly became ‘tech-debt.’ Tech-debt is what economists refer to as an externality; a cost that is not factored into the price of a good or service. Why do projects accrue tech-debt? “Technical debt usually comes from short-term optimisations of time without regard to the long-term effects of the change.” — Aaron Erickson in Don’t “Enron” Your Software Project Not all debt is bad as anyone who has ever bought a house would know. Borrowed money can be used to purchase a high-cost capital item that will quickly begin to generate revenues. So too with tech-debt. It’s a truism in software development, especially for startups, that ‘shipped is better than perfect.’¹ There will always be times when it makes commercial sense to cut some corners, whether that means skipping some unit tests, or opting for a quick brute-force solution you know has scalability issues that will bite you when your project has real users. 
Part of the allure of accruing tech-debt, especially for non-programmers making the decision to mortgage their business’ future, is that those accruing the debt are rarely the ones who have to repay it. Individuals choose to accrue tech-debt but it’s the business that pays the interest. How much tech-debt is too much? “Most people know they have technical debt in their code bases, but if you ask them how much, they struggle to quantify it. If you ask many developers where the technical debt exists, they’ll point to the part of the code base they dislike working with the most; but if you ask them what the financial impact is, while they may wax lyrical about the evils of technical debt and how it slows development, they don’t have a quantifiable answer.” — Glenn Bowering in Mapping and Costing Technical Debt Parkinson’s Law of Triviality humorously posits that members of an organisation give disproportionate weight to trivial issues. This ‘law’ is often cited by those keen on accruing tech-debt, who tend to deprioritise cleaning up code. Notions that ‘tests just slow down development’, or that the scalability issues within a brute-force algorithm are a ‘nice problem to have’ because they only appear when the business has ‘too many’ customers, can seem attractive; especially when the person in favour of the short-cut is not the one who will have to repay the debt. “For a manager, a code base high in technical debt means that feature delivery slows to a crawl, which creates a lot of frustration and awkward moments in conversation about business capability. For a developer, this frustration is even more acute. Nobody likes working with a significant handicap and being unproductive day after day, and that is exactly what this sort of codebase means for developers.” — Erik Dietrich in The Human Cost of Tech Debt When your project has accrued so much tech-debt that it’s embarrassing, your increasingly unhappy developers will start to leave.
Your most experienced developers will be the first ones out the door as they seek new opportunities and projects that are unencumbered. This kicks off a death-spiral for any project. Anecdotally, I have observed a correlation between programmer absenteeism and high levels of tech-debt. High levels of tech-debt constitute a barrier to entry for new hires, who have to learn how to code around poor legacy decisions. This makes new programmers unhappy. Given the importance of software to the working of the modern world, it’s arguable that paying down tech-debt imposes a massive burden on the economy as a whole. How much interest does the business really pay on the tech-debt you choose to accrue? It’s been alleged that up to 80%² of software development budgets are spent on software maintenance. The willingness to accrue tech-debt is the single largest contributor to this. As Martin Fowler explains, “We can choose to continue paying the interest, or we can pay down the principal by refactoring the quick and dirty design into the better design. Although it costs to pay down the principal, we gain by reduced interest payments in the future.” It’s impossible to estimate the interest rate that will be applied to the tech-debt you accrue. “If the short-term optimisations that occur most of the time had a minor effect on maintenance (something like a 6% mortgage, say), such decisions to take on technical debt would be just fine. However, we frequently allow software development organizations to take out technical loans that, if they were transparent, would throw the corporate treasury department into a spiral of panic.” — Aaron Erickson in Don’t “Enron” Your Software Project Incurring debt when you don’t know what the repayments will be is akin to a gambler borrowing from a loan-shark. Would you willingly take on fiscal debt at an unknowable interest rate?
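Fowler's principal-and-interest analogy can be made concrete with some toy arithmetic. The numbers below are illustrative assumptions, not figures from the article:

```javascript
// Toy model: a quick hack costs nothing today, but adds a fixed amount of
// rework ("interest") every sprint until the refactor ("principal") is paid.
function totalCost ({ principalHours, interestHoursPerSprint, sprintsDeferred }) {
  return principalHours + interestHoursPerSprint * sprintsDeferred
}

// Paying down immediately vs. deferring ten sprints (assumed numbers):
console.log(totalCost({ principalHours: 40, interestHoursPerSprint: 6, sprintsDeferred: 0 }))  // 40
console.log(totalCost({ principalHours: 40, interestHoursPerSprint: 6, sprintsDeferred: 10 })) // 100
```

The point of the toy model is only that deferred interest compounds against an unknowable rate: in practice neither `interestHoursPerSprint` nor `sprintsDeferred` can be estimated up front, which is exactly the article's complaint.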
Stephen Freeman argues persuasively in Bad code isn’t Technical Debt, it’s an unhedged Call Option that even if it is more expensive to do things properly, doing so reduces risk. As any business person ought to know, risk is always measured in dollars. All debt carries risks but you can mitigate them by keeping your tech-debt levels low so you have breathing room to take on debt when it’s really useful to do so. By keeping your debt periods short you reduce the risk that the interest will blow out. For green-fields projects there are plenty of ways to minimise the need to accrue tech-debt. Many of these can also be retroactively introduced into a debt-laden project as well, but accruing tech-debt quickly becomes an issue of developer culture. Like gambling addicts who will keep on borrowing despite being in over their heads, teams used to accruing tech-debt will just keep on skipping writing tests, avoiding maintaining documentation, and dancing around the need to refactor. And also like problem gamblers they will become expert in rationalising their decisions, and explaining away problems as being trivial, or non-issues. Once bad behaviour becomes entrenched it’s hard to change, but not impossible.

Record and estimate all tech-debt decisions

Intentionally accumulating tech-debt and failing to keep track of it is a sure road to disaster. Unrecorded decisions get forgotten very quickly. Debts don’t go away just because you’ve forgotten about them. It’s vital you ensure your tech-debt decisions go into your project backlog and get estimated along with all your other user-stories and tasks. Be sure to associate a real business benefit to your tech-debt tickets so your non-technical team members understand why they are important.
Have standardised contributing rules

Every project must have a README file that outlines what the project does, and how to build, run, test, and deploy it, and that documents any specific knowledge needed by a developer coming fresh to the project. All projects ought to also have a standard CONTRIBUTING file that outlines how to contribute new code to the project. This file needs to explain that you follow the forked git-flow process (for example), how to name branches, commits, and pull-requests, and what the definition of ‘done’ actually is. Adherence to this document must be enforced.

Pull-requests and peer-reviews

Programmers must be forbidden from pushing code directly to master branches and must always contribute code via a pull-request. Pull-requests must always be reviewed by peers before being merged. Peers must be encouraged to be critical of each other’s code, to nit-pick, and to call out tech-debt when they see it. Programmers must have a culture of taking such criticism on board without taking it personally, and must have the habit of not merging until their pull-requests are approved by a peer. All decent programming languages have linting tools. Use them. Agree up front on the coding standards you are going to use, and then ensure those standards are adhered to. Ideally your tests cover at least 90% of your code. There is a law of diminishing returns on code-coverage, so it’s sometimes not worth shooting for 100% coverage, but if your code-coverage is below 90% you will certainly be introducing problems. Tests must be regarded as the source-of-truth for the definition of the project, and not as an afterthought.

Clean up as you cook

It’s best to clean up and cook at the same time rather than accumulating a massive pile of dirty dishes. Coding is the same. Try to clean up any tech-debt before you commit your changes, rather than accumulating a massive pile of dirty dishes that quickly become fly-blown and infested with maggots.
Be ‘The Richest Developer In Babylon’ In 1926 George Samuel Clason wrote a book called The Richest Man In Babylon. In it he outlines, by way of parables, a very simple strategy for on-going financial security. The lesson of the book is simply to ensure you save 10% of everything you earn, and avoid going into debt (unless in doing so you can use the money you borrow to generate more money than the interest you pay). Clason also recommends spending 20% of everything you earn in paying down debt, and setting aside a further 10% for ongoing investments. The lessons are applicable to paying down tech-debt: Devote 10% of your development budget to planning, 10% to on-going developer education, and 20% to addressing your tech-debt. Invest the time to regularly refactor your code. Doing this properly is only possible if you have decent test coverage. Doing it badly just accrues more tech-debt. - Small amounts of tech-debt can serve genuine business needs but add medium-term and long-term risks to a project as the debt-pile grows. The risks inherent in taking on tech-debt are impossible to quantify in advance. - Managing tech-debt is not actually that hard if you get the culture right to start with. However, if you have a culture of sweeping tech-debt under the rug and ignoring it, you are inviting trouble. - No-one wants to live in a share-house where people don’t clean up after themselves. After they’ve been bitten by rats, and found maggots thriving under the carpet, most reasonable people will just move out. ¹ However it’s also true that ‘tested is better than shipped’. Tests are the ultimate source-of-truth for your project’s specification. ² 80% of statistics that cite the number 80% are just made up. ‘80% of’ is a common cypher for ‘many’. Likewise ‘20% of’ is a cypher for ‘some’, as only 20% of stats that cite the number 20% are likely to have been backed up by actual research.
OPCFW_CODE
import { getFrequencyFromNote } from './tuner'

export function squareOscillator () {
  // Share a single AudioContext: creating one per note leaks contexts,
  // and browsers cap how many may exist at once.
  const audioContext = new (window.AudioContext || window.webkitAudioContext)()

  function createSquareOscillator () {
    const oscillator = audioContext.createOscillator()
    oscillator.type = 'square'
    // OscillatorNode has no gain of its own; route through a GainNode.
    const gain = audioContext.createGain()
    gain.gain.value = 0.5
    oscillator.connect(gain)
    gain.connect(audioContext.destination)
    return oscillator
  }

  function createCustomOscillator () {
    const oscillator = audioContext.createOscillator()
    // Harmonic amplitudes of the custom timbre (cosine terms)
    const real = [0, 1, 0.8144329896907216, 0.20618556701030927, 0.020618556701030927]
    const imag = real.map(() => 0)
    const wave = audioContext.createPeriodicWave(
      Float32Array.from(real),
      Float32Array.from(imag),
      { disableNormalization: true }
    )
    oscillator.setPeriodicWave(wave)
    oscillator.connect(audioContext.destination)
    return oscillator
  }

  let bpm = 60
  // Duration of one beat in ms, scaled by the note's relative size
  const getNoteDuration = (size) => (1000 * 60 / bpm) * (size || 1)

  function playNote ({ frequency, note, octave, size }) {
    const oscillator = createCustomOscillator()
    const noteFrequency = typeof frequency === 'undefined'
      ? getFrequencyFromNote(note, octave)
      : frequency
    oscillator.frequency.value = noteFrequency
    oscillator.start()
    setTimeout(() => { oscillator.stop() }, getNoteDuration(size))
  }

  function playChord (notes) {
    notes.forEach(note => { playNote(note) })
  }

  function playMelody (noteList) {
    let wait = 0
    noteList.forEach(function (note) {
      const duration = getNoteDuration(note.size)
      setTimeout(() => { playNote(note) }, wait)
      wait += duration
    })
  }

  function setBPM (value) { bpm = value }

  return { playNote, playChord, playMelody, setBPM }
}
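The only pure logic in the module is the beat-duration arithmetic: a beat at `bpm` beats per minute lasts `1000 * 60 / bpm` milliseconds, scaled by a note's relative size. A standalone sketch (the helper name is hypothetical, not part of the module's API):

```javascript
// One beat at `bpm` beats per minute lasts (1000 * 60 / bpm) ms; a note's
// `size` scales that (1 = one beat, 0.5 = half a beat, and so on).
function noteDurationMs (bpm, size) {
  return (1000 * 60 / bpm) * (size || 1)
}

console.log(noteDurationMs(60, 1))    // 1000
console.log(noteDurationMs(120, 0.5)) // 250
```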
STACK_EDU
Novel: The Cursed Prince
Chapter 353 – Mrs. Adler’s Request

“To be truthful, I worry for my life,” Emmelyn said haltingly. “So, if you suddenly find that anything bad has happened to me, I am begging you to please take Harlow to your home. At least until Mars returns from Wintermere.”

She looked at Harlow, who was sleeping soundly in her arms. She looked so small and frail, but now that she had been fed, she no longer appeared as pitiful as before.

“Why would you say that?” Lily furrowed her brows. “You are doing very well. I, too, thought I would die when I gave birth to Louis. The pain was excruciating, and it took forever for me to push him out. I even cursed my husband and swore we would never have another baby. Look where I am now? Ha. Three kids and counting.”

Still, perhaps after the fourth or fifth... things became too hard for her, and she was too worn out to give each child attention of their own. She also thought she would never want another child. Gah... it was not worth it, she thought.

Now, unexpectedly, Emmelyn spoke about the same thing. All the suffering and pain she had gone through to bring Harlow into this world had been worth it. And she actually thanked her perverted husband for initiating sex so many times that they could get pregnant, and now she had this gorgeous baby girl. She wouldn’t mind having more children once Harlow was big enough, and they could share their love and attention with another child.

Athos was worried that the only reason Emmelyn had been spared was that she was pregnant with Mars’ child.
If Emmelyn had given birth to the baby, she would practically lose her leverage. The king might order her execution.

Oh, how quickly she had changed her mind once she could look at her adorable baby in her arms.

Mrs. Adler said she had been deeply missing her home in Wintermere. Since Emmelyn’s route might take a detour through Wintermere, Mrs. Adler would love to accompany her on the journey. She would return to her home and help Emmelyn get a ship to cross the ocean to Atlantea.

However, until the next day, the old witch didn’t seem to change her mind. So, at last, Emmelyn agreed. They would take the old wagon and go together.

She could feel more sympathy toward her biological mother now that she had become a mother herself. Emmelyn thought the first few childbirths must have been lovely for the late queen of Wintermere.

Emmelyn nodded. She was glad Lily asked that question.
Ever since Emmelyn had decided to fake her death, she thought she might as well blame Ellena for it. She had lost her entire family, had been kept in the enemy’s kingdom, had also been accused of murder, and would soon give birth – if she didn’t try to escape immediately – and everyone she loved might have bad luck and probably die.

“Are you... suspecting them of something? Do you think they would want to do something to you?” Lily asked Emmelyn in a whisper.

“Didn’t you say you are too old and ill to take on such a long journey home?” Emmelyn asked the old witch repeatedly, to make sure she didn’t misunderstand what Mrs. Adler wanted.

“Of course. I would love to go with you; you will be a great help, but I am worried about your health as well.”

She didn’t want to continue her words. It was too terrible to even imagine how she would ‘die’ and leave Harlow alone in the capital.

Lily’s soft laughter managed to lighten the mood, and Emmelyn laughed as well. She remembered her own ordeal. Lily was right. She had felt like she was in hell for over twenty hours; she had felt like cursing and shouting, blaming her husband for the labor pain.

“What are you talking about?” she asked Emmelyn in a hushed tone. “Please don’t think of bad things.
You are going to entice awful what you should your health for those who do that.” Lily’s soft fun been able to lighten the climate and Emmelyn laughed as well. She remembered her own scenario. Lily was appropriate. She experienced like she is in heck for over 20 hours that she felt like cursing and yelling, blaming her man for that labour suffering. When she read about Mrs. Adler’s prefer to consist of her, Emmelyn believed terrible. She attempted to politely deny her. However, the witch was obstinate. Emmelyn nodded. She was glad Lily inquired that concern. Considering that Emmelyn acquired decided to bogus her loss, she considered she might too pin the blame on Ellena for it. We’ll determine if the california king and the prince will be after Ellena for ‘causing Emmelyn’s death’. Emmelyn vowed to always become a adoring mum to her small children in the foreseeable future. She would stop being like her mother. She reported she was deeply losing out on her property in Wintermere. Considering the fact that Emmelyn’s route might take a detour to Wintermere, Mrs. Adler would love to compliment her around the journey. She would go back to her household and assist Emmelyn obtain a deliver to go across the water to Atlantea. Mrs. Adler mentioned she could obtain a well used wagon belonged to her neighbor for ten silver coins. And they also can use the wagon to depart Draec. Novel–The Cursed Prince–The Cursed Prince
[development] Drupal - to use objects or arrays
drupal at f2s.com
Wed Jan 10 08:58:08 UTC 2007

> A single entity is an object. (Eg, nodes, users, etc.) A list
> of things is
> an array. A list of entities is an array of objects.

If only that were true. I'll give you an example of where it isn't true.

I once created a "directory of people" for a client. I created a node type to hold the records so taxonomy could be used. I created a form builder function and a db update function to handle it. All's well. Then the client says "some of the people in the directory are actual site users, would be nice if those users could be joined to their record so they can edit their own entry". Made sense, so I used a tab in "my account" to allow for "per user" editing, and I initially tried to reuse my existing form builder and db update functions.

Guess what? When the form is submitted on "edit node" the update function gets an object; when it's submitted via drupal_get_form() it's an array. Solution: I just cast the array to an object in the db update function. But it demonstrates where Drupal is not consistent in its use of data type containers. Is it an array or an object? Who knows unless you test for it.

> where it really doesn't make a difference, using
> db_fetch_object() seems more
> common than db_fetch_array(), simply because there's fewer funky
> characters involved with objects than arrays. :-)

Yes, I see this. fetch_object is slower than fetch_array. Not by much, granted, but given we're always looking at performance issues and the speed of Drupal as a whole, I'm left wondering why developers are steered toward fewer funky characters over performance.

Imho, use an object when you need the functionality of an object, otherwise use arrays. Since Drupal has chosen (at this time) not to use classes, I'm surprised the -> operator exists at all anywhere, but it's everywhere!
I've discussed this on IRC before (changing objects to arrays) but it's probably a mega change too far, at this time at least ;) As Rowan says: "I'd be happy to see objects removed completely from Drupal..."

And Larry, sorry, I'm not looking to open a can of worms here (that's why I said mega change too far) but I just want to point out that if a page makes 100's of SQL queries and they are all served by db_fetch_object() over db_fetch_array() then you are effectively burdening the performance of your application in favour of "having a nice semantic sense" for the developer. If performance wasn't an issue I wouldn't give two hoots or care less. But performance IS an issue, so it's a valid "can of worms".

"personally find that it is easier to say $foo->bar['baz'] rather than $foo['bar']['baz']."

Exactly what I'm saying. To allow for ease of writing you're adding cpu cycles (not to mention confusion, as noted at the beginning of my email: "is it an array or an object? Dunno, must check it" - more cpu cycles). Remember, generally you write it once; computers run it millions and millions (if not billions) of times. For whose benefit should it be written: the developer's tastes, because it looks nice or is easier to write, or the target hardware where it's actually going to run?

Mind you, having said all that, those extra cycles pale into insignificance when you count the number of fstat() calls ;) You can probably tell I come from a coding environment where every cycle counts (machine vision). </rant>

More information about the development mailing list
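The array-vs-object inconsistency and the cast-to-object workaround described above can be sketched outside PHP. Here is a small Python analogy (the names `save_record` and `normalize` are illustrative, not Drupal's API): one call path hands the update function a dict, the other an object, and a single coercion step papers over the difference, much like PHP's `(object)` cast.

```python
from types import SimpleNamespace

def save_record(record):
    """Update function written for attribute access (the 'object' shape)."""
    return f"{record.name}: {record.email}"

def normalize(record):
    """Accept either shape; coerce a dict to an object, analogous to
    casting the array to an object in the db update function."""
    if isinstance(record, dict):
        return SimpleNamespace(**record)
    return record

# One code path hands us a dict, the other an object:
from_form = {"name": "Ada", "email": "ada@example.com"}
from_node = SimpleNamespace(name="Ada", email="ada@example.com")

print(save_record(normalize(from_form)))  # works for both shapes
print(save_record(normalize(from_node)))
```

The coercion concentrates the "is it an array or an object?" check in one place instead of scattering it through every consumer.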
In order to keep our WordPress platform up to date with best practices and compliance requirements for HTTPS, we will be making some minor improvements as of the week of March 19th; specifically, we will start the process of formally removing support for TLS1.0 & 1.1 on our WordPress Hosting. Normally such updates would go unnoticed; however, the removal of TLS1.0 & 1.1 will mean visitors on very old devices will be unable to access HTTPS-enabled websites going forward. This will mean a small percentage of visitors may struggle to connect to your site. These include:
- Visitors using IE 10 or earlier on Windows Vista or earlier
- IE 10 on early versions of Windows Phone
- Visitors using Safari 6 or earlier on OS X Snow Leopard or earlier
- Android phone users running 4.3 or earlier
- Java browsers version 7 or earlier
The good news is this equates to a very small percentage of visitors to an average WordPress website; from a recent sample we took across a set of hosted sites, we estimate the affected traffic to equate to just under 2% of visitors. Because a portion of this traffic is in turn automated bots, we believe the real percentage of affected traffic to be lower still. Those affected will receive a "could not connect" SSL-type error. The precise wording will change depending on the device, and the simple advice is that the end user needs to upgrade urgently. Much of the web will begin to end this support through 2018.
Why are we doing this? Simply put, to keep your data safe and encrypted in a way that is modern and compliant. TLS1.0 and 1.1 are older, deprecated protocols. These protocols have known security vulnerabilities which, while hard to exploit, have the theoretical potential to allow HTTPS traffic to be decrypted. They have since been replaced with TLS1.2 & 1.3. All modern browsers now support TLS1.2, and more and more support TLS1.3.
Due to the potential risks, PCI-DSS compliance mandates that all sites that are PCI compliant must drop support for TLS1.0 by June 2018, with TLS1.1 being dropped shortly after. This means that any major website on the Internet will be following suit, if it hasn't made the move already. Longer term, major browsers will stop accessing sites which still have TLS1.0 enabled at all. All of our WordPress Hosting is PCI compliant out of the box and is therefore updated in line with compliance.
What do I need to do? Nothing at all. This will happen automatically and you needn't take any action. This post is a simple advisory to let you know what will be happening behind the scenes. Once complete, sites may see some visitors unable to connect, but again the biggest reduction will be in the number of automated scripts that connect, the vast majority of which are not wanted. Bots such as GoogleBot, which indexes your site for Google, will still be able to connect as normal. Due to the nature of this update, it will not be possible to re-enable TLS1.0 or 1.1 for sites and containers once it has been disabled, and all containers will be changed without exception.
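For readers curious what "dropping TLS 1.0/1.1" looks like from the client side, here is a short standard-library Python sketch. It builds an SSL context whose protocol floor is TLS 1.2, which is effectively the policy the platform will now enforce: a peer that only speaks TLS 1.0/1.1 cannot complete the handshake.

```python
import ssl

# Build a client-side context that mirrors the new server policy:
# TLS 1.2 is the floor, so TLS 1.0/1.1-only peers cannot negotiate.
ctx = ssl.create_default_context()
ctx.minimum_version = ssl.TLSVersion.TLSv1_2

# A handshake against a host offering only TLS 1.0/1.1 would now fail
# with an SSL error instead of silently using a deprecated protocol.
print(ctx.minimum_version)
```

The same `minimum_version` knob is what the "could not connect" errors mentioned above trace back to, just applied on the server end.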
Ribbon Communications is a company with two decades of leadership in real-time communications. Built on world-class technology and intellectual property, the company delivers intelligent, secure, embedded real-time communications for today's world. The company transforms fixed, mobile and enterprise networks from legacy environments to secure IP and cloud-based architectures, enabling highly productive communications for consumers and businesses. With 64 locations in 27 countries around the globe, the company's innovative, market-leading portfolio empowers service providers and enterprises with rapid service creation in a fully virtualized environment. The company's Kandy Communications Platform as a Service (CPaaS) delivers a comprehensive set of advanced embedded communications capabilities that enables this transformation. To learn more, visit ribboncommunications.com. The Oracle Senior Developer will be responsible for the technical design, development, and code review of the product feature enhancements for the Ribbon Oracle and other technology platforms. This person will be actively involved in the design sessions with business analysts, the product owner, and other technical team members, and will be expected to bring technology and people leadership skills to the team. This role will actively lead the design, build solution components, develop unit tests, guide testing strategy and play an active role in ensuring quality through the development lifecycle. - Work closely with Oracle/Salesforce/Middleware and other technology platform leaders to understand integration requirements and build solutions to meet business requirements. - SME-level consultation with the project team and end users to identify application run-time environments. - Demonstrated ability to accurately establish the length and difficulty of tasks and projects, set reasonable objectives and goals, anticipate and adjust for problems/roadblocks and measure results against goals.
- Develop complex technical objects in accordance with the technical development standards and best practices. - Mitigate significant risks associated with projects which have a high technical complexity and/or involve significant challenges to the business. - Create and resolve Service Requests in collaboration with Oracle Support. - Provide generalized production technical support. - Design, develop, deploy, test and maintain technical objects that exist in the Oracle applications production environment, e.g., reports, interfaces, conversions and custom application extensions. - Create process flows, high-level functional and detailed technical design specifications from business requirements. - Directly accountable for the conversion of product requirements into an architecture and design that will become the blueprint for the solution being created. - Work as part of an agile product development team responsible for the timely execution of assigned tasks, and leverage experience to promote process improvement. - Lead conceptual and technical solution design for assigned feature improvements to meet business requirements while ensuring compliance with established architectural principles, standards, and processes. - Monitor and approve configuration, solution architecture, development and quality assurance aspects of the features. - Stay up to date on the latest technical offerings and explore integration tools as applicable
- New configuration model for PicketLink applications
- The Federation concept (Circle of Trust)
- Management capabilities
- Metrics and Statistics
- PicketLink Domain Model
- How to Use
- Using the CLI command line tool
- Deploying applications using the traditional configuration

The PicketLink AS7 Subsystem is an AS7 extension that provides an infrastructure to deploy and manage PicketLink applications using JBoss AS 7. It defines a domain model that can be manipulated using a client interface like the Management Console or the CLI, and it also makes it easier to configure any application as an Identity Provider or Service Provider.

|More information on the PicketLink AS7 Subsystem can be found on this thread.|

By providing a domain model, all the configuration is external to applications: there is no need to add or change configuration files inside the application being deployed. The subsystem is responsible, at deployment time, for properly configuring the applications being deployed according to the configurations defined in the domain model:
- The configurations in picketlink.xml are automatically created. No need to have this file inside your deployment.
- The PicketLink Authenticators (Apache Tomcat Valves) for Identity Providers and Service Providers are automatically registered. No need to have a jboss-web.xml file inside your deployment.
- The PicketLink dependencies are automatically configured. No need to have a jboss-deployment-structure.xml inside your deployment defining the org.picketlink module as a dependency.
- The Security Domain is automatically configured using the configurations defined in the domain model. No need to have a jboss-web.xml file inside your deployment.

The table below summarizes the main differences between the traditional configuration and the subsystem configuration for PicketLink applications:

|Configuration||Traditional Config||Subsystem Config|
|WEB-INF/picketlink.xml||Required||Not required. If present, it will be considered instead of the configurations defined in the domain model.|
|WEB-INF/jboss-web.xml||Required||Not required. The PicketLink Authenticators (Tomcat Valves) and the Security Domain are read from the domain model.|
|META-INF/jboss-deployment-structure.xml||Required||Not required. When the PicketLink Extension/Subsystem is enabled, the dependency on the org.picketlink module is automatically configured.|

When using the PicketLink subsystem to configure and deploy your identity providers and service providers, all of them are grouped in a Federation. A Federation can be understood as a Circle of Trust (CoT) in which applications share common configurations (certificates, SAML-specific configurations, etc.) and where each participating domain is trusted to accurately document the processes used to identify a user, the type of authentication system used, and any policies associated with the resulting authentication credentials. Each federation has one Identity Provider and many Service Providers. You do not need to specify for each SP the IDP that it trusts, because this is defined by the federation.

One of the benefits of using the PicketLink subsystem to deploy your applications is that they can be managed in different ways:
- PicketLink Console
The console provides a UI, based on the AS7 Administration Console, to help manage your PicketLink deployments. Basically, all the configuration defined in the domain model can be managed using the console.
- JBoss AS7 CLI Interface (Native Interface)
The CLI provides a command line tool from which you can query and change all the configuration defined for your applications.
- JBoss AS7 HTTP Interface
JBoss AS7 allows you to manage your running installations using the HTTP protocol with a JSON-encoded protocol and a de-typed RPC style API.

Metrics and statistics can be collected from applications deployed using the PicketLink subsystem.
This means you can get some useful information about how your Identity Providers and Service Providers are working:
- How many SAML assertions were issued by your identity provider?
- How many times did your identity provider respond to service providers?
- How many SAML assertions expired?
- How many authentications were done by your identity provider?
- How many errors happened? Trusted domain errors, signing errors, etc.

The PicketLink Domain Model is an abstraction for all PicketLink configuration, providing a single schema from which all configurations can be defined for Identity Providers or Service Providers, for example. The example below shows how the domain model can be used to configure an Identity Provider and a Service Provider. If you are looking for more examples of how to use the domain model, take a look at https://github.com/picketlink/picketlink-as-subsystem/blob/master/src/test/resources/picketlink-subsystem.xml. If you are familiar with the PicketLink configuration, you will find that the domain model schema is just an abstraction to make the configuration even easier. The configuration schema can be found here.

- Download JBoss Application Server 7.1.1.Final+.
- Download the PicketLink Subsystem libraries.
- Download PicketLink 2.1.2.Final+.

First, download and install a JBoss AS 7 distribution. Update your JBoss AS7 distribution with the latest PicketLink libraries; follow the instructions in our JBoss Modules section. Copy the PicketLink Subsystem library to [jboss.server.home.dir]/modules/org/picketlink/main. Change the module definition ([jboss.server.home.dir]/modules/org/picketlink/main/module.xml) with the contents below. Make sure the resources section is pointing to the correct libraries and their file names. Change your standalone.xml to add the PicketLink Extension. Take a look at the documentation for the PicketLink Quickstarts. Download the example web applications.
Extract the file and copy idp.war and sales.war to [jboss.home.dir]/standalone/deployments. Open both files (idp.war and sales.war) and remove the following configuration files:

|Don't forget to configure the security domains for both applications. Check the PicketLink Quickstarts documentation for more information.|

Open the standalone.xml and add the following configuration for the PicketLink subsystem. To make sure that everything is OK, start JBoss AS and try to access the sales application. You should be redirected to the idp application. If you want to log in to the sales and idp applications, don't forget to configure the security domain for both.

As said before, you can use the CLI command line tool to query the PicketLink subsystem configuration. Execute the [jboss.server.home.dir]/bin/jboss-cli.sh script and connect to your running JBoss Application Server instance. Now that you are connected, query the federation resource to see its child resources.

You can always use the traditional configuration (with all files inside your deployment) if you want to. To do that, you can:
1) Remove the PicketLink subsystem configuration from the JBoss AS7 configuration (standalone.xml) and add the missing files to your deployments: WEB-INF/picketlink.xml, WEB-INF/jboss-web.xml and META-INF/jboss-deployment-structure.xml.
2) Use the PicketLink subsystem configuration to automatically configure the dependencies and security domain, but have the configurations defined in the WEB-INF/picketlink.xml file inside your deployment considered instead of the subsystem configuration for IDPs or SPs.

When using the subsystem, it is recommended that you use only the domain model configuration for your IDPs and SPs. It is easier to configure and less intrusive than the traditional approach.
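As a rough illustration of the kind of standalone.xml fragment the text refers to, a federation grouping one Identity Provider and one Service Provider might look like the sketch below. This is a hand-written approximation based on the subsystem's example configurations; the element and attribute names (federation, identity-provider, service-provider, trust-domain, etc.) may differ between subsystem versions, so treat the published configuration schema as authoritative.

```xml
<subsystem xmlns="urn:jboss:domain:picketlink:1.0">
    <federation name="example-federation">
        <!-- One IDP per federation; SPs trust it implicitly. -->
        <identity-provider name="idp.war" url="http://localhost:8080/idp/"
                           security-domain="idp" signing="true">
            <trust>
                <trust-domain name="localhost"/>
            </trust>
        </identity-provider>
        <service-providers>
            <service-provider name="sales.war" url="http://localhost:8080/sales/"
                              security-domain="sp" post-binding="true"/>
        </service-providers>
    </federation>
</subsystem>
```

Note how the SP never names its IDP: as described above, the trust relationship is implied by membership in the same federation.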
Microsoft Build is a flagship event that primarily caters to the developer community, not unlike Google’s I/O and Apple’s WWDC. This event typically focuses on announcing new software and service features and hosting in-depth sessions for developers and other professionals utilizing Microsoft tools. In 2023, the spotlight of Build was on two significant buzzwords – A and I (Artificial Intelligence). Microsoft shared updates on its product enhancements through AI and its existing AI tools, most notably ChatGPT and Copilot, highlighting its commitment to broadening AI applications across its services. The event, inaugurated by Microsoft’s CEO Satya Nadella, witnessed several significant revelations. Here’s a rundown of the top announcements from Microsoft Build 2023: AI Copilot Integration with Windows 11 Microsoft is set to integrate its AI assistant, Copilot, into Windows 11. This is the same assistant currently employed across Microsoft Edge, Office apps, and GitHub. The Windows Copilot will be accessible via the taskbar. Upon clicking, it will launch the Copilot sidebar, where users can request assistance with text rewrites, explanations, and more in active apps. The feature is slated for public testing next month before being gradually rolled out to a broader user base. Open Access to Amazon App Store for All Android Developers Microsoft has declared the Amazon App Store open to all Android developers, inviting anyone interested in introducing their apps to Windows. Developers will have to submit their apps for testing in order to distribute them to Windows 11 devices. This development signifies a significant advancement for the Windows Subsystem for Android (WSA) as it showcases its stable functionality for developers. Expanded Plugin Support for Microsoft 365 Copilot The Microsoft AI assistant, 365 Copilot, is now compatible with three primary plugins: Teams messages extensions, Power Platform connectors, and tools incorporating technology from ChatGPT. 
Users will also have the flexibility to select from various third-party plugins. Microsoft has assured that all its Copilot and Bing Chat plugins will be built to the same standards that OpenAI applies for ChatGPT. Microsoft plans to incorporate 365 Copilot into Edge. Residing within the browser's sidebar, the tool can leverage site content to aid users in working on projects in Microsoft 365 apps, including but not limited to Outlook, Word, and Excel. This could involve drafting emails, inputting data into spreadsheets, and generating status updates based on chat threads. AI Upgrade for Windows Terminal Windows Terminal is in line for an AI upgrade via GitHub Copilot integration, providing developers access to an AI-powered chatbot within the Terminal. The chatbot can be used to execute various tasks, offer code suggestions, and explain errors. Microsoft has also hinted at the possibility of integrating GitHub Copilot into other developer tools, such as WinDBG. Microsoft has designated Bing as the default search engine within OpenAI's ChatGPT chatbot. The integration is currently rolling out to ChatGPT Plus subscribers, while free users can access a similar function through a plugin that brings Bing to ChatGPT. Introduction of New Plugins to Bing Chat Several new plugins are set to be introduced to Bing Chat, including those from Expedia, Instacart, Kayak, Klarna, Redfin, TripAdvisor, and Zillow. Developers can create plugins using a single platform that will be compatible with ChatGPT, Bing, Dynamics 365 Copilot, Microsoft 365 Copilot, and Windows Copilot. A Significant Boost to Windows 11 on ARM Unity Player is scheduled for general availability in early June for native Windows on Arm. This means that developers using the middleware engine can target Windows on Arm devices for native performance on existing and future titles. In addition, Visual Studio 17.6 will include Multi-platform App UI support.
These advancements aim to enhance the Windows 11 on ARM experience by drawing more developers to support the platform.
Outline: 1. Fermat's little theorem 2. Primality testing 3. Solovay-Strassen algorithm 4. Miller-Rabin algorithm 5. AKS algorithm (Manindra Agrawal, IIT Kanpur).

Primality testing is an important algorithmic problem. The basis for the AKS primality test is the following generalization of Fermat's little theorem to polynomials: for a coprime to n, n is prime if and only if (X + a)^n ≡ X^n + a (mod n). The AKS primality test (also known as the Agrawal-Kayal-Saxena primality test, or the cyclotomic AKS test) is a deterministic primality-proving algorithm: it returns PRIME if and only if n is prime. Lenstra and Pomerance later showed that a variant of the AKS test, using Gaussian periods, runs faster than the original; even so, the algorithm is, at present, still too slow to replace probabilistic tests as the practical way to test large numbers for primality.

The simplest tests come straight from Fermat's little theorem: if n is prime, then a^(n-1) ≡ 1 (mod n) for every a coprime to n, so while applying the Fermat test, a number only has to fail the test once to be ruled out as composite with certainty. The test is not foolproof, however: Carmichael numbers are composite yet pass the Fermat test for every base coprime to them, which is also why the AKS test is not logically equivalent to a repeated Fermat test. The Miller-Rabin and Solovay-Strassen tests repair this weakness, giving probabilistic algorithms: if n is composite, each independently chosen random base exposes it with probability at least 3/4 (Miller-Rabin) or 1/2 (Solovay-Strassen), and Miller-Rabin in particular remains significantly faster than the AKS test in practice. At the other extreme, a simple but very inefficient deterministic test uses Wilson's theorem, which states that p is prime if and only if (p-1)! ≡ -1 (mod p).
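To make the contrast above concrete, here is a short, self-contained Python sketch of the Fermat test and the Miller-Rabin test (the AKS test itself is omitted; it is far longer and, as noted, slower in practice):

```python
import random

def fermat_probably_prime(n, rounds=20):
    """Fermat test: if pow(a, n-1, n) != 1 for some base a, n is composite.
    Carmichael numbers (e.g. 561) can fool this test."""
    if n < 4:
        return n in (2, 3)
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        if pow(a, n - 1, n) != 1:
            return False  # a is a Fermat witness: n is definitely composite
    return True  # probably prime

def miller_rabin(n, rounds=20):
    """Miller-Rabin: write n-1 = d * 2^s and also inspect square roots of 1,
    which is what catches Carmichael numbers."""
    if n < 4:
        return n in (2, 3)
    if n % 2 == 0:
        return False
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False  # a witnesses compositeness
    return True  # composite n survives with probability <= 4**-rounds
```

With 20 rounds the error probability for a composite input is at most 4^-20, which is why Miller-Rabin is the workhorse in practice even though only AKS is deterministic.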
November 30, 2005
Does this look funny to you?
I installed Movable Type 3.2 over the weekend. When I came back to the office on Tuesday, I noticed that my main page displays incorrectly on my PC here. Though my stylesheet did not change, it appears that the main body of the blog is wider now, which means there's not enough room for the sidebar. As such, all the sidebar contents, such as the archive and blogroll links, show up at the bottom of the page instead. This doesn't happen at home on my XP machine, where I've tried it in both Firefox and IE. It's only here on my Win2K box with IE6 that it looks this way. So, I'd like to know: does this page look any different to you today than it did last week? If so, please leave a comment or drop me an email, and tell me what OS and browser you're using. Thanks!
Posted by Charles Kuffner on November 30, 2005 to Administrivia

Yes, the body is wider, and so your sidebar moves to the bottom. I am using IE version 6. Oh, and I'm also getting strange ASCII characters where I assume you had bullets in your lists of early voting hours and locations.

I also notice this on my work machine, which happens to be a Win2000 Professional / IE6 setup. On my home machines (WinXP), it looks normal.

Firefox 1.5 on Win2k looks OK, but I get the same funkiness you describe when I look in IE 6 (same OS). Of course, I only use IE when I absolutely have to - does your office mandate a browser? Or do you have to use multiple browsers in your line of work?

Well, from Safari on Mac OS X 10.3.9, hours for early voting are figured in euros.

It's not a new problem. I remember sending you mail about it quite a while ago (a year or so). Same symptoms tho - worked with everything except IE6 on Win2K.

Thanks everyone for the feedback. I've deleted the bullet points from the Early Voting post - I'd just copied from the Chron story, and for whatever reason, it got mangled in the rendering.
That's also happening to me now with quotation marks and dashes from copied text. Very annoying. I don't have control over my browser here. I remember your complaint now, Charles M, but until now my blog had always displayed correctly for me with IE6 and Win2K. Strange. Looks like it's off to the MT forums I go to see what others have tried to fix this sort of thing. Sigh...

I only saw the effect when I had the browser window too narrow. It was almost as if IE6 decided the text and sidebars had to be a certain size - if the window was too narrow, the sidebar went to the bottom. NS7 just puts in a horizontal scroll. Incidentally, I just tried it with IE6 and XP. It would appear to be an IE6 bug.

"for whatever reason, it got mangled in the rendering" - your page has a meta content set to a code page that is likely the cause of your problem:
meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1"
Try:
meta http-equiv="Content-Type" content="text/html; charset=utf-8"
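The charset suggestion in that last comment is the key point: mangled bullets, smart quotes, and dashes are the classic symptom of Windows-1252 punctuation being interpreted under a different charset. A small Python sketch of the mismatch (the sample string is illustrative):

```python
# Text copied from a Windows source typically carries Windows-1252
# punctuation bytes: smart quotes (0x93/0x94), bullet (0x95), en dash (0x96).
raw = "\u201csmart quotes\u201d \u2022 bullet \u2013 dash".encode("windows-1252")

print(raw.decode("windows-1252"))       # round-trips cleanly
print(repr(raw.decode("iso-8859-1")))   # 0x93-0x96 decode to C1 control chars
```

Under iso-8859-1 those bytes map to invisible C1 control characters, which the browser then renders as the "strange characters" described above; declaring utf-8 (and encoding the page accordingly) avoids the ambiguity.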
Ada: named access types and memory pools

I have read the following from wikibooks:

A pool access type handles accesses to objects which were created on some specific heap (or storage pool as it is called in Ada). A pointer of these types cannot point to a stack or library level (static) object or an object in a different storage pool. Therefore, conversion between pool access types is illegal. (Unchecked_Conversion may be used, but note that deallocation via an access object with a storage pool different from the one it was allocated with is erroneous.)

According to the bold text, if the named access types belong to the same memory pool, then the conversion is legal?

I'm implementing a composite pattern, and I think I could improve the design if the composites return references to their concrete leaves and composites, avoiding the use of the keyword "all" in the named access definition. I think I need memory pools to accomplish this, but it seems to be an advanced feature and I haven't found enough documentation to be sure I can implement one by myself correctly. I have been following the links shared in this post. Does anybody know of other resources, ten years on?

Why not a container, such as one of the Multiway_Trees?

Because my elements are limited.

Would this apply?

@SimonWright wow, I've never thought I could extend an access type! (type Holder is new T_Access). Thank you, I think that can work properly, I'll update the question when I can try it :)

@Albatros23 Technically speaking it ain't an extension of the access type. Just the derivation of a new access type using T_Access as a base. (consider 'type New_Integer is new Integer') But there is more: doing it this way you also pull all the primitive operations to the place in which the new type is defined. It is not the same type as the original although it is exactly the same as the original: the difference is in the name.

@darkestkhan thank you for correcting me.
What you mention about the primitive operations according to Simon's example is only possible if you derive the type publicly, right?

@SimonWright I'm a bit stupid, I think I can achieve what I intended with anonymous access types. If a client wants to get a concrete leaf or composite from the composite where they are contained, returning an anonymous access to the concrete leaf/composite would be enough. That way the client won't be able to free them, and the client will be able to modify them when needed.

Hm, given the description, it seems like subpools would work well for this: have a subpooled-pool "for the composite type", where each component has a subpool. But first, if there is any confusion as to what storage pools are, or how to use them in Ada, it is an excellent exercise to implement one, and in the case of subpools its interface is given in System.Storage_Pools.Subpools.

package Example is
   type Kpool is new System.Storage_Pools.Subpools.Root_Storage_Pool_With_Subpools
     with null record;

   overriding procedure Allocate
     (Pool                     : in out Kpool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : System.Storage_Elements.Storage_Count;
      Alignment                : System.Storage_Elements.Storage_Count) is null;

   overriding procedure Allocate_From_Subpool
     (Pool                     : in out Kpool;
      Storage_Address          : out System.Address;
      Size_In_Storage_Elements : System.Storage_Elements.Storage_Count;
      Alignment                : System.Storage_Elements.Storage_Count;
      Subpool                  : not null System.Storage_Pools.Subpools.Subpool_Handle)
     is null;

   overriding function Create_Subpool (Pool : in out Kpool)
     return not null System.Storage_Pools.Subpools.Subpool_Handle
     is (raise Program_Error);

   overriding procedure Deallocate
     (Pool                     : in out Kpool;
      Storage_Address          : System.Address;
      Size_In_Storage_Elements : System.Storage_Elements.Storage_Count;
      Alignment                : System.Storage_Elements.Storage_Count) is null;

   overriding procedure Deallocate_Subpool
     (Pool    : in out Kpool;
      Subpool : in out System.Storage_Pools.Subpools.Subpool_Handle) is null;
end Example;

----------
-- BODY --
----------

package body Example is
   -- Put implementations here.
end Example;

You can use Ada.Text_IO.Put_Line to "see" the data being passed, and from there figure out what you need to do with your data. (A good exercise is to make the pool hold an array of Storage_Element via a discriminated record, and from there do the memory management using those bytes.)
STACK_EXCHANGE
<?php namespace web\auth\unittest;

use lang\IllegalStateException;

trait PrivateKey {

  /**
   * Creates a new 2048 bits RSA private key.
   *
   * @return OpenSSLAsymmetricKey
   * @throws lang.IllegalStateException
   */
  public function newPrivateKey() {
    $options= ['private_key_bits' => 2048, 'private_key_type' => OPENSSL_KEYTYPE_RSA];

    // On Windows, search common locations for openssl.cnf *including*
    // the sample config bundled with the PHP release in `extras/ssl`
    if (0 === strncasecmp(PHP_OS, 'WIN', 3)) {
      $locations= [
        getenv('OPENSSL_CONF') ?: getenv('SSLEAY_CONF'),
        'C:\\Program Files\\Common Files\\SSL\\openssl.cnf',
        'C:\\Program Files (x86)\\Common Files\\SSL\\openssl.cnf',
        dirname(PHP_BINARY).'\\extras\\ssl\\openssl.cnf'
      ];
      foreach ($locations as $location) {
        if (!file_exists($location)) continue;
        $options['config']= $location;
        break;
      }
    }

    if (!($key= openssl_pkey_new($options))) {
      throw new IllegalStateException('Cannot generate private key: '.openssl_error_string());
    }
    return $key;
  }
}
STACK_EDU
Re: BOOTMGR is Missing - Linux Boot Possible
- From: Paul <nospam@xxxxxxxxxx>
- Date: Wed, 15 Jun 2011 04:41:42 -0400

Sorry guys I was given some duff info yesterday! I now have the PC (Packard Bell) available and on start up the actual error messages are:

Verifying DMI Pool Data ................
BOOTMGR is missing
Press Ctrl-Alt-Del to restart

Has a Win XP logo on front. So the question is pretty much the same: Should I be able to access the data from Ubuntu CD boot? Also, I have used the terminal "force" commands to gain access to a laptop drive previously. But I know very little about Linux software and possible corruption of data. Can using these force commands to enable disk access actually cause data corruption? The PC I'm looking at has valuable family pics on it apparently, and I certainly don't want to risk rendering the disk unrecoverable.

I'm not familiar with these "force" commands... If Linux can detect a valid file system, it's going to put an entry in /etc/fstab so it can be mounted. It will only mount if you click on the disk icon in a file manager. Linux will check the partition type in the MBR, but it is also going to do at least a basic check that the metadata is correct for the file system. If the metadata is bad, the mount step will fail (presumably read-only, so Linux won't attempt to overwrite anything). If you successfully mount the partition, and you decide to drag and drop files, *then* you are taking a chance, because now you're modifying the file system. But if all you're doing is looking, far less damage should result. Linux lacks any form of CHKDSK, so it can't repair damage as such. One utility has the ability to set the "dirty" bit, so the next time Windows is running, Windows can use CHKDSK to repair the file system. But that isn't the same thing as Linux doing the repairs itself. There is one commercial Linux utility ($99+) that claims to know how to repair NTFS.
Linux developers know enough to be able to write such a repair utility, but it hasn't happened yet.

"BOOTMGR is missing" by itself doesn't portend a total collapse. It could be triggered by just one missing thing. By the way, that error implies someone has installed, or attempted to install, Windows 7 or Vista. You may be seeing a WinXP logo on the computer case, but that doesn't imply there has not been some creative updating by the owner. Maybe they tried to install the beta of Windows 7 or something. The word "BOOTMGR" didn't get there on its own. (If you saw NTLDR, then it might be Win2K or WinXP.)

As a repair person, this is a laundry list you can use. Sure, people work with less, but this is intended to take the maximum care, so you can look the computer owner in the face and say you did your very best.

1) Determine the size of the hard drive in the computer.

2) Have on hand at least two empty disks the same size or larger than the drive you're working on. One disk will hold an "image", sector by sector, of the sick disk drive. The other spare disk is for saving scavenged files, if it comes to that. (If you cannot repair, you scavenge files from the sick disk to the second spare disk.)

3) Make a backup of the sick disk to the first spare. This is an example of a basic Linux command using disk dump.

dd if=/dev/hda of=/dev/hdb

Obviously, there is more to learn than that about using "disk dump", but that is an example of making a sector-by-sector copy. If you later break /dev/hda somehow, you can copy hdb back to hda at some point in the future. Typical performance of that (non-optimal) example is 13MB/sec. In my example, I'm assuming hdb is the same size or bigger than hda. In Linux, you need root to make the copy, so if you were using a Ubuntu disc, you might do it as

sudo dd if=/dev/hda of=/dev/hdb

To give another example, this is how I back up my WinXP partition, before doing crazy experiments. This uses a Windows version of dd and Windows syntax.
Using block size and count parameters speeds up the transfer by a factor of three. Note that "bs" is a multiple of 512 bytes, and bs*count = raw_size_of_partition. You need to get the raw size info, then factor the number, to come up with "good" values for bs and count. I try to keep bs below 512KB in size. Another nice aspect about using bs and count is that it transfers a precise amount of info, unlike the other command, which just stops when you hit the end of one of the two disks. You can see, this requires a bit of arithmetic.

dd if=\\?\Device\Harddisk0\Partition2 of=winxp.dd bs=129024 count=604031

If I had to back up my whole 250GB disk, it looks like this. This is Linux. 193536*1292056 = 250GB approx. The Linux "factor" command can help you factor the reported full size of the disk.

sudo dd if=/dev/hda of=/dev/hdb bs=193536 count=1292056

4) You also have the option of slaving the disk to another Windows PC and working on it. That is handy if you need CHKDSK, for example. But I would make my disk-to-disk copy first, before any CHKDSK runs. *Any* utility that makes "in-place" changes (changing forever the one copy you've got) runs the risk of making things worse. So before doing anything to the sick disk, you make a copy first. CHKDSK has been known to ruin a disk. That would typically happen if the disk was corrupted by a half-connected disk cable, and the disk was chock full of errors. CHKDSK would fix non-existent things, and have a merry old time for itself. That is known as "error multiplication", just ruining the disk forever. And this is why you make a backup before doing CHKDSK. If you're careful with their data, you can do as much experimenting as is necessary to fix it. As long as you have a properly made backup, there is nothing to worry about. The spare disks you use should be known good. If the spare disks show bad SMART data, as shown by HDTune or a similar utility, then find better disk(s) before starting your work.
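The bs/count factoring described above can be sketched in a few lines of Python. This is only a sketch: the helper name is my own, and it simply searches for a block size that is a multiple of 512, stays below 512KB, and divides the partition's byte size evenly, per the guidelines in the post.

```python
# Sketch: choose dd-friendly bs/count values so that bs * count equals
# the exact byte size of the partition, with bs a multiple of 512 and
# kept below 512 KiB, as suggested in the text above.
def pick_bs_count(size_bytes, max_bs=512 * 1024):
    # Try multiples of 512 below max_bs, largest first, until one
    # divides the size evenly.
    for bs in range(max_bs - 512, 0, -512):
        if size_bytes % bs == 0:
            return bs, size_bytes // bs
    # Worst case: one byte at a time (never reached for real disk sizes).
    return 1, size_bytes

# The ~250GB disk example from the text: any returned pair satisfies
# bs * count == size exactly.
bs, count = pick_bs_count(250059350016)
```

The invariant bs * count == size is what makes the transfer precise, unlike the plain `dd if=... of=...` form that just stops at the end of one of the disks.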
You can keep your backup after the computer is returned to the owner. If, after a week, there are no more phone calls, you can delete it or use Secure Erase or whatever you want. The erasure should be known to cover all the sectors you used for your backup. If the computer is damaged in the process of shipping it back to the owner, it pays to have the backup just in case. If the owner, with a straight face, can tell you that a backup already exists, then you can erase the damn thing as soon as you're done. But if the owner is a careless person, you'll need to hold onto the backup for a few days.

Now that you've safely made a backup, we can look at some other options. If your Linux work shows some fully functional partitions, and you can traverse the file system and see all the usual files in there, you can consider using a repair disc. Normally, a responsible owner would burn the repair disc provided by Microsoft. When I got my Win 7 laptop, the laptop prompted me to burn a repair disc. This is not an installer DVD; it is a boot CD, about 200MB or so perhaps. That disc is sufficient to get access to an MSDOS command prompt, so you can issue commands. There are also automated repair options. The automated repair options work if, when booting the disc, the disc detects a valid partition. If I boot the Windows 7 repair CD on my WinXP machine, the WinXP partition won't appear in the menu of things to repair. Yet, if I want, I can instead use the command prompt in there, and use programs like "bootsect" to put back a WinXP MBR. Now, your suspicion is the owner installed Windows 7 or Vista. That means they have the original DVD. That DVD can also be used as a repair disc. You used to be able to download the 200MB repair disc from here, but both the Windows 7 and Vista versions have been removed. That means you're going to have to hit up the owner for some disc. Either the installer DVD or a repair CD could be used to fix the BOOTMGR is missing.
This is an example of a repair. If you've made your copy of the disk drive, you can give this a shot. Half an hour to back up the hard drive, and ten minutes fiddling with this.

1) Boot from the Windows 7 DVD.
2) The first screen chooses language/keyboard etc.
3) "Repair your computer" is the next thing to select.
4) A menu pops up with a list of valid partitions to repair. Select the correct one.
5) Next is "Recovery Options" / "Startup Repair". Notice there is a command prompt option there, if all else fails. Then you're in an MSDOS-like environment.
6) An automated sequence will try to repair it. Obviously, if something key has been removed (like, say, the 100MB boot partition that is sometimes used), it's going to fail and tell you so.

Installs come two ways. They can consist of boot_partition + main_partition, or can have just the main_partition. My laptop has the former option. There is some way to force an install to not use the 100MB boot partition method, so having it all in C: is also an option you can run into.
OPCFW_CODE
Does the X-UA-Compatible HTTP header actually work for IE9?

I'm working on a web product that may be hosted as an intranet site. I'm trying to find a programmatic way to keep IE9 from slipping into the IE9 Compatibility View browser mode, even though 'Display intranet sites in Compatibility View' may be on. I'm testing with this html page:

<!DOCTYPE HTML>
<html>
<head>
    <meta http-equiv="X-UA-Compatible" content="IE=9" />
    <title>Company</title>
</head>
<body>
</body>
</html>

I've put this in IIS config:

<system.webServer>
    <httpProtocol>
        <customHeaders>
            <clear />
            <add name="X-UA-Compatible" value="IE=edge" />
        </customHeaders>
    </httpProtocol>
</system.webServer>

as recommended here: https://stackoverflow.com/a/5887546, and checked the response headers in IE9 and see:

X-UA-Compatible IE=Edge

But the page still puts the browser in Compatibility View browser mode. The console shows:

HTML1202: http://intranet-site/test.html is running in Compatibility View because 'Display intranet sites in Compatibility View' is checked. test.html

There's a similar question here: https://stackoverflow.com/a/3726605/1279516, in which a comment by Jacob on the chosen answer suggests that in IE9 there's nothing you can do to override the 'Display intranet sites in Compatibility View' setting. However, his comment is the only place I've found mention of this. Can anyone confirm or deny that assertion? And is there anything else I can try? I shouldn't have to tell all clients who deploy our product to uncheck the 'Display intranet sites in Compatibility View' browser setting for all their users.

Which document mode do you see in the IE development tools for your site?

The document mode is actually IE9 standards. That ensures the rendering is done in standards mode, but not that all the javascript features are available, correct?

No, actually you should check only document mode. Browser mode is the initial parameter which determines how document mode should be calculated (by default).
So, if you have the correct document mode then everything should work as expected.

The feature I thought I was missing was .querySelector() - only available when document mode is IE8 standards or higher. A brief test added to the above html confirmed that .querySelector is available even when browser mode is IE9 Compatibility View (IE8 Compatibility View in IE8). Unless someone says otherwise, oryol's comment suggests that all features are made available based on document mode, so there's no need to try to control browser mode further once you are getting the document mode you want.

I ran into a similar issue - a page not rendering correctly in IE8/9 because 'Display intranet sites in Compatibility View' was on. Asking all users to disable this, or asking for a group policy and adding exceptions for I do not know how many other intranet pages requiring this, was not an option. The document mode is OK thanks to the X-UA-Compatible: IE=edge. But I still had the layout issues.

Cause: Compatibility mode causes IE to send another user agent to the server. For IE8/9 this is that of IE7. Some of my ASP.NET libraries do some checks based on that and render IE7-specific HTML/CSS that results in layout issues.

Solution: Modify the UserAgent string on the server to "IE9" if it says "IE7". Of course this is a dirty solution, since in theory there could be a real IE7 client. In my case (intranet) I know my users only have IE >= 8. Changing the UserAgent proved harder than anticipated. I successfully applied this idea - deriving a subclass from HttpWorkerRequest and intercepting requests for UserAgent. This indirectly overloads request.UserAgent and helped solve the problem.
STACK_EXCHANGE
In this article, we will learn what a test script is, how to write a good test script, and more.

What is a Test Script?

Test scripts, also known as automated test scripts, are line-by-line instructions or short programs that automatically test various functionality in an application or system under test. Some of the languages used in automated testing are Java, Python, VBScript, Perl, and Ruby. Test scripts are written to verify that the application under test meets its design requirements and functions correctly. The terms test script and test case are sometimes used interchangeably. Let's see what exactly a test case is.

What is a Test Case?

A test case is a set of instructions that must be followed in order to test an application or system under test.

Difference Between Test Case And Test Script

Here are the key differences between a test case and a test script:

| Test Case | Test Script |
|---|---|
| A test case is an approach to software testing that requires manual intervention. | A test script is an approach to software testing that does not require manual intervention. |
| A test case is a set of specific instructions detailing how to test the functionality of software applications or products. | A test script is a set of instructions, or a short program, used to test the functionality of software applications or products. |
| Test cases are executed manually. | Test scripts are executed automatically. |
| Test cases are written in the form of templates. | Test scripts are written in a scripting or programming language. |
| When testing an application manually, we use test cases. | When testing an application using automation tools, we use test scripts. |
| A test case template includes test case ID, test data, test steps, actual result and expected result. | A test script contains various types of commands, depending on the language we choose. |
| Test cases are classified as positive, negative, UI test cases, etc. | Test scripts are in general automated test scripts. |
| It requires more human resources and time to execute. | It requires less human resources and time to execute. |
| This is a setup that testers use to examine any particular function of the software application. | This is a program that testers develop to test any particular function of the software application. |

What is a Test Script Template?

A test script template is a document which outlines the steps for performing various tests on an application or system. The template can include instructions for setting up test data, executing specific tests, and verifying the results. It also includes information about the expected outcome of each step in the testing process. This helps to ensure that all necessary tests are performed correctly and that all expected results are achieved. Test script templates can also provide guidance on how to troubleshoot any errors which may occur during testing and help to identify potential problems in the application before releasing it. The template is an important tool for ensuring efficient, accurate software testing and can shorten the overall time spent on development cycles. By using a test script template, developers can ensure that their application is thoroughly tested and meets all the requirements before being released.

How To Write A Good Test Script?

There are three ways to create a test script, which are as follows:

#1. Record/Playback

In the record/playback method, there is no need for testers to write any code. They just record and play back the user actions. The tester will need to code in order to fix automation issues or adjust the recorded behavior. Some of the tools that support this record-and-playback method are Selenium IDE, Katalon IDE, etc. The QA team doesn't need any programming expertise, which is perfect for organizations that prefer to do "Shift-Left" testing.

#2. Scripting (Keyword/Data-Driven)

Keyword-driven testing (aka table-driven or action-word-based testing) is a type of automation testing framework.
With keyword-driven testing, we define keywords or action words for each function in a table format, usually a spreadsheet. This allows us to execute the desired functions. In this method, testers create the tests by using keywords that don't require in-depth knowledge of the code. Later on, they implement the test script code for the keywords and regularly update this code to include new keywords as needed.

#3. Writing Code Using the Programming Language

When writing code for test scripts, it is important to understand the application's logic and structure. This will make your code easier to read, understand and debug. Furthermore, you should be aware of any existing bugs or problems that may arise when running the script. It is also important to keep in mind best practices in coding, which can help prevent errors and optimize the performance of your test scripts. In addition, you should also consider writing helper functions or libraries that can be used to DRY (don't repeat yourself) in order to make the code more maintainable and modular. This will help reduce development time as well as keep the code stable. Finally, you should be aware of any version control systems that your team may use and ensure that you document the code so other members of the team can understand it effortlessly. By following these tips and techniques, you can write efficient test scripts in any programming language.

Example of a Test Script

For example, your test script may include the following to check a website's login feature.

- You need to specify how your automation tool locates the login link, username field, password field, and login button on your website. Let's suppose, by their CSS element IDs.
- Go to your website's homepage and click on the "login" link. Verify that the login screen is visible, as well as the "Username" and "Password" fields and the login button.
- Enter the login credentials and click on the login button.
- You need to specify how a user can locate the title of the Welcome screen that shows after login (say, by its CSS element ID).
- Make sure the Welcome screen's title is visible.
- Read the title of the Welcome screen.
- If the title text matches the expectations, the test was successful.

Tips To Create A Test Script

If you're wondering how to go about writing a test script, here are some key tips:

Clear: We need to have test scripts that are clear and easy to understand. If the test scripts are clear and easy to understand, then there is no need to chase the project lead every time for clarifications. This saves a lot of time and resources. In order to achieve a well-functioning website, you need to constantly verify the clarity and conciseness of each step in the test script.

Simple: We need to create test scripts that contain only one specific action for testers to take. In order to achieve this, you need to ensure individual functions are being tested correctly.

Well-thought-out: When creating a test script, always think from the user's perspective to determine which paths need testing. It is important to be flexible and versatile, and to consider all of the possible pathways that users might take while using a system or application.

What are the benefits of using a test script approach?

Using a test script approach has several key benefits. Firstly, it provides an organized and structured approach to testing, reducing the chances of overlooking any important tests. It also helps to ensure that all necessary tests are carried out in order and mitigates the risk of inconsistent or incomplete results. Additionally, using well-structured test scripts can help to improve the efficiency of testing. By having clear instructions, testers can quickly and accurately complete tests without spending time interpreting the requirements or making mistakes. This makes it easier to identify bugs quickly and helps to save time in finding the root cause of issues.
Overall, a test script approach is highly beneficial for providing comprehensive coverage of an application and ensuring that all tests are correctly executed. This can help to reduce the risk of releasing software which contains bugs, providing a much better user experience for end-users. This helps to ensure that applications are as error free as possible before they are released. Furthermore, test scripts provide clear documentation which makes it easier to debug any issues which may arise. This can save a great deal of time in finding and fixing bugs. Using test scripts helps to ensure that applications are thoroughly tested and releases are bug-free, providing a better user experience and saving time in the long run.
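To make the keyword-driven approach described earlier concrete, here is a minimal self-contained Python sketch. The LoginApp class, the keyword names, and the credential rule are all illustrative assumptions, not part of any real tool: each row of the "table" names an action word plus its arguments, and a tiny runner dispatches them.

```python
# Minimal keyword-driven sketch: keywords map to functions, and a
# table of (keyword, *args) rows drives the test.
class LoginApp:
    """Stand-in for the application under test (illustrative only)."""
    def __init__(self):
        self.user = None
        self.logged_in = False

    def open_login(self):
        self.screen = "login"

    def enter_credentials(self, user, password):
        # Hypothetical rule: only the password "secret" is accepted.
        self.user = user if password == "secret" else None

    def submit(self):
        self.logged_in = self.user is not None

def run_table(app, table):
    """Execute (keyword, *args) rows against the app; return login state."""
    keywords = {
        "open_login": app.open_login,
        "enter_credentials": app.enter_credentials,
        "submit": app.submit,
    }
    for keyword, *args in table:
        keywords[keyword](*args)
    return app.logged_in

test_table = [
    ("open_login",),
    ("enter_credentials", "testuser", "secret"),
    ("submit",),
]
```

A tester edits only the table of keywords; the keyword implementations live in one place, which is the maintainability benefit the keyword-driven approach promises.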
OPCFW_CODE
Infinite loop on getNearestOverflowAncestor causing a RangeError: Maximum call stack size exceeded

Hello! I'm using this library and it is working great. I had a few tests in place using Jest, and they were all working fine too using Node 14. Now I've updated to Node 18, and somehow the tests that involve floating-ui started failing 🤔 This is the error I get:

Node.js v18.16.0
project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:296
function getParentNode(node) {
^

RangeError: Maximum call stack size exceeded
    at getParentNode (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:296:25)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:313:24)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)
    at getNearestOverflowAncestor (project/node_modules/.pnpm/@floating-ui+dom@1.2.7/node_modules/@floating-ui/dom/dist/floating-ui.dom.umd.js:322:12)

I've been doing a bit of debugging, and it seems that the getNearestOverflowAncestor function gets into an infinite loop. The parentNode const is being assigned to the same node in every run. Any ideas? What can I try? Not sure why this was not failing on the previous version of Node and it is failing on this one. Thanks.

If you log out parentNode, what is it before this branch? https://github.com/floating-ui/floating-ui/blob/b8d1fd5854333b688ddc9ee2a171f6453fcdbcab/packages/dom/src/utils/getNearestOverflowAncestor.ts#L7-L11 This is where the function should stop recursing, so it's likely the node is not what is expected in the new Node/jsdom version or whatever

I recorded a quick video while debugging and checking the content of parentNode and I forgot to post it https://user-images.githubusercontent.com/5671420/236202096-2da6a5a1-1c98-4012-a458-e64d0733ef4c.mov Basically it seems like the content of parentNode goes "back and forth" between two values, resulting in an infinite loop. If you need more information please let me know.

That must mean isLastTraversableNode(...) is failing, because #document is one of the strings that is checked for, and it should break the loop. What's getNodeName(parentNode)?

getNodeName(node) is returning an empty string inside the isLastTraversableNode function:

yeah that'd do it 🫠

Is that what you were asking for or is there something else I can provide? Thanks for the help :)

The empty string is why it's running into an infinite loop. Also, it seems like the node is of type MockHTMLElement

My guess is isNode() is returning false then, which is why it's empty

export function getNodeName(node: Node | Window): string {
  return isNode(node) ? (node.nodeName || '').toLowerCase() : '';
}

Correct, isNode() is returning false. It seems like it gets a bit confused with all those mock types from Jest 🤔

An alternative method is checking if node.nodeName is of type string instead, assuming that the .nodeName property on that node is #document or html.

That definitely works. I did something like this just to test:

function isStr(value) {
  return typeof value === 'string' || value instanceof String;
}

function getNodeName(node) {
  return isNode(node) || isStr(node.nodeName) ? (node.nodeName || '').toLowerCase() : '';
}

And it runs with no issues. Is this something that can be added to the library itself and released in a future version? Thanks.

Yeah that's fine if you want to make a PR. The only problem is if someone has window.nodeName = 'body' or such, then things may break. So a better check may be needed...

@atomiks does it make sense to change how the library works so that a quirky DOM implementation works? Wouldn't it make more sense to fix the problem on, I guess, jsdom?

@FezVrasta kind of agree, I'm not even sure how this is happening considering this lib is used in a lot of projects and no one has made an issue until now. I also can't reproduce it on Node 18 with latest jest and jest-environment-jsdom. Seems like it could be an environment problem on your end @jvlobo, try creating a minimal repro?

@jvlobo I'm not against changing the code to adapt to this scenario, as long as it doesn't have any downsides in other cases, like the one I mentioned. Given that it seems to be env-specific, maybe with a custom Jest setup causing an issue, I'm going to close as not reproducible. But you can make a PR if you want anyway to fix it.

Thanks @atomiks I forgot to reply to your last comment as I was planning on trying to get a min repro but I haven't had the time this week. I'll report back with any news I have regarding the issue. Thank you.
Hello again @atomiks Sorry for the delay, but I finally got around to creating a minimal repo where you can reproduce the issue: https://github.com/jvlobo/floating-ui-issue It has the minimum of the stack I'm having the issue with: StencilJS (3.0.0), Storybook (7.0.21). To reproduce the issue you just need to install the dependencies and run the test:unit script. It throws the same error I reported at the beginning of the issue. I'm using Node 18.16.0, NPM 9.5.1 and PNPM 8.6.2. The repo basically has only one component, my-component, that you can see inside src/components, and it has the StencilJS code for the component itself and the my-component.spec.ts file that is running the test. I hope this can help in finding a good solution. If you believe the solution is what we previously talked about, I'm happy to do a PR. I haven't done it because it can have other side-effects that I'm not aware of. Thanks a lot :)
GITHUB_ARCHIVE
If your hosting company doesn't provide an automated WordPress install like HostGator or Bluehost, you'll need to learn how to install WordPress manually. The step-by-step guide below will walk you through the process. Installing WordPress manually isn't difficult, but it is a bit tedious.

How to install WordPress manually

- Download the latest version of WordPress to your local computer from the WordPress.org download page.
- The WordPress download file is compressed, and you'll either need to unzip it on your local machine or on your server. If your server has cPanel or a similar interface, this can generally be done using the File Manager for your server.
- Using FTP, upload either the compressed file or the decompressed files and all sub-directories to your root web folder. This is generally named something like public_html or httpdocs. If compressed, use your server's file manager to decompress the file. If you're installing WordPress to a sub-folder like /blog, you'll need to copy or extract the files to the sub-directory rather than the root.
- Once the files have been copied to the server, you'll need to create the database for WordPress to use. How this is done varies from host to host, but almost all hosting companies have an admin interface that allows you to create a new database. You can name this database whatever you like. I generally use some combination of the name of the blog/website and "WordPress". For example: "SideIncomeBlogging_WordPress".
- Next you'll need to create a database user for the WordPress software to use. Again, most hosting companies have an admin interface function to do this. Choose a name for the user, something like "wordpress_user", and provide a password. I generally use a random password generator for the password. Make sure you write down the database name, user name and password and keep them in a secure location. I keep all of mine in Evernote and encrypt them with a passphrase.
- Now we'll run the WordPress install.
First, you'll need to make your root web folder writable. To do this, use FTP to change the permissions of your WordPress root folder to 777. Remember the setting prior to changing it, as we'll change the permissions back after the install. - Using your browser, navigate to: http://<<yourdomainname>>/wp-admin/install.php. Replace <<yourdomainname>> with the domain name for your site or blog. Press enter, and you'll be presented with a page that prompts you to create a configuration file. - Press the Create a Configuration File button. WordPress will create a base configuration file for you, then take you to the main installation page. - On the installation page, press Let's Go. You'll be presented with a page asking for details on your WordPress database. - Enter the Database Name, User Name, and Password that you created in step 5. Both the Database Host and Table Prefix can normally be left at the default settings. When complete, press the Done button followed by the Run the install button on the next page. - Now we'll enter the information about your blog on the Welcome page: - In the Site Title field, enter the name of your website. This is the name, not the domain name. For example, the domain for this site is: www.sideincomeblogging, but the site title is: Side Income Blogging. - For the user name, you can use the default of admin, but I highly recommend you use a different ID. This will be the primary account for your site. - Next enter the password for the admin account. Do not use something obvious; use something that is easy for you to remember but hard to guess. I recommend using a combination of letters, numbers and symbols. For example, if you want to use: lookout, make your password: 1ook0ou+. Enter your password twice. - Enter your email address. Your email address is used by WordPress for notifications. This includes notifications of comments and other various notifications.
- The checkbox at the bottom toggles whether or not your site is open to being indexed by Google and other search engines. I'd suggest leaving it checked. - Click on the button to complete your installation. You should receive a Success! message followed by a Login button. Congrats, you just installed WordPress! You can now log in to the administration console. The URL for that is: http://<<yourdomainname>>/wp-admin I'll be discussing how to properly set up your WordPress install in an upcoming article. Photo by: Eric M Martin
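Step 5 above suggests using a random password generator for the database user (and the same advice applies to the admin password). Here is a minimal sketch using Python's standard library; the length and character set are my own choices for illustration, not a WordPress requirement:

```python
import secrets
import string

def random_password(length=16):
    """Generate a random password from letters, numbers and symbols,
    as recommended above for the database user and admin account."""
    alphabet = string.ascii_letters + string.digits + "!@#$%^&*+-_"
    return "".join(secrets.choice(alphabet) for _ in range(length))

print(random_password())  # 16 random characters, different on every run
```

The `secrets` module is used instead of `random` because it draws from a cryptographically secure source, which is what you want for credentials.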
Batronix Prog Studio 6 Crack Prog-Studio is software that belongs in every programmer's toolbox. It is a professional and easy-to-use programming tool for micro-controllers that makes programming fast and fun. With Prog-Studio, programmers can create, debug, explore and program micro-controller systems efficiently. Prog-Studio includes powerful debugging features. In addition to the usual debugging options, the user can place breakpoints on memory addresses. With these breakpoints, you can monitor the memory of the micro-controller and trace the data flow back to its source. Prog-Studio also includes powerful visual debugging features. At the push of a button, the user can watch the values of registers and memory locations. The user can also observe the execution of the program and set breakpoints on program instructions, data and memory locations. This allows you to find the required assembler instruction without having to search through several lines of code. All of the BAP/IP software components for the MCS51 micro-controller are included in the Prog-Studio package: the BAP/IP editor, the simulator for the MCS51 micro-controller, and the build system for the C-language source code, the assembler source code and the MCU tools, which are required for the installation and use of the various components. The BAP/IP editor is used for the BAP/IP development process. All the object files for the MCS51 micro-controller can be stored in this program as export files. Here you can edit and compile source code, and even edit and debug the assembly code.
If you want to program with a variety of assemblies (such as pseudo or real assembly), the Prog-Studio assembler can be used. It is capable of recognizing and translating any assembler language, and the Prog-Studio compiler can additionally translate C-language source code into assembler code. Assembler code has a structure similar to a function, which makes it simple to find the right instruction. The assembler source code is used for the assembly when the assembler is not running. The programmer uses the debugger for the editing process. You can debug the actual instruction in the target device; the debugger will not interfere with the execution of the instruction. Because the debugger works with the source code, the debugging process is simplified. The debugger is connected to the debugger unit, which is where the debugger connects to the target device via a common bus (e.g. I2C). Programming of the various chips on the selected board is done via the integrated programmer. I/O is done using the programmer, the crystal oscillator and an external oscillator. The oscillator can be switched between internal and external, allowing users to use the board with the oscillator they are most comfortable with. All the necessary I/O ports can be programmed, as well as the temperature sensor and three digital I/O ports.
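The memory-breakpoint feature described above (break when a watched address changes) can be sketched generically: a toy emulator loop that checks the watched address after every instruction. This is an illustration of the technique, not Prog-Studio's actual implementation; the instruction list and memory model are made up for the example.

```python
# Minimal sketch of a memory watchpoint, as used by debuggers that
# halt execution when a watched address is written with a new value.

def run_with_watchpoint(program, memory, watch_addr):
    """Execute (addr, value) store instructions; stop when watch_addr changes.

    Returns the index of the instruction that triggered the watchpoint,
    or None if the program runs to completion.
    """
    old = memory[watch_addr]
    for pc, (addr, value) in enumerate(program):
        memory[addr] = value           # "execute" one store instruction
        if memory[watch_addr] != old:  # watchpoint check after each step
            return pc
    return None

memory = bytearray(256)
program = [(0x10, 7), (0x20, 3), (0x42, 9), (0x11, 1)]
hit = run_with_watchpoint(program, memory, 0x42)
print(hit)  # the third instruction (index 2) wrote the watched address
```

Tracing the data flow back then amounts to inspecting which instruction index tripped the check.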
To be able to send emails, MailStore Server requires SMTP access data. MailStore sends notifications by email if product updates are available or if the automatic creation of a new archive store failed. Furthermore, email copies for the restore from MailStore Web Access can be sent via SMTP. Under Administrative Tools > Miscellaneous > SMTP Settings you can specify the SMTP settings. - Start MailStore Client and log on as MailStore administrator (admin). - Click on Administrative Tools > Miscellaneous and then on SMTP Settings. - Under Server, enter the host name of the SMTP server or its IP address. - By default, MailStore uses port 587. If you want to use a different port, enter the port number in the Port field. - In the field Protocol, select SMTP for an unencrypted connection to the SMTP server. For an encrypted connection, select SMTP-TLS or SMTP-SSL. If the certificate provided by the remote host cannot be verified (e.g. self-signed or signed by an unknown certificate authority), enable the option Accept all certificates to allow MailStore to establish a connection. As this option leads to an insecure configuration, warnings may appear in the summary and/or the dashboard. - SMTP servers that are accessible through the internet, in particular, require a login (SMTP authentication). Check the corresponding checkbox and enter the appropriate access data. In most cases, the POP3 access data of any user on the email server can be used. - Under Sender, enter the Display Name and the Email Address of the email sender. Many SMTP servers require an existing email address to be entered. The display name can be chosen freely; ideally the name indicates that the email was sent by MailStore Server. - Under Recipient for Notifications, enter the email address of the recipient for administrative notifications of MailStore Server. To specify multiple recipients, enter them comma-separated.
- Once all settings have been specified, MailStore Server can be instructed to send a test email to the email address entered for notifications; simply click on Apply and Test. If an error message appears or the recipient specified does not receive the email, the following hints for troubleshooting may be helpful. - If no error occurs upon sending but the email does not arrive, please check the spam or junk mail folder of the mailbox. Perhaps the email was filtered out. - If an error message appears because of an invalid certificate ("Server's certificate was rejected by the verifier because of an unknown certificate authority."), check Accept all certificates and try again. - If an error message appears indicating that "One or more recipients rejected", the SMTP server probably requires authentication. Enter the appropriate access data as described above. - If an error message appears because of invalid access data ("Incorrect authentication data" or "Authentication failed"), verify the data entered. In most cases, the access data match those of the corresponding POP3 server. - If further error messages appear or other problems arise, please check your entries for possible mistakes.
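For reference, the settings described above (host, port 587 with STARTTLS as the "SMTP-TLS" option, SMTP authentication, a display name plus sender address, and comma-separated notification recipients) map onto a generic SMTP client like this. This is an illustrative sketch using Python's standard library, not MailStore's implementation; the server name, account and addresses are placeholders.

```python
import smtplib
from email.message import EmailMessage

def build_notification(sender_name, sender_addr, recipients, subject, body):
    """Build a notification email. `recipients` is a comma-separated string,
    mirroring the 'Recipient for Notifications' field described above."""
    msg = EmailMessage()
    msg["From"] = f"{sender_name} <{sender_addr}>"
    msg["To"] = ", ".join(r.strip() for r in recipients.split(","))
    msg["Subject"] = subject
    msg.set_content(body)
    return msg

def send_via_smtp_tls(msg, host, port=587, user=None, password=None):
    """Send over SMTP with STARTTLS (the 'SMTP-TLS' protocol option)."""
    with smtplib.SMTP(host, port) as smtp:
        smtp.starttls()
        if user is not None:
            smtp.login(user, password)  # SMTP authentication
        smtp.send_message(msg)

# Placeholder values -- replace with your own server, account and addresses.
msg = build_notification("MailStore Server", "mailstore@example.com",
                         "admin@example.com, backup@example.com",
                         "MailStore test email", "SMTP settings are working.")
print(msg["To"])
# To actually send: send_via_smtp_tls(msg, "smtp.example.com", 587, "user", "secret")
```

An unverifiable certificate ("Accept all certificates") would correspond to passing an unvalidated SSL context to `starttls`, which, as the text notes, is an insecure configuration.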
I agree with Reedbeta - although I'm a heavy fan of C, C++ and ASM (low level is good) and Haskell (high level also ). Don't drop python in the trash - it's not that bad (especially for doing stuff in Gtk for example)... Gtk and standard C (e.g. the GNU99 standard) is well... quite a hell. And I mean it, really. On the other hand PyGtk isn't that bad, it's widely used and works quite well. I'm not sure, but it's probably even more used than Gtk+ in C++. It can't compare with Gtk2HS though - Haskell ftw! :ph34r: (warning: very high ninja skills needed). ... especially the Gtk C version vs. GtkHS (where the C version is kinda owned in length of code ) Every language has its advantages and disadvantages (and is better for something) - there is no "uber" solution (well... except for Haskell! :ph34r:). For game development, you can use almost any available language - standard C, Pascal (i.e. strictly procedural languages - and do lots of stuff through pointers); C++, Object-Pascal (i.e. object-oriented AND procedural languages - very good to work with - you have all the power C has, plus something more); Java, C# (i.e. strictly object-oriented (with lots of extensions) and managed code - also very usable); or even some "ninja languages" like Haskell. For example - we actually don't write our game + engine in a single language (not that it would be impossible, but it's easier to do this in one language and that in another one). For the game engine core we're using C++ (the GNU++11 (formerly GNU++0x) standard). For the actual realtime ray-tracing core in the engine we use native C - C99 std (CPU ray tracing) with SSE-intrinsics-heavy code, and C - OpenCL/C99 std (GPU ray tracing). And this is just the core. We now have game engine editors - i.e. the important stuff - where C# with Gtk (known as Gtk#) is a lot better tool than C++ (and don't ever use Gtk with plain C - our first editor was actually written in Gtk + C - and it WAS a nightmare compared to Gtk#).
Our game scripts can be done in Lua (a C/C++-like scripting language) and we're also going to try Haskell for this (and that will own! - it's just for test & fun). The actual game is in our case written in C++ (but it could actually also be written in C# - which we considered several times, but we still like C++ a bit more). And at last - game & game engine configuration files - these are written in a bash-like syntax. As you can see, we do lots of stuff in C-like languages, but not in a single one... why limit yourself to a single language when there are so many of them? NOTE: You don't want to see the makefiles :lol: EDIT: So actually we use lower-level languages where speed is critical... and higher-level languages where structure and understanding of code is critical (because 'one does not simply read & understand an intrinsics code block on the first try').
Wisej has an extensive session management system. It's probably the most complete server-side state management system among web frameworks. Web systems are stateless by nature: each request coming from the browser may be processed by a different thread on the server, and the same thread may be reused by another browser. The only way to support server-side sessions is to generate a session ID and pass it along with every request. Wisej fully supports server-side state management for both HTTP and WebSocket connections. Blazor doesn't support any real state management natively - just hit refresh on a Blazor app and everything you are working on is lost. Angular, React and other client-side scripting libraries don't support sessions. ASP.NET supports sessions with limited control, only over HTTP (not supported with WebSocket), saving a cookie shared among tabs. Don't confuse "user" with "authentication" with "session". A web session is similar to a desktop application instance. You can run a desktop app with or without a user or authentication. When you start a second instance of the same executable, you get a second "session". The web application doesn't run any executable for each session, and there is no UI thread running: each request is independent of the previous one and must restore the session associated with the browser making the request. In a web application the browser and the server are on different machines and usually far apart; they can lose and regain connectivity, the user can just leave the browser on and go home, or can turn it off without terminating the application, navigate to another page and back, hit refresh (or F5), etc. The session must survive some of these events and terminate for others.
Since there is absolutely no way to detect when a browser is closed, or a device turned off, and even distinguish a simple navigation from closing a tab, there is only one efficient way to manage a server-side lifetime: keep-alive pings and the session timeout. This is the number of seconds that Wisej waits, without any sign that the user is alive, before removing the session from memory. When the session times out, it's disposed, gone, removed, deleted. Cannot be recovered. It's like terminating a desktop application. ASP.NET only provides the Session_End handler in Global.asax; it fires when the session is gone, and all you can do is clean up. The sessionTimeout setting in Wisej is the number of seconds, without any sign the user is alive, before Wisej fires the Application.SessionTimeout event. It gives you a chance to react before the session and the user's work is lost. By default, if you don't handle the Application.SessionTimeout event, Wisej shows a built-in countdown window. In addition to the sessionTimeout, Wisej uses a keep-alive ping system. It fires a keep-alive event after a certain timeout when the user is not interacting with the application. Since the user cannot interact with the application if the browser is closed, or the device turned off, or the connectivity is lost, the keep-alive system is useful to detect when the user is "gone". If Wisej doesn't receive any "signal" from the browser it will terminate the session after sessionTimeout * 2. Regardless of the keep-alive system, after sessionTimeout seconds without any user interaction, Wisej fires the Application.SessionTimeout event and, if not handled by the application, it shows the built-in SessionTimeoutForm. Default timeout window. You can show your custom window, or use SessionTimeoutForm as the base for your modified version, or show nothing at all. Simply set e.Handled = true to suppress the built-in window.
Application.SessionTimeout += Application_SessionTimeout;

private static void Application_SessionTimeout(object sender, System.ComponentModel.HandledEventArgs e)
{
    // do something

    // suppress the built-in timeout window.
    e.Handled = true;
}

AddHandler Application.SessionTimeout, AddressOf Application_SessionTimeout

Public Sub Application_SessionTimeout(sender As Object, e As HandledEventArgs)
    ' Do something.

    ' Suppress the built-in timeout window.
    e.Handled = True
End Sub

Wisej also supports the concept of "Infinite Sessions". In fact there are two kinds of infinite sessions. The most common is to simply suppress the timeout window and, as long as the browser is open and there is connectivity, keep the application alive. See above how to suppress the timeout window. Our recommendation is to set the sessionTimeout to a low number: 1-3 minutes (60-180 seconds) and handle Application.SessionTimeout to control what to show the user when their session is about to expire - or simply suppress the expiration notice. Another, less used, approach to supporting infinite sessions is to bind the session to a browser by saving the Session ID to the local storage; you can optionally set the sessionTimeout to 0. When the sessionTimeout is set to 0, the session never expires (until the server is turned off). With this approach, if the number of sessions is controlled, a user can close the browser, come back another day, find the same state he or she left, and restart working exactly where they left off. HTML systems like PHP, ASP.NET and JSP store the session ID in a cookie which is then returned to the server in the header of each HTTP request. Cookies can persist after the browser is closed and are always shared among browser tabs (you cannot start more than 1 session with the same browser). Wisej, by default, saves the Session ID in the browser's session storage instead: it's wiped out when the browser is closed, it's not sent to the server in the request headers, and each browser tab has its own.
If you close the browser and reopen it, your session is gone. You can optionally configure Wisej to store the Session ID in the browser's local storage. In this case, it survives closing the browser or turning off the machine. When you reopen the browser, if your session hasn't expired, you will find your previous state reloaded in the browser automatically and will be able to keep working where you left off. Closing what? You can close the browser tab, close the browser, turn off the device (or throw it out the window), lose power, lose connectivity, forget the computer on for a week, navigate to another web page, or click the Logout or Terminate button - if they are provided by the Wisej application you are using. When the user closes the browser tab (or navigates away), the keep-alive pings stop; after sessionTimeout * 2, Wisej will terminate the session. When the user closes the entire browser (or navigates away), the keep-alive pings stop; after sessionTimeout * 2, Wisej will terminate the session. When the user destroys the device (or navigates away), the keep-alive pings stop; after sessionTimeout * 2, Wisej will terminate the session. There is no difference (to the code) between navigating, closing, turning off or burning the device. (Please don't contact support asking how to detect that the browser was closed, it's impossible regardless of what some blogger may write.)
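The lifetime rules above - keep-alive pings, the Application.SessionTimeout event at `sessionTimeout` seconds of silence, hard termination at `sessionTimeout * 2`, and `sessionTimeout = 0` meaning never expire - can be sketched with a toy server-side session table. This is a generic illustration with an injected clock, not Wisej's implementation.

```python
import time

class SessionStore:
    """Toy server-side session table with Wisej-like lifetime rules:
    - after `timeout` seconds without any signal, the session is 'expiring'
      (Wisej fires Application.SessionTimeout here);
    - after `timeout * 2` seconds without any signal, it is terminated;
    - a timeout of 0 means the session never expires.
    """

    def __init__(self, timeout, clock=time.monotonic):
        self.timeout = timeout
        self.clock = clock
        self.last_seen = {}  # session id -> time of last request or ping

    def touch(self, sid):
        """Record a request or keep-alive ping from the browser."""
        self.last_seen[sid] = self.clock()

    def state(self, sid):
        if sid not in self.last_seen:
            return "terminated"
        if self.timeout == 0:
            return "alive"                # infinite session
        idle = self.clock() - self.last_seen[sid]
        if idle >= self.timeout * 2:
            del self.last_seen[sid]       # disposed, gone; cannot be recovered
            return "terminated"
        if idle >= self.timeout:
            return "expiring"             # show the countdown window
        return "alive"

# Drive it with a fake clock instead of real time.
now = [0.0]
store = SessionStore(timeout=60, clock=lambda: now[0])
store.touch("s1")
now[0] = 30.0
print(store.state("s1"))   # alive
now[0] = 90.0
print(store.state("s1"))   # expiring: past sessionTimeout, before 2x
now[0] = 130.0
print(store.state("s1"))   # terminated: no signal for sessionTimeout * 2
```

The injected clock is the reason the lifetime rules are easy to test; a real server would simply use wall-clock time and run the sweep on a timer.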
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;
using System.IO;

using NetProcGame.tools;

namespace NetProcGame.dmd
{
    /// <summary>
    /// An ordered collection of Frame objects
    /// </summary>
    public class Animation
    {
        /// <summary>
        /// Width of each of the animation frames in dots
        /// </summary>
        public int width = 0;

        /// <summary>
        /// Height of each of the animation frames in dots
        /// </summary>
        public int height = 0;

        /// <summary>
        /// Ordered collection of Frame objects
        /// </summary>
        public List<Frame> frames;

        public Animation()
        {
            frames = new List<Frame>();
        }

        /// <summary>
        /// Loads the given file from disk. The native animation format is the 'dmd-format' which
        /// can be created using the dmdconvert tool.
        /// </summary>
        /// <param name="filename"></param>
        /// <param name="allow_cache"></param>
        public Animation load(string filename, bool allow_cache = true)
        {
            double t0 = Time.GetTime();

            // Load the file from disk
            if (filename.EndsWith(".dmd"))
            {
                // Load in from DMD file
                this.populate_from_dmd_file(filename);
            }
            else
            {
                // Load from other image formats (TODO)
            }
            return this;
        }

        /// <summary>
        /// Saves the animation as a .dmd file in the given filename
        /// </summary>
        public void save(string filename)
        {
            if (this.width == 0 || this.height == 0)
                throw new Exception("Width and height must be set on an animation before it can be saved.");
            this.save_to_dmd_file(filename);
        }

        public void populate_from_dmd_file(string filename)
        {
            BinaryReader br = new BinaryReader(File.Open(filename, FileMode.Open));
            long file_length = br.BaseStream.Length;
            br.BaseStream.Seek(4, SeekOrigin.Begin); // Skip over the 4 byte DMD header
            int frame_count = br.ReadInt32();
            this.width = (int)br.ReadInt32();
            this.height = (int)br.ReadInt32();
            if (file_length != 16 + this.width * this.height * frame_count)
                throw new Exception("File size inconsistent with header information. Old or incompatible file format?");
            for (int frame_index = 0; frame_index < frame_count; frame_index++)
            {
                byte[] frame = br.ReadBytes((int)(this.width * this.height));
                Frame new_frame = new Frame(this.width, this.height);
                new_frame.set_data(frame);
                this.frames.Add(new_frame);
            }
            br.Close(); // release the file handle (missing in the original)
        }

        public void save_to_dmd_file(string filename)
        {
            BinaryWriter bw = new BinaryWriter(File.Open(filename, FileMode.Create));
            bw.Write(0x00646D64);        // 4 byte DMD header
            bw.Write(this.frames.Count); // Frame count
            bw.Write((int)this.width);   // Animation width
            bw.Write((int)this.height);  // Animation height
            foreach (Frame f in this.frames)
                bw.Write(f.get_data());
            bw.Close();
        }
    }
}
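For reference, the .dmd layout implied by the C# code above - a 4-byte header, then three little-endian int32s for frame count, width and height, then width x height bytes per frame - can be round-tripped in a few lines of Python. This is an illustration of the format as the code reads it, not an official specification.

```python
import struct

DMD_HEADER = 0x00646D64  # the same 4-byte header the C# writer emits

def save_dmd(frames, width, height):
    """Pack frames (each a bytes object of width*height dots) into .dmd bytes."""
    out = struct.pack("<iiii", DMD_HEADER, len(frames), width, height)
    return out + b"".join(frames)

def load_dmd(data):
    """Parse .dmd bytes; mirrors populate_from_dmd_file, including its size check."""
    _, frame_count, width, height = struct.unpack_from("<iiii", data, 0)
    if len(data) != 16 + width * height * frame_count:
        raise ValueError("File size inconsistent with header information.")
    size = width * height
    frames = [data[16 + i * size : 16 + (i + 1) * size]
              for i in range(frame_count)]
    return frames, width, height

frames = [bytes(range(6)), bytes(6)]      # two 3x2 frames
blob = save_dmd(frames, 3, 2)
assert load_dmd(blob) == (frames, 3, 2)   # round-trip
```

The `<iiii` format string makes the little-endian layout explicit, which is what `BinaryWriter.Write(int)` produces on .NET.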
The Microsoft Partner Technology Solutions Professional Program (P-TSP) is a select group chosen from the Microsoft partner community, whose focus is to augment Microsoft’s internal Technology Specialist team. Their primary role is to communicate the value of Microsoft Solutions to customers and to provide architectural guidance for Enterprise Integration solutions. The program is designed to enable a high performance team of partner-based resources to deliver pre-sale activities and resources to empower customers and help them meet their solution and integration needs. Jeff Ferguson is a Principal Consultant with Magenic. He has been with Magenic since 1996 and has worked in the software development community since 1989. Jeff has developed code for the Microsoft technology stack during all of that time and has been involved in a variety of both desktop and Web-based projects using C, C++, C# and Visual Basic. Jeff has served in architecture, design and development roles for several of Magenic’s clients. He also engages in technology-focused public speaking, notably to user groups and events ranging from the Twin Cities Languages User Group to the Twin Cities Code Camp. In addition to Jeff’s customer-facing role as Principal Consultant, he also serves as a presales technical resource for Microsoft’s sales team through the Partner Technology Solutions Professional (P-TSP) program. Through this program, he is engaged by Microsoft personnel to provide technology overviews, proofs-of-concept, technical demonstrations, and technology assessments for Microsoft’s customers. A Consulting Manager and Principal Consultant for Magenic, as well as a Microsoft P-TSP (BizTalk and App Plat), Andrew Schwarz serves as the technical point person on all of the largest and most strategic pursuits for Magenic’s flagship office. 
With experience in software development, quality assurance, enterprise application integration and business process management, Andrew is able to bring his broad technical background to bear in support of Magenic's entire sales team. Andrew holds MCSD certification and two MCTS certifications for Microsoft BizTalk Server. He also secures the Sales and Marketing requirements for all of Magenic's Microsoft Competencies. In addition to Andrew's duties as a Consulting Manager at Magenic, Andrew also serves as a presales technical resource for Microsoft's sales team through the Partner Technology Solutions Professional (P-TSP) program. Through this program, Andrew is engaged by Microsoft personnel to provide technology overviews, proofs-of-concept, technical demonstrations, and technology assessments for Microsoft's customers. Andrew has delivered dozens of Core Infrastructure Optimization (Core IO), Business Productivity Infrastructure Optimization (BPIO) and Application Platform Optimization (APO) assessments for Microsoft and Magenic's shared customer base. Brad Friedlander is a Principal Consultant and Solutions Architect for Magenic and a Partner Technology Solutions Professional (P-TSP) for BizTalk. He has over 25 years of experience in architecting, designing, and developing innovative solutions using a broad array of methodologies and technologies, from Microsoft and others commonly used in industry. Brad has broad and deep experience in systems architecture and systems integration. He has a substantial understanding of object-oriented and component-based methodologies and design techniques as well as n-tier client/server and web-based environments. Brad provides successful leadership to complex technology projects that are focused on delivering maximum business benefit. Brad presents at Code Camps and user group meetings. He also provides leadership to Magenic's systems integration practice.
Daniel Hester is a Principal Consultant for Magenic and a Microsoft P-TSP (Partner Technology Solutions Professional) for BizTalk Server. Daniel has over sixteen years of consulting experience and joined Magenic in 2004. Since 2005 Daniel has focused exclusively on BizTalk Server and architects integration solutions for our customers. Daniel is a frequent presenter at Microsoft BizTalk Server events on the West Coast and a member of Bay.NET, the Bay Area's .NET User Group. Daniel earned his bachelor's degree in Philosophy from Vassar College and holds MCTS and MCP certifications in BizTalk Server. Stevo Smocilac is an Associate Principal Consultant with Magenic. He has been with Magenic since 2011 and has over 12 years' experience working in software development, the last 7 of which have been focused on designing, implementing, managing and administering technical solutions developed using Microsoft SQL Server and the Microsoft Business Intelligence Stack. Stevo has played a key role in transforming the Business Intelligence environments at a number of organizations, helping them better leverage the Microsoft BI toolset or move onto the Microsoft BI Stack from a competing vendor's platform. He has developed and deployed a number of major BI projects within various business domains, including Sales, Marketing, Healthcare and Product systems. Stevo is a proven team leader and has experience leading geographically distributed development teams, with involvement in all phases of the development lifecycle, from envisioning through operational support. His effective management and database development skills are complemented by extensive administrative experience with SQL Server and Analysis Services, including security, performance tuning, monitoring, troubleshooting, disaster recovery and SQL Server version migrations.
State-of-the-art NLP technologies such as neural question answering or information retrieval systems have enabled many people to access information efficiently. However, these advances have been made in an English-first way, leaving other languages behind. Large-scale multilingual pre-trained models have achieved significant performance improvements on many multilingual NLP tasks where input text is provided. Yet, on knowledge-intensive tasks that require retrieving knowledge and generating output, we observe limited progress. Moreover, in many languages, existing knowledge sources are critically limited. This workshop addresses challenges for building information access systems in many languages. In particular, we attempt to discuss several core challenges in this field. We cover diverse topics of cross-lingual knowledge-intensive NLP tasks such as cross-lingual question answering, information retrieval, fact verification, and information extraction. By grouping those tasks under a cross-lingual information access topic, we encourage the communities to work together towards building a general framework that supports multilingual information access. Bio: Avi Sil is a Principal Research Scientist and a Research Manager in the NLP team at IBM Research AI. He manages the Question Answering team (comprising research scientists and engineers) that works on industry-scale NLP and Deep Learning algorithms. His team's system, called "GAAMA", has obtained top scores on public benchmark datasets, e.g. Natural Questions and TyDI, and has published several papers on question answering. He is the Chair of the NLP professional community of IBM. Avi is a Senior Program Committee Member and the Area Chair in Question Answering for ACL and is actively involved in the NLP conferences by giving tutorials (ACL 2018, EMNLP 2021), organizing a workshop (ACL 2018) and also serving as Demo Chair (NAACL 2021, 2022).
He was also the track coordinator for the Entity Discovery and Linking track at the Text Analysis Conference (TAC) organized by the National Institute of Standards and Technology (NIST). Abstract: Currently, multilingual information access (MIA), particularly by Question Answering (QA), has a major problem. Most QA research software sits in someone's own GitHub repository, and none of those software packages works easily with the others. This problem is exacerbated across the different modalities of QA research: document retrieval, reading comprehension, QA over tables, QA over images, videos, etc. We're introducing PrimeQA, a one-stop shop for all Open QA problems. I'll talk about how PrimeQA is remediating the MIA problem by bringing together all these QA software/solutions as building blocks and creating one repository where a user/student can come in and quickly replicate the latest/greatest QA paper/publication that's at the top of a leaderboard (e.g. XOR TyDi) without re-inventing the wheel. Bio: Sebastian Ruder is a research scientist at Google Research working on NLP for under-represented languages and based in Berlin. He was previously a research scientist at DeepMind, London. He completed his Ph.D. in Natural Language Processing at the Insight Research Centre for Data Analytics, while working as a research scientist at Dublin-based text analytics startup AYLIEN. Previously, he studied Computational Linguistics at the University of Heidelberg, Germany and at Trinity College, Dublin. He is interested in cross-lingual learning and transfer learning for NLP and making ML and NLP more accessible. Abstract: Pre-trained multilingual models are strong baselines for multilingual applications but they often lack capacity and underperform when dealing with under-represented languages. In this talk, I will discuss work on building parameter-efficient models, which has recently received increased attention.
I will discuss how such methods can be used to specialize models to specific languages, enabling strong performance even on unseen languages. I will demonstrate the benefits of this methodology such as increased robustness (to different hyper-parameter choices as well as to catastrophic forgetting) and efficiency (in terms of time, space, and samples). Finally, I will highlight future directions in this area. Bio: Andre is an incoming PhD student in Computer Science at Princeton University, where he will be advised by Adji Bousso Dieng. His research focus will be on Multitask and Multimodal Learning, where he will leverage Natural Language Processing and Machine Learning to solve challenging problems in the Natural Sciences. He is currently a research intern at Meta AI advised by Angela Fan and a pre-doctoral student in Artificial Intelligence at Polytechnic University of Catalonia working with Marta R. Costa-jussà and Carlos Escolano. He completed his Master's and Bachelor's degrees in Computer Science and Technology at the University of Electronic Science and Technology of China. His research in natural language processing spans several topics, including machine translation, language modeling, text classification and summarization, named-entity recognition, and dataset creation and curation for low-resourced African languages. Abstract: Subword and character-based language models still cannot capture non-concatenative morphological information due to character fusion in morphologically rich languages. In this talk, I will present KinyaBERT, a simple yet effective two-tier BERT architecture that uses the combination of a morphological analyzer and BPE at the input level to explicitly represent morphological compositionality. A robust set of experimental results reveals that KinyaBERT outperforms solid baselines by 2% in F1 score on a named entity recognition task and by 4.3% in average score on a machine-translated GLUE benchmark.
KinyaBERT fine-tuning has better convergence and achieves more robust results on multiple tasks even in the presence of translation noise. Bio: Kathleen R. McKeown is the Henry and Gertrude Rothschild Professor of Computer Science at Columbia University and is also the Founding Director of the Data Science Institute at Columbia. She served as the Director from July 2012 to June 2017. She served as Department Chair from 1998 to 2003 and as Vice Dean for Research for the School of Engineering and Applied Science for two years. McKeown received a Ph.D. in Computer Science from the University of Pennsylvania in 1982 and has been at Columbia since then. Her research interests include text summarization, natural language generation, multi-media explanation, question-answering and multi-lingual applications. In 1985 she received a National Science Foundation Presidential Young Investigator Award, in 1991 she received a National Science Foundation Faculty Award for Women, in 1994 she was selected as an AAAI Fellow, in 2003 she was elected as an ACM Fellow, and in 2012 she was selected as one of the Founding Fellows of the Association for Computational Linguistics. In 2010, she received the Anita Borg Women of Vision Award in Innovation for her work on text summarization. McKeown is also quite active nationally. She has served as President, Vice President, and Secretary-Treasurer of the Association for Computational Linguistics. She has also served as a board member of the Computing Research Association and as secretary of the board. Bio: Graham Neubig is an associate professor at the Language Technologies Institute of Carnegie Mellon University. His research focuses on multilingual natural language processing, natural language interfaces to computers, and machine learning methods for NLP, with the final goal of every person in the world being able to communicate with each other, and with computers in their own language. 
He also contributes to making NLP research more accessible through open publishing of research papers, advanced NLP course materials and video lectures, and open-source software, all of which are available on his web site. Abstract: In this talk, I will discuss ongoing work on building a benchmark to measure the progress of natural language processing over every language in the world. I will first describe our methodology, and then do a demo of our current benchmark prototype and make an open call for others to join in its development. Bio: Holger Schwenk received his Master's degree from the University of Karlsruhe in 1992 and his PhD degree from the University Paris 6 in 1996, both in Computer Science. He then did postdoctoral studies at the University of Montreal and at the International Computer Science Institute in Berkeley. He joined academia in 1998 and was a full professor at the University of Le Mans until 2015. Holger Schwenk joined Meta AI Research in June 2015. His research activities focus on new machine learning algorithms with application to human/machine communication, in particular statistical machine translation and multilingual NLP. Abstract: Multilingual sentence representations are very useful to extend NLP applications to more languages. One particular application is bitext mining based on a similarity measure in such a multilingual representation space. Well-known approaches are LASER and LaBSE, but both are limited to about 100 languages. In this talk, we report our work to extend multilingual sentence representations to 200 languages. We discuss challenges when handling low-resource languages, in particular the collection of resources to train and evaluate such models. Those sentence encoders are used to mine more than one billion sentences of bitexts in 148 languages. Finally, we report the impact of the mined data when training a massively multilingual NMT system. Bio: Alice Oh is a Professor in the School of Computing at KAIST. 
She received her MS in 2000 from Carnegie Mellon University and PhD in 2008 from MIT. Her major research area is at the intersection of natural language processing and computational social science. Within natural language processing, she studies various models designed for analyzing written text including social media posts, news articles, and personal conversations. She has served as Tutorial Chair for NeurIPS 2019, Diversity & Inclusion Chair for ICLR 2019, and Program Chair for ICLR 2021. She is serving as Program Chair for NeurIPS 2022 and General Chair for ACM FAccT 2022. Abstract: Korean and Indonesian are considerably different from English, and both have recently gained attention from researchers building datasets for a diverse set of languages. I will start this talk by sharing our research on ethnic bias in BERT language models in six different languages which illustrates the importance of studying multiple languages. I will then describe our efforts in building datasets for Korean and Indonesian and the main challenge of dataset building when the sources of data are much smaller compared to English and other major languages. I will also share our research on historical documents written in ancient Korean which is not understood and must be translated into modern Korean. Speakers: Kathleen McKeown, Graham Neubig, Alice Oh, Andre Niyongabo Rubungo, Sebastian Ruder, Holger Schwenk, Avi Sil, Akari Asai, Eunsol Choi, Jonathan H. Clark, Junjie Hu, Chia-Hsuan (Michael) Lee, Jungo Kasai, Shayne Longpre, Ikuya Yamada.
What's the difference between Dart's analyzer and linter? You must have seen something like this inside an analysis_options.yaml:

analyzer:
  exclude: [build]
  strong-mode:
    implicit-casts: false
linter:
  rules:
    - camel_case_types

Are both used at compile time? What are the differences between the two? The short answer is that they focus on different things. The goal of the analyzer is to check that your program is valid. It checks for syntax errors and type errors. Historically, in Dart 1, it was the only way to get type checking because compilers ignored types, but in Dart 2 that's no longer the case. The analyzer ended up adding more checking than what the language required. It can detect dead code or definitely wrong assignments, even when the language allows them, because it has better static analysis than the language specification requires. In general, the analyzer warns about invalid programs or likely problems. Some warnings are enabled by default, and others you need to enable, because they could lead to false warnings. The severity of each problem can be configured as error, warning, hint (or ignore). Being invalid Dart is always an error. The linter is developed as a separate project. It only works on valid Dart programs, and it is intended to enforce a coding style. The language doesn't care whether your classes are Capitalized and your variables are lowerCase, but the style guide says they should be, and the linter can enforce that style by reporting a lint error if it's not satisfied. That's what the linter does: it reports style violations. Since style is subjective, all lints need to be enabled explicitly; none are enabled by default. The lints can also be very specific. There are lints which only apply to code using specific libraries, in order to enforce a specific style for that code. A project like Flutter might enable some lints by default in packages that it creates. 
The analyzer existed before the linter, and some warnings added to the analyzer would perhaps have been made lints if they were added today. Both depend on annotations in package:meta for adding metadata to drive warnings/lints. The analyzer now includes the linter and provides errors/warnings/hints/lints from both, so a programmer will rarely need to make a distinction. The main difference is that lints are documented in the linter repository and discussions about new lints happen there, independently of changes in the analyzer. The Dart package pedantic defines a set of lints which are used for all internal Google code. It's very strict and opinionated, with the goal of preventing both potentially dangerous code and any unnecessary style discussion. Other packages provide other sets of lints. There is no official set of lints recommended by the Dart team (yet), as long as you follow the style guide. Edit: There now are official lints recommended by the Dart team: package:lints. While this rings true somehow, the linter and the analyzer have exactly the same set of rules. So, the above still doesn't really explain the situation. As stated, the analyzer includes the linter and all its lint rules. It also has more. There is no stand-alone linter program, so the linter can't have the same set of rules as the analyzer. (It's just not always easy to tell which is which.)
If your home has solid walls, you could save between £115 and £360 a year by installing solid wall insulation. About a third of UK homes have solid walls, according to the National Insulation Association. It estimates that 45% of the heat from these homes could be escaping through walls. We've worked with the Royal Institution of Chartered Surveyors*, which publishes average building work and repair costs, to bring you the average cost for external solid wall insulation. We've also split these so you can look at the average costs for a terraced, semi-detached and detached house. Do bear in mind that costs will vary, depending on where you live in the country. Table notes: this includes 100mm expanded polystyrene (EPS) insulation board fixed to an external brick wall, primed and rendered. How much money solid wall insulation will save you each year will depend on the type and size of your home. The chart below shows the average reductions to heating bills and CO2 emissions for homes of differing sizes. Remember that the price you'll pay for external wall insulation will be affected by the condition of your walls and whether other building work or repairs will be taking place at the same time, as well as your property size. Solid wall insulation is more expensive than cavity wall insulation, but it should lead to bigger savings on heating bills. Solid wall insulation can be applied to either the inside or outside of solid walls. A professional installer should be able to advise you on which option is most suitable for your home. Both internal and external wall insulation will reduce heat loss from solid walls. The type you choose will be based on several factors. External solid wall insulation is usually installed when a building has severe heating problems or already requires some form of external repair work. 
Installation involves fixing an insulating material to external walls, with a protective render and/or decorative cladding over the top, so it will affect your home's external appearance. The thickness of the insulation needs to be between 50mm and 100mm. External insulation is generally more expensive than internal, though you'll avoid the significant re-decorating that comes with internal insulation. Once your external insulation is fitted, decorative coatings and cladding can be used to improve your home's kerb appeal. This can match a wide variety of homes, including Georgian, Victorian and Edwardian properties. Internal solid wall insulation usually involves fitting ready-made rolls or boards of insulating material to the inside walls of your house. This can be disruptive – you'll need to move plug sockets, radiators and fitted furniture, and redecorate your walls. Your walls will need to be carefully prepared before internal wall insulation can be fitted. Any damaged plaster needs to be either repaired or removed, and bare brickwork should be treated to eliminate areas where air can escape. The extra thickness of insulated walls will reduce your floor space slightly. However, this option is usually cheaper than external wall insulation and can be installed on a room-by-room basis. You can find out more information about solid wall insulation, including how to find an installer, from the NIA and the Insulated Render and Cladding Association websites. *To arrive at the average prices above, RICS uses cost data from its Building Cost Information Service (BCIS) database, where costs are collated from a variety of sources and analysed. Materials costs are based on the best trade prices from a range of suppliers across the UK, which are then benchmarked to reveal the best national average. Labour rates are based on the current Building and Allied Trades Joint Industrial Council wage agreement. Data copyright RICS 2020, reproduced with permission. 
Data is current as of October 2020.
How do you solve P4(s) + 5O2(g) → P4O10(g)? If 2.50 grams of phosphorus is ignited in a flask, how many grams of P4O10 are formed? 1. mols P4 = grams/molar mass of P4. 2. Convert mols P4 to mols P4O10 using the coefficients in the balanced equation. 3. Convert mols P4O10 to g: g = mols x molar mass. To determine the grams of P4O10 formed, we need to use stoichiometry, which involves the use of balanced chemical equations. The balanced equation for the reaction is: P4(s) + 5O2(g) → P4O10(g) The molar ratio between P4 and P4O10 is 1:1, meaning for every 1 mole of P4, we will have 1 mole of P4O10. To solve the problem, follow these steps: Step 1: Convert grams of P4 to moles. We will use the molar mass of P4 to convert grams to moles. The molar mass of phosphorus (P) is 31.0 g/mol, so the molar mass of P4 is 4 × 31.0 = 124 g/mol. Using the formula: Moles = Mass / Molar mass Moles of P4 = 2.50 g / (124 g/mol) = 0.0202 mol Step 2: Use the stoichiometric coefficients to determine the moles of P4O10. Since the molar ratio between P4 and P4O10 is 1:1, the moles of P4O10 will be equal to the moles of P4. Moles of P4O10 = 0.0202 mol Step 3: Convert moles of P4O10 to grams. To convert moles of P4O10 to grams, we will use the molar mass of P4O10. The molar mass of P4O10 is 284 g/mol (4 × 31.0 + 10 × 16.0). Using the formula: Mass = Moles × Molar mass Mass of P4O10 = 0.0202 mol × (284 g/mol) = 5.73 g Therefore, 2.50 grams of phosphorus will produce about 5.73 grams of P4O10. To solve this problem, we need to use stoichiometry, which is based on the balanced chemical equation. The balanced chemical equation for the reaction given is: P4(s) + 5O2(g) → P4O10(g) From this equation, we can see that 1 mole of P4 reacts to form 1 mole of P4O10. This means the molar ratio between P4 and P4O10 is 1:1. First, let's calculate the number of moles of phosphorus (P4) using its molar mass. The molar mass of phosphorus (P) is 31.0 g/mol, so the molar mass of P4 is 4 × 31.0 = 124 g/mol. 
Number of moles of P4 = mass of P4 / molar mass of P4 = 2.50 g / 124 g/mol (P4 = 4 × 31.0 g/mol) = 0.0202 mol (rounded to four decimal places) Since the molar ratio between P4 and P4O10 is 1:1, the number of moles of P4O10 formed will also be 0.0202 mol. Now, let's calculate the mass of P4O10 using its molar mass. The molar mass of P4O10 is calculated by summing up the molar masses of the elements in P4O10. Molar mass of P4O10 = 4 × (molar mass of P) + 10 × (molar mass of O) = 124 + 160 = 284 g/mol Mass of P4O10 = number of moles of P4O10 × molar mass of P4O10 = 0.0202 mol × 284 g/mol = 5.73 g (rounded to two decimal places) Therefore, if 2.50 grams of P4 is ignited, approximately 5.73 grams of P4O10 will be formed.
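The three-step recipe can be written as a short Python check (atomic masses of 31.0 g/mol for P and 16.0 g/mol for O are assumed, as above):

```python
# Stoichiometry sketch for P4 + 5 O2 -> P4O10.
M_P, M_O = 31.0, 16.0          # assumed atomic masses, g/mol
M_P4 = 4 * M_P                 # 124 g/mol
M_P4O10 = 4 * M_P + 10 * M_O   # 284 g/mol

mass_p4 = 2.50                      # grams of phosphorus ignited
mol_p4 = mass_p4 / M_P4             # step 1: grams -> moles
mol_p4o10 = mol_p4 * 1              # step 2: 1:1 mole ratio from the equation
mass_p4o10 = mol_p4o10 * M_P4O10    # step 3: moles -> grams
print(round(mass_p4o10, 2))  # 5.73
```

A useful sanity check: the product mass must exceed 2.50 g (oxygen was added) but cannot be many times larger, which 5.73 g satisfies.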
Any impact if I change the owner of a Windows system folder? If I change the owner of a folder under C:\windows\system32 to Administrator, will there be any negative impacts to system and application functionality? Which folder? Better yet, why not start by telling us why you feel it's necessary to change the ownership in the first place? C:\windows\system32\drivers\etc, and it is necessary because my application needs read/write access to the HOSTS file under this folder. Any impact you can think of if I change the owner to the administrator account which my specific application runs under? That should be safe enough, but I'm always nervous about software that wants to change the hosts file. Use DNS instead if at all possible. Thanks John! I am testing against DNS using my specific application, so... About your conclusion -- "should be safe" -- could you explain why you reach that conclusion, please? Is this program released to anyone else other than you? If so, you should clearly point out that you're doing this in case it is causing issues, and if possible try to find another way to do it. Why does your application need both read and write access to the hosts file anyway? Why are you not just granting rights to that specific file and not the folder? "Is this program released to anyone else" -- it just runs on my private intranet servers as a network problem analysis utility (to diagnose network issues, like DNS issues). In this case, any impact if I change the owner of the etc folder? Hi Antitribu, I want to keep it flexible and extensible in the future: grant permission once on the folder, and if I need to access other files in the etc folder (in the future), I do not need to grant again. In my scenario, any impact if I change the owner of the etc folder? "Need both read and write access to the hosts file anyway" -- I write it to test the ability to resolve addresses locally rather than from DNS. After the test, I will restore the HOSTS file's original content. 
The "feature" you're butting heads with is Windows Resource Protection, added initially in Windows Vista. In this case, it's an ACL that, in previous versions of Windows, granted "Administrators" "Full Control" permission but, in Vista and newer versions, prevents "Administrators" from modifying the ACL on the "%SystemRoot%\system32\drivers\etc" folder itself. Odds are good that the change in ownership, so long as you don't mess with the "SYSTEM" and "TrustedInstaller" permissions, won't cause operational issues. I just verified with a Windows 7-based PC that HOSTS-based name resolution continues to work with the owner of the "...\etc" folder changed to "Administrators" and the "Administrators" permission set to "Full Control". I've read your other questions, and I see that you're trying to manipulate the HOSTS file programmatically. I'd strongly caution you not to do what you're trying at all. In this day and age, there's no good argument for using HOSTS file-based name resolution for anything. Run a DNS server and make your changes there. If you need to "override" your production DNS for a "test environment" put up a second DNS server that hosts authoritative zones for any RRs that need to be "overridden". You can "diagnose DNS issues" with tools like "nslookup", win32 ports of dig, and sniffers. Using HOSTS file-based name resolution isn't a useful method for "diagnosing DNS issues". Making this change to stock folder permissions puts your machine(s) into a non-default state that Microsoft may not test for in deployment of future updates. While things appear to "work" today, that's no guarantee that future updates won't cause problems because of assumptions about system folder permissions that such updates might make. Cool, question answered!
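For what it's worth, the HOSTS format being manipulated here is simple: each non-comment line is an IP address followed by one or more names. A minimal Python sketch of a parser (parse_hosts is a hypothetical helper; it works on a string rather than the real %SystemRoot%\system32\drivers\etc\hosts, so no ownership change is needed to try it):

```python
# Hosts-file format: "IP name [aliases...]", with "#" starting a comment.
def parse_hosts(text):
    entries = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if not line:
            continue
        ip, *names = line.split()
        for name in names:                    # every name/alias maps to the IP
            entries[name] = ip
    return entries

sample = "127.0.0.1 localhost\n10.0.0.5 testbox testbox.local  # lab override\n"
print(parse_hosts(sample))
# {'localhost': '127.0.0.1', 'testbox': '10.0.0.5', 'testbox.local': '10.0.0.5'}
```

Writing the file back with elevated rights is the part the ACL discussion above is about; the parsing itself needs no special permissions.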
Intro to Object Oriented Programming By Bernd Klein. Last modified: 01 Feb 2022. This section of our Python tutorial deals with object-oriented programming, usually abbreviated as OOP. It is difficult to summarize the essence of object orientation in a few sentences: Object Oriented Programming (OOP) is a programming paradigm based on the concept of "objects" that can contain data and code. The data is often implemented as attributes. Functions implement the associated code for the data and are usually referred to in object-oriented jargon as methods. In OOP, computer programs are designed by being made up of objects that interact with each other via the methods. It was difficult for us to decide whether to add object-oriented programming to the beginner or the advanced level sections of our Python tutorial. There are some who think it's best to combine learning Python with OOP from the start. This is vital in programming languages like Java. Python can be used without programming in an OOP style. Many beginners to Python prefer this, and if they only want to write small to medium-sized applications, this is good enough. However, for larger applications and projects, it is recommended to look into OOP. The following chapters describe almost all aspects of Python OOP. We decided to introduce the basics of Python without going directly into object-oriented programming. Therefore, these chapters assume that you are familiar with the basics of Python. In this chapter:
1. Object Oriented Programming: general introduction to object-oriented programming and the way it is used in Python.
2. Class vs. Instance Attributes: instance attributes vs. class attributes and their proper usage.
3. Properties vs. Getters and Setters: using properties instead of getter and setter methods in Python classes.
4. Creating Immutable Classes in Python: explore Python's immutable classes for enhanced data integrity. Learn the benefits of immutability with examples!
5. Dataclasses in Python: dive into the power of Python's dataclasses. Simplify class creation, enhance readability, and embrace efficient data management.
6. Implementing a Custom Property Class: a Python class implementing a custom property class.
7. Magic Methods: magic methods and operator overloading with examples; the __call__ method to turn class instances into callables.
8. Dynamic Data Transformation: discover dynamic data transformation in Python through an extensive course example featuring the Product class.
9. Introduction to Descriptors: defining descriptors, summarizing the protocol, and showing how descriptors are called.
10. Inheritance: tutorial on inheritance in Python.
11. Multiple Inheritance: covering multiple inheritance, the diamond problem, MRO and polymorphism in Python.
12. Multiple Inheritance: Example: extensive example of multiple inheritance in Python.
13. Callable Instances of Classes: callables in Python and class instances which can be used like functions; introduction to the __call__ method.
14. Slots: Avoiding Dynamically Created Attributes: a way to prevent the dynamic creation of attributes and to save memory space in certain cases.
15. Polynomial Class: a Python class implementing polynomial functions.
16. Dynamically Creating Classes with type: the relationship between classes and type for advanced programmers; deeper insight into what happens when we define a class or create an instance of a class.
17. Road to Metaclasses: incentive and motivation for learning and using metaclasses. Example classes which could be designed by using metaclasses.
18. Metaclasses: tutorial on metaclasses: theory, usage and example classes using metaclasses.
19. Count Function Calls with the Help of a Metaclass: use cases for metaclasses: counting function calls.
20. The 'ABC' of Abstract Base Classes: abstract classes in Python using the abc module.
21. OOP Purely Functional: introduction to writing functions in OOP style and the connection between functional programming and OOP.
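As a minimal illustration of the vocabulary in the introduction (attributes hold the data, methods implement the associated code), here is a tiny sketch; the Robot class and its names are purely illustrative:

```python
class Robot:
    def __init__(self, name):
        self.name = name            # attribute: data stored on the object

    def say_hi(self):               # method: code associated with the data
        return f"Hi, I am {self.name}"

marvin = Robot("Marvin")            # create an object (instance) of the class
print(marvin.say_hi())              # Hi, I am Marvin
```

Objects interacting "via the methods" simply means that other code calls say_hi() rather than poking at the attributes directly.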
The launch of Microsoft’s hosted software for small businesses is just around the corner, the company said. In an e-mail sent to registered beta testers on Tuesday, Microsoft promised the beta was close at hand. It also explained the three different Office Live packages it designed for small business owners. Microsoft Office Live Basics provides domain-name registration, site-design tools, Web hosting, Web site traffic reports and a still-to-be-determined number of e-mail accounts. Microsoft Office Live Collaboration is a hosted version of Windows SharePoint. It lets a business create shared, password-protected collaboration sites. The offering includes online business applications to manage customer, project, sales and company information. Microsoft Office Live Essentials combines Office Live Basics and Office Live Collaboration, adding additional company e-mail accounts, enhanced access features, advanced Web traffic reports, and Microsoft Office FrontPage Web design software. Microsoft hasn’t provided details about pricing for Office Live; the e-mail said the services would be free during the beta period. Windows Live Expo seems to be a Craigslist-style local marketplace married to MSN communication and mapping. Microsoft will roll out local expos market by market; users will be able to buy, sell, and communicate with the people in their local marketplaces or across the country. As previously reported, Expo will be integrated with Windows Live Local, which will provide local search and mapping capabilities. Each listing will have a map button that allows users to see its location, along with a slider to widen or narrow the area of proximity in which items should be located. Expo will connect with Windows Live Messenger and Windows Live Mail, two offerings already available in beta. (Microsoft plans to eventually migrate all Hotmail users to Windows Live Mail, which is based on AJAX.) 
In addition, Live Contacts lets users publish personal and business information to specified contacts; subscribers will automatically see any changes. Eventually, users will be able to turn on presence capabilities, so that they can see which contacts are available. Microsoft also is planning to enable Voice over IP. Live Messenger and Mail will let users call their contacts directly using their centralized address book. All phone numbers in Live Contacts will be hyperlinks that, when clicked, will launch Windows Live Call. In Windows Live Expo, potential buyers and sellers will be able to use Windows Live Call to discuss transactions. MSN also plans to enable Free Call, a click-to-call capability of MSN Local Search that will let consumers speak to advertisers free of charge. Free Call likely is based on technology Microsoft acquired with its purchase of Teleo last August. The leader in search advertising also is testing a click-to-call advertising service. Microsoft first announced the Live hosted offerings in November 2005.
The Qualys company revealed a vulnerability (CVE-2021-4034) in the system component Polkit (formerly PolicyKit), used in distributions to arrange for unprivileged users to perform actions that require elevated access rights. The vulnerability allows an unprivileged local user to elevate their privileges to root and gain full control over the system. The issue has been codenamed PwnKit and is notable for having a working exploit that runs in the default configuration on most Linux distributions. The problem exists in the pkexec utility included with Polkit, which ships with the SUID root flag and is designed to run commands with another user's privileges according to Polkit's rules. Due to incorrect handling of the command-line arguments passed to pkexec, an unprivileged user could bypass authentication and have their code run as root, regardless of the access rules set. For an attack, it does not matter what settings and restrictions are set in Polkit; it is enough that the SUID root attribute is set on the executable file of the pkexec utility. Pkexec does not check the validity of the command-line argument count (argc) passed when starting the process. The developers of pkexec assumed that the first entry in the argv array always contains the name of the process (pkexec), and the second entry is either NULL or the name of the command run via pkexec. Since the argument counter was not checked against the actual contents of the array and was assumed to always be greater than 1, if a process is passed an empty argv array, which the execve() function allows on Linux, pkexec treated NULL as the first argument (the process name) and the memory beyond the end of the buffer as the subsequent array contents:

|---------+---------+-----+------------|---------+---------+-----+------------|
| argv[0] | argv[1] | ... | argv[argc] | envp[0] | envp[1] | ... | envp[envc] |
|----|----+----|----+-----+-----|------|----|----+----|----+-----+-----|------|
     V         V               V            V         V               V
 "program" "-option"         NULL      "value" "PATH=name"          NULL

The problem is that the argv array is followed in memory by the envp array containing the environment variables. Thus, given an empty argv array, pkexec retrieves the command to be run with elevated privileges from the first element of the environment variable array (argv[1] becomes identical to envp[0]), the contents of which can be controlled by an attacker. Having received the value of argv[1], pkexec tries to determine the full path to the executable file using the file paths in PATH and writes a pointer to the string with the full path back to argv[1], which overwrites the value of the first environment variable as well, since argv[1] is identical to envp[0]. By manipulating the name of the first environment variable, an attacker can introduce another environment variable into pkexec, for example the "LD_PRELOAD" environment variable, which is normally not allowed in suid programs, and arrange for their shared library to be loaded into the process. The working exploit uses substitution of the GCONV_PATH variable, which is used to determine the path to a character transcoding library that is dynamically loaded when calling the g_printerr() function, which uses iconv_open() in its code. By redefining the path in GCONV_PATH, the attacker can load not the regular iconv library but his own library, whose handlers will be executed while the error message is printed, at a stage when pkexec is still running as root and before launch permissions are checked. It is noted that although the problem is caused by memory corruption, it can be reliably and repeatably exploited regardless of the hardware architecture used. The prepared exploit has been successfully tested on Ubuntu, Debian, Fedora and CentOS, but can be used on other distributions as well. 
The original exploit has not been published yet, but it is trivial and can easily be recreated by other researchers, so it is important to install the fix as soon as possible on multi-user systems. Polkit is also available for BSD systems and Solaris, but exploitation there has not been explored. What is known is that the attack cannot be carried out on OpenBSD, since the OpenBSD kernel does not allow passing a null argc value when calling execve(). The problem has been present since May 2009, when the pkexec command was added. The fix for the Polkit vulnerability is currently available as a patch (no corrective release has been formed), but because the distribution developers were aware of the problem in advance, most distributions published an update at the same time the vulnerability was disclosed. The issue is fixed in RHEL 6/7/8, Debian, Ubuntu, openSUSE, SUSE, Fedora, ALT Linux, ROSA, Gentoo, Void Linux, Arch Linux and Manjaro. As a temporary measure to block the vulnerability, you can remove the SUID root flag from the /usr/bin/pkexec program ("chmod 0755 /usr/bin/pkexec"). Addendum: a possible vulnerability in Polkit related to the handling of empty arguments in the suid pkexec application was reported back in 2013, but since the researcher did not develop the idea into a working exploit, the warning was ignored and the problem was not fixed.
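The temporary mitigation works because pkexec is only dangerous while its set-uid bit is present. A small illustrative Python check of that bit in a file mode (the mode values below are assumptions about typical values; on a live system you would pass os.stat("/usr/bin/pkexec").st_mode):

```python
import stat

def has_suid(mode: int) -> bool:
    # True when the set-uid bit (octal 4000) is present in the file mode.
    return bool(mode & stat.S_ISUID)

print(has_suid(0o4755))  # True  -- typical pkexec mode before the mitigation
print(has_suid(0o755))   # False -- mode after "chmod 0755 /usr/bin/pkexec"
```

Note that dropping the bit also disables pkexec's legitimate privilege-granting function, which is why this is only a stopgap until the patched package is installed.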
The year 2022 is destined to be one of data scraping. Businesses compete against one another using large amounts of data from a diverse range of consumers. So, whether it's consumer actions, shared social media content, or the celebrities they follow, all are looking for scraping solutions. As a result, you must invest in developing your data assets to be successful. Numerous firms and industries remain vulnerable to data breaches. According to a 2017 poll, 37.1% of businesses lack a Big Data strategy. Among the remaining businesses that are data-driven, only a tiny proportion have achieved some kind of success. One of the primary causes is their limited understanding of, or a complete lack of, data technology. As a result, data scraping software is critical for establishing a data-driven business strategy; you can scrape websites with Python, Selenium, or PHP, and it is advantageous if you are a good programmer. This article will cover data scraping tools that automate the scraping process. 10 Best Data Scraping Tools on the Web Octoparse is a free and feature-rich web scraper. It's quite nice of them to offer unlimited free pages! Octoparse mimics the human scraping process, making the entire scraping process incredibly simple and smooth to handle. It's acceptable if you have no prior knowledge of programming. Regex and XPath tools can be used to aid in exact extraction. It's typical to come across a website with a messed-up coding structure, as humans build websites, and human beings make mistakes. In this instance, it's easy to overlook these outliers during data collection. Even when scraping dynamic pages, XPath can resolve 80 percent of missing-data difficulties. However, not everyone is capable of writing the correct XPath. This is unquestionably a life-saving feature, courtesy of Octoparse. Additionally, Octoparse includes pre-built templates for Amazon, Yelp, and TripAdvisor. 
Scraped data can be exported to Excel, HTML, and CSV, among other formats. Guidelines and YouTube lessons, pre-built job templates, unlimited free crawls, and Regex and XPath tools: whatever you choose to call them, Octoparse has more than enough incredible features. Regrettably, Octoparse does not yet support PDF data extraction or direct picture download (it can only extract image URLs). Mozenda is a data scraping service that runs in the cloud. It has a web console and an agent builder that enable you to run your agents and view and organize results. Additionally, it allows for exporting or publishing extracted data to a cloud storage provider such as Dropbox, Amazon S3, or Microsoft Azure. Agent Builder is a Microsoft Windows tool that enables you to create your data project. Data extraction occurs on optimized harvesting servers located in Mozenda’s data centers. As a result, this spares the user’s local resources and protects the user’s IP addresses from being blacklisted. The tool includes a full Action Bar that makes recording AJAX and iFrames data simple. Additionally, it offers documentation and image extraction. Apart from multi-threaded extraction and intelligent data aggregation, Mozenda includes Geolocation to avoid IP blocking and Test Mode and Error-handling to help you fix errors. Mozenda is somewhat expensive, starting at $99 for 5,000 pages. It requires a Windows-based computer to work and experiences difficulty when dealing with huge web pages. Perhaps that is why they charge based on scraped pages. 80legs is a very configurable web crawling tool. While it is intriguing that you may customize your program to scrape and crawl, caution is advised if you are not a tech-savvy individual. When customizing your scrape, ensure that you understand each step. The tool can retrieve massive volumes of data and provides an instant download option for the retrieved data.
Additionally, it’s rather remarkable that the free plan allows you to crawl up to 10,000 URLs per run. 80legs makes web crawling technologies more affordable for small businesses and individuals on a shoestring budget. To obtain a large volume of data, you must configure a crawl and a pre-built API. The support team is inefficient. Import.io is a cross-platform data scraping platform that works with most OS systems. It features an intuitive interface that is simple to master without the need to write any code. You can click on any data that appears on a webpage and extract it. The data will be saved on the company’s cloud service for days. It is an excellent enterprise choice. Import.io is a user-friendly application that runs on nearly every operating system. Thanks to its basic layout, simple dashboard, and screen capture, it’s rather simple to use. The free plan has been discontinued, and each sub-page is chargeable, so it can quickly become prohibitively expensive if you extract data from multiple sub-pages. Paid plans start at $299 per month for 5,000 URL queries and $4,999 per year for 500,000. As the name implies, Content Grabber is a robust, feature-rich visual data scraping application for extracting web content. It can automatically collect entire content structures such as product catalogs or search results. Visual Studio 2013 combined with Content Grabber provides a more effective solution for those with solid programming skills. With various third-party solutions, Content Grabber provides consumers with additional possibilities. Content Grabber is exceptionally adaptable in handling complex websites and data extraction. It enables you to customize the scraping to your specifications. The software is only compatible with Windows and Linux operating systems. Due to its high adaptability, it may not be the best choice for beginners. Additionally, it lacks a free version.
The perpetual-license price of $995 deters those looking for a tool for small projects on a shoestring budget. Outwit Hub is one of the most straightforward data scraping tools available. It is free to use and enables you to extract web data without writing a single line of code. It is available as a Firefox add-on and as a desktop application. Its straightforward UI makes it ideal for beginners. The “Fast Scrape” tool is an excellent addition that allows you to quickly scrape data from a list of URLs you provide. The extraction of basic site data does not include advanced capabilities such as IP rotation and CAPTCHA skipping. Without IP rotation and circumventing CAPTCHAs, your scraping task may fail: a high volume of extraction is easily detected, and websites will force you to pause and restrict you from taking further action. ParseHub is a desktop application. Unlike many other web crawling applications, ParseHub runs on most operating systems, including Windows, Mac OS X, and Linux. Additionally, it includes a browser extension that enables quick scraping. Pop-ups, maps, comments, and photos can all be scraped. The instructions are comprehensive, which is a massive plus for beginning users. For programmers with API access, ParseHub is more user-friendly. It supports a broader range of operating systems than Octoparse. Additionally, it is quite versatile for scraping data online for various purposes. On the other hand, the free plan is severely limited in terms of scraped pages and projects, with only five projects and 200 pages scraped per run. Their subscription plan is expensive, ranging between $149 and $499 per month. Scrapes with a high volume of material may cause the scraping operation to slow down. As a result, small projects fit well with ParseHub. Additionally, it has a limited data retention term; therefore, ensure that you store extracted data promptly. Zyte is a smart proxy and data scraping services provider.
It doesn’t matter what the project size is; Zyte can extract any data you want. The platform makes complex work simple, so you can focus on your team and business planning. The pricing model is flexible, so you only pay for the data you need and save on unnecessary expenses. You are in safe hands, as Zyte is GDPR and CCPA compliant with 99.9% data quality, reliability, and accuracy. Zyte’s price intelligence feature helps in tracking prices for different products on competitors’ websites. Dexi.io is a web crawler that runs in the browser. It offers three sorts of robots: extractors, crawlers, and pipelines. PIPES includes a master robot capability that enables a single robot to manage several jobs. It integrates readily with various third-party services (captcha solvers, cloud storage, and so on). Third-party services are unquestionably a plus for skilled scrapers. The outstanding support team assists you in building your robot. The cost is pretty reasonable, ranging between $119 and $699 per month, depending on your crawling capability and the number of active robots. On the other hand, the flow is somewhat tough to comprehend, and debugging bots can occasionally be a pain. Which is the best data scraping tool? Apart from the ones mentioned in this article, Bright Data and Oxylabs are among the top web scraping solution providers. What is the use of a web scraping tool? A web scraping tool crawls different websites and provides insightful data with anonymity. The tool helps in product and price comparison. The open web is by far the most significant worldwide repository of human knowledge, and nearly all of its material is accessible via web data extraction. Because online scraping is performed by many people with varying degrees of technical aptitude and knowledge, numerous solutions are available.
There are web data scraping options for everyone, from non-programmers to seasoned developers looking for the best open-source solution in their preferred language. There is no one-size-fits-all web scraping technology; it all depends on your specific requirements. Hopefully, this list of web data scraping tools and services has helped you identify the best ones for your unique projects or businesses. Many of the scraping solutions mentioned above offer free or discounted trial periods, allowing you to determine whether they will work for your specific company use case. Having said that, some will be more dependable and effective than others. If you’re searching for a tool that can manage data requests at scale and at a reasonable price, it’s worth contacting a sales representative to ensure that they can deliver, before signing any contract.
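For readers who would rather write their own scraper than buy a tool, here is a minimal sketch of the XPath-style extraction technique that tools like Octoparse automate. It uses only the Python standard library on an inline HTML snippet (a stand-in for a fetched product page); real-world scrapers typically pair requests with lxml for full XPath support, or Selenium for JavaScript-heavy pages.

```python
import xml.etree.ElementTree as ET

# A stand-in for a fetched, well-formed product page.
html = """
<html><body>
  <div class="product"><span class="name">Camera A</span><span class="price">$120</span></div>
  <div class="product"><span class="name">Camera B</span><span class="price">$95</span></div>
</body></html>
"""

root = ET.fromstring(html)
# ElementTree supports a useful subset of XPath, including attribute predicates.
products = [
    (div.find("span[@class='name']").text, div.find("span[@class='price']").text)
    for div in root.iter("div")
    if div.get("class") == "product"
]
print(products)  # [('Camera A', '$120'), ('Camera B', '$95')]
```

The same predicate-based selection scales to real pages once the HTML is cleaned up, which is exactly the part visual tools handle for you.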
OPCFW_CODE
ESO Mod is a simple tool designed to expose some of the (so far camera/graphical) settings that are otherwise unavailable in The Elder Scrolls Online. It was inspired by ESO Launcher (by Sorien), which is an excellent alternative to this tool, but I wanted to release an open source variant and also to extend the functionality even further. Unfortunately my tool isn’t as user-friendly as Sorien’s, but I will improve that if there’s sufficient demand. - Set field of view - Set max camera zoom distance - Set tone mapping type (a.k.a. shader filters) - Set min/max/current view distance - Set current time (e.g. change from day to night) - Toggle fog - Toggle 3D - Toggle fader (a.k.a. force high quality models) - Works on both the live and PTS clients - Nothing really yet. Whatever else people suggest and I feel like investigating. This tool will not run the game for you, so ensure the game is running before attempting to use it. If a ‘launcher’ style of functionality is desirable, please let me know and I will consider supporting that. - Download the latest release and extract it anywhere (preferably into its own folder – to prevent it picking up the wrong DLLs – but not required). - Open a command prompt window (cmd.exe) and navigate to the directory containing the tool (alternatively hold shift and right click on the directory in Explorer and use “Open command window here”). - Run “esomod.exe –help” (or “esomod.exe -h”) to print the help and get a list of commands. - Run ESO Mod again with the options you desire (e.g. “esomod.exe –max-view-dist 4.5 –view-dist 4.5 –max-camera-dist 20 –fov 65 –fog” to increase the maximum view distance, set the current view distance to the new maximum, increase the max camera zoom distance, and toggle the fog). You typically should not need to run this tool as admin unless you’re also running ESO as admin (which you normally should not do).
If you get errors about missing CRT DLLs you probably need to download and install the Visual C++ Redistributable Packages for Visual Studio 2013 (I recommend installing both the x86 and x64 flavours, but only the x86 flavour is required in order for ESO Mod to function). Please note that for technical reasons XP is unsupported, and this tool will not even load on it. It could be made to work, but it involves work which I’m not at all interested in doing given that XP is EOL. (Please note that if a new patch is out and this isn’t updated yet then don’t assume it won’t work; it should continue to function without updates for most patches.) 20140603-2259: Initial release. 20141123-2355: Update for eso.live.220.127.116.113508. Fixes FoV mod. 20141227-0034: Update for eso.live.18.104.22.1681867. Fixes ‘fog’ and ‘fader’. Feature requests, bug reports, code patches, interesting memory addresses, etc: I can be contacted through a variety of mediums, but probably the best for this would be either the hadesmem project issue tracker or directly via email (email@example.com). Anything else: Email me. (firstname.lastname@example.org) This is a personal project and is not endorsed by my employer. Project released under the MIT license. The MIT License (MIT) Copyright (c) 2012-2014 Joshua Boyce Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the “Software”), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions: The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED “AS IS”, WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
OPCFW_CODE
This thread was archived. Please ask a new question if you need help. Mozilla keeps crashing, why? I uninstalled it, but when I try to reinstall it, it crashes after saying, "Finished checking Yahoo! toolbar." In the end I uninstalled Mozilla and deleted all I could of it. I downloaded and reinstalled it again and now it's fine. (Read this answer in context) All Replies (14) Did you download from the official Mozilla website? If not, please uninstall Firefox from the Control Panel > Programs and Features/Add or Remove Programs. Then delete the Mozilla Firefox folder: - (32-bit computer) C:\Program Files\Mozilla Firefox\ - (64-bit computer) C:\Program Files (x86)\Mozilla Firefox\ And then reinstall. If you are still having problems, are you able to start in safe mode? Firefox Safe Mode is a troubleshooting mode that turns off some settings and disables most add-ons (extensions and themes). If Firefox is not running, you can start Firefox in Safe Mode as follows: - On Windows: Hold the Shift key when you open the Firefox desktop or Start menu shortcut. - On Mac: Hold the option key while starting Firefox. - On Linux: Quit Firefox, go to your Terminal and run firefox -safe-mode (you may need to specify the Firefox installation path e.g. /usr/lib/firefox) When the Firefox Safe Mode window appears, select "Start in Safe Mode". If the issue is not present in Firefox Safe Mode, your problem is probably caused by an extension, and you need to figure out which one. Please follow the Troubleshoot extensions, themes and hardware acceleration issues to solve common Firefox problems article to find the cause. To exit Firefox Safe Mode, just close Firefox and wait a few seconds before opening Firefox for normal use again. Some added toolbar and anti-virus add-ons are known to cause Firefox issues. Disable all of them. You can check if you can start Firefox in Safe Mode by holding down the Shift/Options key.
The shift key got me to safe mode but that's it. The problem can't be extensions because I've had them long before Mozilla started crashing. I uninstalled it, but when I tried reinstalling it, it would crash repeatedly while checking for compatible extensions. Don't know if it means anything, but it always crashed after checking the Yahoo toolbar. Lastly, I did get it from the official site. Sorry to hear that your Firefox seems to be crashing. Please perform the following steps to give us a crash report ID that helps us find out more about the cause of the crash. - Press the following shortcut to get a Run window: [Windows] + [R]. This should bring up a window that contains a text field. - In that text field, enter %APPDATA% and press Enter. An explorer window should open. - From that explorer window, double-click on the Mozilla folder, then double-click on Firefox and then on Crash Reports. Double-click on submitted. - Now, you should see a list of files that contain reports. Go to View > Arrange Icons by > Modified to get the most recent files at the top of the window. - Open the most recent 5 files with a text editor and copy the IDs. - Paste each ID with bp- into the reply window on the forums. Thanks in advance! Curtis Parfitt-Ford Mozilla Support Tried to paste but didn't work, hope this helps. Having looked on the system, the only things I can find about this that have not been patched are related to the RoboForm plugin. Do you have RoboForm? If so, could you try removing it? Curtis Parfitt-Ford Mozilla Support Some of your crash reports weren't sent to the Mozilla servers. In the address bar, type about:crashes. Note: If any reports do not have BP in front of the numbers, click it and select Submit. Using your mouse, mark the most recent 7 - 10 crash reports, and copy them. Now go to the reply box below and paste them in. Don't have RoboForm. Since I'm having trouble figuring out how to paste crash reports, I had to do it manually.
bp-6f4804fc-15c0-473c-9134-5f7012141101 bp-752ad047-8264-4b0c-896c-513212141101 bp-bb7fc1e6-6ba3-4ef7-b596-426bf2141101 bp-752ad047-8264-4b0c-896c-513212141101 bp-589df20c-6b11-42aa-bb1d-55ca62141101 bp-3ef75e97-fdec-406e-b738-1e766214110 bp-1ff3e8de-4d41-4f6a-bc27-1caa52141101 bp-2543ba-bffa-4037-85b0-a04d82141101 Again hope this is helpful. The crash reports say you are using Firefox v30. Please upgrade. You can check if you can start Firefox in Safe Mode by holding down the Shift/Options key and see if disabling hardware acceleration in Firefox helps. - Tools > Options > Advanced > General > Browsing: "Use hardware acceleration when available" You need to close and restart Firefox after toggling this setting. In the end I uninstalled Mozilla and deleted all I could of it. I downloaded and reinstalled it again and now its fine.
OPCFW_CODE
Modern web development techniques are evolving at an astounding rate. You only have to look at what’s on the cutting edge to realise that, quite frankly, digital marketers who lack interest in the front end web development and design community will eventually be at a disadvantage. For marketers there is no such thing as “too technical”. The bottom line is this: you work on the web, so you need to understand the web. Learn how it’s built, and you’ll find yourself able to make great things happen on your own, simply by learning and experimenting. How cool. Many of you have been SEO practitioners for a long time, and I’ll wager that if you’re like me, interesting new stuff is what keeps you excited about SEO, particularly in the field of what’s possible in technical SEO and content development. Take a look at this (and pay attention to the address bar): Quartz uses pushState to change the visible URL as new content is loaded on the web page. PushState is part of the HTML5 History API — a set of tools for managing state in the browser. So, the address bar is updated to match the specified URL without re-loading the page. Using pushState for user experience with infinite scroll As the user scrolls down the page, the visible URL changes and new content is loaded using Ajax. It’s *because* new content is loaded (infinite scroll) that the ideal way to handle the address bar is to update the visible URL. That way, a reader can share the right thing or return to the right place. This is provided, of course, that share URLs are updated in the HTML source by manipulating the DOM. Something easily possible with jQuery. The problem with the SEO part of this Of course, the solution is to be mindful of the non-JS experience you’re serving. Applying the principles of graceful degradation in the early stages of layout, design and code goes a long way toward dealing with that. Take a look at this Porsche category page from our friends at Pond5.
Note how the JS-enabled view uses infinite scroll to continually add new content to the page. If you scroll down to the bottom of the page, you’ll receive all the content. That won’t happen if you’re a crawler, which is why you need to think about your non-JS experience, too: Pond5 have inserted paginated links in their non-JS view. The analytics / tracking problem with this Loading content via Ajax poses a problem for your page tracking, though; even though you’re updating the URL in the browser address bar, Google Analytics won’t see this as a separate page view. However, this can easily be solved by firing _trackPageview at the same time the URL is updated. If you’re not using GA, obviously the call will be slightly different for you, but all the main analytics providers offer similar functionality for manually sending the call to track the page. Hopefully that helps resolve any areas of uncertainty you’ve found in your day-to-day technical consulting involving pushState and infinite scroll! Let me know if you have any questions in the comments below! https://developer.mozilla.org/en-US/docs/Web/Guide/API/DOM/Manipulating_the_browser_history – Manipulating the browser history https://blog.twitter.com/2012/implementing-pushstate-for-twittercom – Implementing pushState for twitter.com http://vip.wordpress.com/2014/08/28/building-qz-com-full-transcript/ – Building Quartz http://tumbledry.org/2011/05/12/screw_hashbangs_building – Screw Hashbangs: Building the Ultimate Infinite Scroll
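Putting the two pieces together (updating the visible URL with pushState as content loads, then reporting a virtual pageview to analytics) might look like the following sketch. The helper names and the /page/N URL scheme are hypothetical, not taken from Quartz or Pond5:

```javascript
// Compute the canonical URL for page N of a listing (hypothetical scheme).
function nextPageUrl(basePath, page) {
  // Strip any existing /page/N suffix, then append the new one.
  return basePath.replace(/\/page\/\d+$/, "") + "/page/" + page;
}

// Call this after each Ajax batch of infinite-scroll content has been inserted.
function onContentLoaded(page) {
  var url = nextPageUrl(window.location.pathname, page);
  // pushState updates the address bar without a reload; the state object
  // lets a popstate handler restore the right content on back/forward.
  history.pushState({ page: page }, "", url);
  // Classic Google Analytics virtual pageview, as discussed in the article:
  // _gaq.push(['_trackPageview', url]);
}
```

The pure `nextPageUrl` helper is what keeps share URLs and tracked URLs consistent; the browser-only calls (`history`, `window`) run unchanged in any History-API-capable browser.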
OPCFW_CODE
I'm using Win7 and Classic Shell 3.6.2. Most settings are working fine, EXCEPT I have some personalised Menu shortcuts located in C:\ProgramData\Microsoft\Windows\Start Menu\Sub Menu After clicking Start, Sub Menu, it takes nearly 15 seconds for the Sub Menu items to appear. This is way too slow for a satisfactory daily experience. I also use Microsoft Security Essentials. I have included the following entries in Settings, Excluded files and locations: C:\Program Files\Classic Shell C:\Program Data\Microsoft\Windows\Start Menu But still Classic Shell is too slow. Any tips or clues for how to fix this? [Edited to include double backslashes for legibility.] I have also noticed this. I am running 3.6.2, Windows 7 64-bit and MSE. The problem seems to happen the first time I access a submenu. The menu just locks up for the initial reading of the items. Is this a caching issue? Is it possible to temporarily disable MSE to see if it is even related? If you are experiencing slow opening of the sub-menus in Classic Shell 3.6.2, please contact me at firstname.lastname@example.org. I have a hunch what the problem might be and I want to try a few things. The new version 3.6.3 improves the performance of the start menu. Let me know if you still have this problem after you upgrade. Ivo, reporting that v3.6.3 does not suffer the Start Button delays on two of my Win7 64-bit systems. For me it's fixed, good work! I'm now using 3.6.5 and the slowness problem appears fixed. So many thanks. Is there some nice way to make all the shortcut icons appear in the custom menu? I'm thinking the icons are not visible because the shortcuts (symlinks?) point to executables located across the LAN. Running 3.6.3 under Windows 8 Pro x64. I have a folder that contains 11 shortcuts to my favorite programs.
When CSM is started the first time, it takes a couple of seconds for my shortcuts to 'flyout' when I hover on the folder. After this they 'flyout' fast. This is the only folder that does this. Things like Control Panel, Administrative Tools, Programs, Apps, etc all 'flyout' immediately. Any way to avoid this delay when CSM first starts? What kind of programs are they? Are they on a local machine or on the network? They are local, on an SSD drive as a matter of fact. The programs are mostly system utilities like CPU-Z, GPU-Z, and RealTemp, to name a few. I used Customize Start Menu to place the custom folder on the rightmost column of my 2-column menu. Is there an option under Edit Menu Item that would cause my shortcuts to load faster? Is it slow only the first time after you log in, or does it also become slow after you haven't used the start menu for a while? UPDATE: I've been using version 3.6.3 for a couple of days now, and it is running sweetly with none of those long pauses -- or even short pauses (it opens pretty much instantaneously). It's only slow the first time after I start up Windows. From then on it is fast, until the next restart/logon, when it is slow again the first time. When I fired up my machine this morning, hovering over my folder took about 5 seconds before the menu with my programs appeared. I wonder if the delay is due to CS grabbing the program icons from the different folders that contain the exes? Perhaps the icons need to be pre-cached in this case. The start menu precaches icons from the Start Menu folders and the Control Panel. Folders that are manually added to the start menu are not precached. That's because they can be very large - like Computer. One thing you can do is place that folder somewhere under Programs. You can still have a direct link to it in the main menu. Did what you said and no more delay. Do you think an option to cache icons for custom folders is needed? No, I don't want to do that. Some of the custom folders can contain a lot of items.
There is no way to know in advance before I start caching them. I am looking into ways to get the menu to open faster even before icons are loaded.
OPCFW_CODE
Jim Brisimitzis / July 2018 How do you get developers’ attention? Jim Brisimitzis, General Manager of Microsoft for Startups, shares his experience working with the developer and startup community. In this video, Jim explains how he defined Microsoft’s position in the cloud market and what they did differently to attract developers. How do you get developers’ attention? There’s a lot of demand for developers’ attention today. In the startup world, make no mistake, developers are coming together to build the next thing. Whether it’s the Yo app, which is a famous little app that got started here not too long ago, or full-fledged companies like Pinterest, Facebook or Twitter, they’re developing for a purpose. They are trying to create a service, a new product, or a new platform. When we started our journey about three years ago, we were less interested in trying to position developer tools or platforms and more interested in what their journeys were. We tailored everything we did around the startup’s individual journey. Ultimately, what we wanted them to be was an active participant in our ecosystem: not to just run our cloud platforms, but to be of value to the vast partners and customers that Microsoft has reach into, and, frankly, to make our products better by providing valuable product insight and feedback that would help our cloud platform, our developer tools and everything else. What we have found is that by focusing on them and not on us, we actually got further along in our journey and got better insights than by just trying to peddle. I hate to say peddle, but to push developer tools or platforms. They’re getting a lot of that from a lot of our competitors already, and so we wanted to be different. Two and a half years ago, credit goes to Trace, who is on my team out of New York City. She heads up our East Coast operations and focuses on New York. She made a bet on a couple of early industries that have actually paid off really well for us.
The first bet she made was on genomics. We tailored a specific package and partnered with Microsoft Research. We went after a set of what we consider to be important early-stage genomics startups. We were interested in them because we had a set of services that were coming to the Azure platform, primarily powered by GPU services, but we also had this amazing research coming out of Microsoft Research that was highly applicable to what the startups were doing in the genomics space. That then led to another deep dive into the financial sector, in the rental services, and now more pointedly she has been pursuing blockchain as a recent endeavor. What we have done differently there is that we have decided, hey, there’s an emerging bubble of really cool innovations and we want to be part of that. Let’s go and identify those and jump in. Genomics being one example and blockchain being another. Trace also led quite successfully over the last three years a retail showcase of our startups. It’s a startup showcase that we host at NRF, the National Retail Federation, every January in New York City. That’s an example of us bringing a set of curated startups, startups that are solving real problems, to customers who are attending these events. What we’ve found is that the matchmaking is near perfect. Almost every startup has either got a deal or is in discussions to do business with customers from those engagements. Something I brought my team up to speed on is that we took a very critical view of where Microsoft was. We wanted to be humble about where Microsoft was as an ecosystem player in the startup world. We also wanted to be aware of some of the challenges that we would face. When I look at how we compare to AWS or Google, at a cloud technology platform level you can argue that there are some symmetries across each and every one of them. AWS may have better services, but we will catch up. We may be further ahead of them and they will catch up.
The same goes for Google. What we wanted to do is really get at where we felt some of our competitors were doing better. Where I think AWS has done a good job is, one, they were out in the cloud market first. Because they were out in the cloud market first, they were able to build a community. We didn’t have that, not at the scale that Microsoft needed. So I had to push my team very politely to actually start building a community, and we thought: how do we do that? We can hang out at Y Combinator and 500 Startups all day long and do office hours and do all that work, but what do we do beyond that? So I said, we’ll take them out for dinner. I am a European; my parents immigrated from Greece to Toronto. I grew up there, and a lot of what we did on the weekends was the community: my family or extended family got together. Long story short, we started something called CTO supper clubs or CEO supper clubs, where we would invite a select number of our startups to get together. The brilliance that came from that wasn’t the meal or people coming together; it was the fact that startups were collaborating among themselves and helping each other. That’s the best form of help that you could create. That was something that Microsoft was lacking at the time; we just didn’t have that vast community that AWS was enjoying. That’s one thing that we did differently, and we still do those today. We bring a few startups together and there is no agenda; we’ll just meet at a restaurant. We are selective about who we bring in because we want to make sure that everyone is getting something from that dinner. It’s not that we are trying to be exclusive. We want to make sure that the people who are showing up actually can get value from the other people who are showing up as well. It has proved to be a very successful way of building a community for us.
Beyond that, when startups go off after those dinners they are actually, to our benefit, building communities on the side, helping us in our endeavors. That was one big thing that we did early on and continue to do in terms of building community. So much of what we try to do is to be orthogonal to what our competitors are doing, which is to always push cloud, cloud, cloud, and pick their platform. We have never really done that, and we have done extremely well by exactly not doing that.
OPCFW_CODE
What is Proper Logging? Having a proper logger is essential for any production application. In the Java world, almost every framework automatically pulls in Logback or Log4J, and libraries tend to use SLF4J in order to be logger agnostic and to wire up to these loggers. So, I set out to see how to do similar logging in Python. While it can get fancier, I think the following things are essential when setting up a logger, so they were what I was looking for: - It should be externally configured from a file that your operations team can change. - It should write to a file automatically, not just the console. - It should roll the file it writes to at a regular size (date or time rolling on top of that can be beneficial too, but the size restriction ensures you won’t fill up your disk with a ton of logs and break your applications). - It should keep a history of a few previous rolled files to aid debugging. - It should use a format that specifies both the time of the logs and the class that logged them. On top of these, obviously we must be able to log at different levels and filter out which logs go to the file easily. This way, when we have issues, operations can jack up the logging level and figure out what is going wrong as needed. How Do We Do It in Python 3? It turns out that Python actually has a strong logging library built into its core distribution. The only extra library I had to add to use it was PyYAML, and even that could have been avoided (Python supports JSON out of the box and that could be used instead, but people seem to prefer YAML configuration in the community). In the place where your app starts up, write the following code. Note that you have to install the PyYAML module yourself. Also, this expects the “logging.yaml” file to be in the same directory as the startup code (change that if you like though). We’ll show the “logging.yaml” content lower down. # Initialize the logger once as the application starts up.
import logging
import logging.config
import yaml

with open("logging.yaml", 'rt') as f:
    config = yaml.safe_load(f.read())
logging.config.dictConfig(config)

# Get an instance of the logger and use it to write a log!
# Note: Do this AFTER the config is loaded above or it won't use the config.
logger = logging.getLogger(__name__)
logger.info("Configured the logger!")

Then, when you want to use the logger in other modules, simply do this:

logger = logging.getLogger(__name__)
logger.info("Using the logger from another module.")

Of course, you just have to import logging once at the top of each module, not every time you write a log.

This code uses "logging.yaml", which contains the following settings. Note that:

- It defines a formatter with the time, module name, level name, and the logging message.
- It defines a rotating file handler which writes to my.log and rolls the file at 10MB, keeping 5 historic copies. The handler is set up to use our simple format from above.
- The "root" logger writes to our handler and allows only INFO messages through.
- The handler is set to DEBUG, so if the root logger is increased to DEBUG during an investigation, it will let the messages through to the log file.

Here is the "logging.yaml" example file:

# Define format of output logs (named 'simple').
format: "%(asctime)s - %(name)s - %(levelname)s - %(message)s"

# Create rotating file handler using 'simple' format.
maxBytes: 10485760 # 10MB

The code and YAML for this were adapted from this very good blog, which I recommend reading: https://fangpenlin.com/posts/2012/08/26/good-logging-practice-in-python/.
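Since JSON is supported out of the box, the same settings described above (rotating file handler at 10MB with 5 backups, DEBUG handler level, INFO root level, the "simple" format) can be sketched without PyYAML at all, using only the standard library. The names "simple", "file" and "my.log" follow the prose above; treat this as an illustrative stdlib-only equivalent rather than the blog's exact config:

```python
import json
import logging
import logging.config

# Same settings as the YAML config described above, expressed as JSON.
CONFIG_JSON = """
{
  "version": 1,
  "disable_existing_loggers": false,
  "formatters": {
    "simple": {"format": "%(asctime)s - %(name)s - %(levelname)s - %(message)s"}
  },
  "handlers": {
    "file": {
      "class": "logging.handlers.RotatingFileHandler",
      "level": "DEBUG",
      "formatter": "simple",
      "filename": "my.log",
      "maxBytes": 10485760,
      "backupCount": 5
    }
  },
  "root": {"level": "INFO", "handlers": ["file"]}
}
"""

# Load the config once at startup, then log through the configured root.
logging.config.dictConfig(json.loads(CONFIG_JSON))
logger = logging.getLogger(__name__)
logger.info("Configured the logger!")
```

Jacking the root level up to "DEBUG" during an investigation is then a one-word change in the config file, with no code edits.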
OPCFW_CODE
Serial Peripheral Interface (SPI) Programming of EEPROM / Flash / MCU using SuperPro IS01

The SuperPro IS01 Programmer supports high-speed programming of SPI compatible serial EEPROMs and Flash memory devices. The Serial Peripheral Interface (SPI) programmer (SuperPro IS01 or Gang ISP programmer SuperPro IS03) provides fast programming of any SPI memory device by controlling the SPI bus signals directly through a dedicated high-speed SPI interface on the programmer. Users can erase, program, verify and read the content of SPI EEPROM and Flash memory devices.

The programming operation steps are as follows:

- Search chip part numbers in the ISP programming software to view the operation hint.
- Connect signal lines (including GND) for the corresponding interfaces of the ISP programmer with the target board.
- If the "mass production" function is to be used, TPIN and TPOUT signal lines should also be connected.
- It is recommended that the target board power is supplied independently, especially for target boards with high power consumption and multiple power systems. Otherwise, VCC should be connected if power is supplied by the ISP cable.
- Switch on the target board's independent power supply.
- Operate the erase, blank check, program, and verify functions from the software screen.

According to the device information box from the software, signal connections should comply with the table below. Following is an example pinout of SPI EEPROM:

The in-system programmer and target system need to operate with the same reference voltage. This is done by connecting the ground of the target to the ground of the programmer. The target AVR microcontroller will enter Serial Programming mode only when its reset line is active (low). When erasing the chip, the reset line has to be toggled to end the erase cycle. To simplify this operation, the target reset is controlled by the In-System Programmer.
When programming the AVR or other SPI chips in Serial mode, the In-System Programmer supplies clock information on the SCK pin. This pin is always driven by the programmer, and the target system should never attempt to drive this wire when target reset is active. Immediately after Reset goes active, this pin will be driven to zero by the programmer.

When programming the AVR in Serial mode, the In-System Programmer supplies data to the target on the MOSI pin. This pin is always driven by the programmer, and the target system should never attempt to drive this wire when target reset is active.

When Reset is applied to the target AVR microcontroller, the MISO pin is set up to be an input with no pull-up. Only after the "Programming Enable" command has been correctly transmitted to the target will the target AVR microcontroller set its MISO pin to become an output. Until then, the In-System Programmer will apply its pull-up to keep the MISO line stable until it is driven by the target microcontroller.

VCC can be supplied from the ISP programmer to the target PCB. This allows the target to be programmed without applying power to the target externally. Another option is to use an external power supply, in which case VCC from the programmer is left unconnected and the board is powered from the external supply.

When designing hardware supporting In-System Programming, always connect the ground of the target to the ground of the In-System Programmer. Do you have any questions?
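To make the bus traffic concrete, the command frames the programmer shifts out can be sketched in a few lines. This is a hypothetical illustration (not the SuperPro software): the 0x03 READ and 0x02 PAGE PROGRAM opcodes are common across 25-series SPI serial EEPROM/Flash parts, but the exact opcodes, address width and page size must be confirmed against the specific device's datasheet.

```python
# Typical 25-series SPI serial EEPROM/Flash opcodes (confirm with datasheet).
READ = 0x03          # read data bytes starting at an address
PAGE_PROGRAM = 0x02  # program bytes (WREN 0x06 must be sent first, CS toggled)

def read_frame(address: int, nbytes: int) -> bytes:
    """Opcode + 24-bit big-endian address + dummy bytes clocked on MOSI
    while the device shifts data out on MISO."""
    return bytes([READ]) + address.to_bytes(3, "big") + bytes(nbytes)

def program_frame(address: int, data: bytes) -> bytes:
    """Opcode + 24-bit big-endian address + payload; on real parts the
    payload must not cross a page boundary."""
    return bytes([PAGE_PROGRAM]) + address.to_bytes(3, "big") + data

frame = read_frame(0x000010, 4)
# frame is 03 00 00 10 followed by four dummy 0x00 bytes
```

The programmer drives SCK and MOSI for the whole frame and samples MISO only during the trailing dummy bytes, which is why a shared ground (and common reference voltage) between programmer and target is mandatory.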
OPCFW_CODE
What is better, one JavaScript framework, or multiple frameworks?

I am an avid user of the YUI framework (http://developer.yahoo.com/yui/). It has its strengths and weaknesses, both performance-wise and syntax-wise. I have seen a bit of jQuery and I have worked a little with Prototype as well, but I have stuck mainly to YUI. My question is: is it better to stick with one JavaScript library per application, or leverage the abilities of multiple JavaScript frameworks in your application?

I think it is better to use one framework for at least two reasons:
1. Code is easy to maintain because there is no syntax mix.
2. The application loads a little faster and I think should execute a little faster.

Also, it is easy to ask for help on forums. You may have to debug your code by yourself if you use a mix of them. Thanks!

I had never considered that angle. Asking for others' help is definitely easier if you have fewer variables to deal with.

Elaborating on Jenea's second point, using a single framework = fewer bytes to load, as you only need to download a single framework's JavaScript resources. More frameworks = more JS = more download time + more browser initialization time (since most JS frameworks have code that must be processed on page load).

My guess is that multiple frameworks are better as long as each has its purpose. If I'm building an ASP.NET web application with AJAX functionality, there may be some built-in ASP.NET AJAX JavaScript libraries being used automatically that can be combined with jQuery to handle some situations. Alternatively, one could have third-party controls like Telerik's RAD controls that also bring in more JavaScript code. The key is to understand what each framework is adding versus rolling your own.

To clarify, the Telerik RadControls do not use or load a proprietary JavaScript framework. They build natively on the Microsoft Ajax libraries and jQuery.
The controls, of course, load JS to provide their richness, but the underlying frameworks are standard.

How close do the Telerik controls come to being a framework, or are they not viewed from that perspective? I'm asking this as an honest question of where the line is for how much code it takes for something to be a framework, that's all.

Sure, fewer frameworks in the same website will make your life easier, so try as you can to use one framework, and if you are going to use more than one, take care of conflicts and redundancy. If I were in your place, I would start searching the framework I have for some plugins and updates, and if I didn't find them, would add the new framework. One more point: don't panic about using more than one framework; the big and famous frameworks such as jQuery have implementations to solve conflicts and work side by side with other JavaScript libraries.

Yes, YUI version 3 will be designed so it stays completely separate from any other JavaScript libraries out there. I would agree with the answer given that currently it's probably a good idea to keep frameworks separate, but it seems like more and more libraries are being upgraded to allow developers to use multiple libraries in the same app. So we'll see as these newer versions come out whether using two or more libraries remains a bad idea.

Sometimes plugins are as big as another library. I think there may be times where it's worth it to bring in another library which has cleanly implemented a feature that you otherwise need a plugin for.

Also... if you use 2 different frameworks at the same time, some functions in one framework could override a function in the other framework and make ugly conflicts. E.g., $() could be implemented in different ways and make something crash if other functions of the framework are using it (and they sure do!). That's why jQuery has a compatibility mode.

I think it's better to use 1 framework in your development, for consistency of API and loading speed.
The problem sometimes is that no framework is comprehensive enough to cover all of our development needs. This just came into my mailbox; their advertisement says it's a comprehensive framework, with plenty of widgets: grid with grouping, charts, forms, tabs, fields and so on. I haven't played with it long, but it seems very promising. Check here.
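The `$()` clash described above can be sketched in a few lines of plain JavaScript. This is the idea behind jQuery's compatibility mode (`jQuery.noConflict()`), shown here as a simplified model with a stand-in global object, not jQuery's actual source:

```javascript
// Stand-in for the browser's window object.
const globalScope = {};

// First library (say, Prototype) claims the global $.
globalScope.$ = function () { return "prototype"; };

// A second library remembers whatever $ pointed to before overwriting it...
const previousDollar = globalScope.$;
globalScope.$ = function () { return "jquery"; };

// ...so it can hand $ back and still be usable under a private name.
function noConflict() {
  const current = globalScope.$;
  globalScope.$ = previousDollar; // restore the first library's $
  return current;                 // caller keeps its own reference
}

const jq = noConflict();
// Now globalScope.$() belongs to the first library again, while jq()
// still reaches the second one.
```

In real pages this is `var jq = jQuery.noConflict();` after both scripts have loaded (jQuery included last), which is why the conflicts described above are avoidable rather than fatal.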
STACK_EXCHANGE
This is done using the Sound and Video Game Controls tool of the Windows Control Panel. Select "Not This Time" when prompted to connect to Windows Update. To open Device Manager, press the Windows key + X and select Device Manager. This modem appears in nearly all HP Pavilions made since 2001. 2 – You must subscribe to the Call Waiting service with your phone company in order to use the "Modem On Hold" enhancement. Can you please suggest me some way to do this? I am new to Windows programming but have programming experience on the Linux platform. First, follow the steps outlined to obtain your modem's call log for an incoming call. If the "Select Language" screen appears, select the language you wish to use, and then click "OK".

Exploring Fundamental Factors Of Driver Updater

PhoneTools software lets you send and receive faxes and is included as an optional piece of software for your convenience. When the Update Device Driver Wizard or Add New Hardware Wizard detects the modem, insert the USRobotics Installation CD-ROM into your CD-ROM drive. Double-click Phone and Modem, then select the Modems tab. If you don't see Phone and Modem, use the search bar at the top right or switch to "small icon" view. Right-click on the Start menu button, then select Control Panel. I too want to do the same process using C or a batch file. Please refer to this link for my post on this: stackoverflow.com/questions/ /api-to-create-a-new-modem, but I didn't find any solutions yet.

Factors Of Driver Updater Revealed

- See the driver definition for further information and related links.
- If you have a laptop, follow the manufacturer's instructions for installing a new graphics card in the machine.
- When you finish making your printer driver settings, click OK to apply the settings, or click Cancel to cancel your changes.
- Under "Display adapters", right-click your graphics card and select "Properties".
IF YOU ARE INSTALLING WINDOWS XP and you are having trouble with the modem, see Windows XP & Modems. If your computer has access to the Internet via another adapter, you can also search for the driver automatically from the Internet. In COM Port, click on the drop-down menu and select the correct one from the list. A. Plug the modem into a USB port on Device ManageR's host system. Highlight your modem by clicking its name and then click the Remove button. This name displays when you view all of your Internet connections. Information in this article applies to Windows 8. It may vary slightly or significantly with other versions or products. Click the "View" menu in the Device Manager window and make sure "Devices By Type" is selected. This file is only for use with the models identified. Please refer to the IQFSP Installation Guide for details on how to install IQFSP. Send a test fax to the Microsoft Fax printer driver to ensure it is working. Verify there is a System device named Voice Modem Wave Device as shown. Verify your VTScada voice modem wave bus enumerator to ensure the voice feature of the modem will work correctly with the VTScada Alarm Notification System.
OPCFW_CODE
[arm-servicebus] - RestError: getaddrinfo ENOTFOUND management.azure.com

Package Name: @azure/arm-servicebus and @azure/identity
Package Version: 6.1.0 (arm-servicebus), 3.1.3 (identity)
Operating system: [x] nodejs version: 18.13

Describe the bug
We are getting the following error irregularly on some requests (but not all) to Service Bus management endpoints:

RestError: getaddrinfo ENOTFOUND management.azure.com
    at ClientRequest.<anonymous> (/app/node_modules/@azure/core-rest-pipeline/dist/index.js:1699:24)
    at Object.onceWrapper (node:events:628:26)
    at ClientRequest.emit (node:events:525:35)
    at ClientRequest.emit (node:domain:489:12)
    at TLSSocket.socketErrorListener (node:_http_client:496:9)
    at TLSSocket.emit (node:events:513:28)
    at TLSSocket.emit (node:domain:489:12)
    at emitErrorNT (node:internal/streams/destroy:151:8)
    at emitErrorCloseNT (node:internal/streams/destroy:116:3)
    at process.processTicksAndRejections (node:internal/process/task_queues:82:21)

This is not happening locally, but only on deployed instances in App Service. Because of this, some services cannot function as intended. This worked without problems before and seems like some kind of regression?

To Reproduce
I have not found a way to reproduce this yet.

Expected behavior
Requests to management endpoints should not fail.

Additional Context
The ServiceBusManagementClient is being instantiated as follows:

new ServiceBusManagementClient(
  new DefaultAzureCredential(),
  this.configService.get(CONFIG.AZURE_SUBSCRIPTION_ID),
);

with new DefaultAzureCredential() being populated via the AZURE_TENANT_ID, AZURE_CLIENT_ID, AZURE_CLIENT_SECRET and AZURE_SUBSCRIPTION_ID environment variables. I updated @azure/identity to 3.3.2, but the error still occurs.

getaddrinfo ENOTFOUND management.azure.com makes it look like a DNS issue from the App Service side. I'd recommend also filing a ticket in the Azure portal for the App Service team to take a look.
But how come some requests to that endpoint can be resolved while some cannot? I mean, the host doesn't change for subsequent requests, does it?

Following up, I reported the issue in the Q&A forum for Azure: https://learn.microsoft.com/en-us/answers/questions/1399301/app-service-instances-sometimes-cannot-resolve-hos

Also, it seems like someone has the same issue, but that thread was left unsolved: https://learn.microsoft.com/en-us/answers/questions/1390460/facing-issues-with-dns-not-resolving-sometimes-and

One possible guess is that you have requested too frequently, which triggered some kind of rule in the firewall.

That'd be a problem. Is there any way to find out more about rate limits? Given that the error originates from host resolution, do you think this is related to DNS rate limits? We do not use a custom DNS, so I guess the rates for an Azure-provided DNS apply, is this correct? That would be rough, because it rates at >1k queries per second. Still doesn't really make sense to me though, because we tested it in an instance that only we (devs) can access. And that should result in far fewer than 1k queries/second.

I am not sure if the limit is calculated per App Service instance, or by how frequently your service calls the (Service Bus) APIs. But if you suspect there's a network availability issue, I can think of two ways for you to do troubleshooting:

- Redeploy your App Service instance to a different region and see if the problem still exists.
- Try to find a successfully returned IP address with nslookup or something and add it to your App Service instance's machine /etc/hosts, if that VM is also hosted by you. This could be risky for production, as the IP address could become unavailable in the future.

If you find the issue still exists after trying all these, there might not be a network availability issue. A rate limit is most likely to be the cause.
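Until the DNS side is understood, a pragmatic client-side mitigation is to retry calls that fail with a transient ENOTFOUND. This is a hypothetical sketch (not an SDK feature) of a generic wrapper that could be put around management calls:

```javascript
// Retry an async call when DNS resolution fails transiently with ENOTFOUND,
// with a short, growing backoff between attempts. Any other error is
// rethrown immediately.
async function retryOnEnotfound(fn, attempts = 3, delayMs = 100) {
  let lastErr;
  for (let i = 0; i < attempts; i++) {
    try {
      return await fn();
    } catch (err) {
      lastErr = err;
      // Only the DNS failure from the issue is considered retryable.
      if (!/ENOTFOUND/.test(String(err && err.message))) throw err;
      await new Promise((resolve) => setTimeout(resolve, delayMs * (i + 1)));
    }
  }
  throw lastErr; // exhausted all attempts
}
```

Usage would be something like `await retryOnEnotfound(() => client.namespaces.get(resourceGroup, namespaceName))` — though this only papers over the resolver problem and won't help if every lookup fails.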
GITHUB_ARCHIVE
While getting ready for my interview with Semil Shah on TechCrunch TV tomorrow, I’ve been thinking about the key enablers of the “Internet of Things.” I see three key enablers for technology applications that connect the real world to the Internet (aka the Internet of Things):

- Smartphones. The massive, unprecedented consumer adoption of two major mobile computing platforms (iOS and Android) puts enormous processing power in the hands of end-users worldwide. Consumers can use these devices to discover and interact with the world (and nearby devices) in their immediate vicinity in powerful new ways.
- Sensors. Moore’s Law has driven the cost of sensors into the ground, at a similar rate to CPU power and speed. Every smartphone today packs enormous sensor capabilities in terms of location, sound, accelerometers, imaging and touch. Instagram and Foursquare are valuable examples of startups that have leveraged sensor technology in ways that appeal to mainstream consumers.
- Actuators. These technologies are advancing just as rapidly as sensors and CPUs, especially in terms of low cost and high reliability. But startups haven’t fully realized their potential - yet.

Actuators allow control, not just information transfer. Today, no one is talking about the importance of actuators. But these are the devices that translate computation into action - also known as robots - which is where startups are about to make massive amounts of money. The next Facebook/Google/Microsoft will be powered by actuators.

Look at the Nest Labs thermostat, for example. It senses ambient temperature, monitors occupancy through motion sensors, and records user input when the ring is turned clockwise or counterclockwise. But the magic in the device is the fact that it controls the HVAC system. Without modern solid-state voltage actuators, this wouldn’t be possible in such a sexy device.

For another great example, just go to the toy store and pick up a radio-controlled helicopter or airplane.
For $50, anyone can buy a personal flying drone with hours of HD video recording built in. Military-scale spying for a fraction of the cost. This type of $50 aircraft wouldn’t exist without super-cheap, state-of-the-art actuators to control propeller speed and control surfaces for long durations and difficult conditions (i.e., handled by kids).

The next great example, about to happen, will be the fully automated car. Every major subsystem in modern automobiles is in the process of being replaced by drive-by-wire systems. Manual transmissions are becoming obsolete. By marrying these advances in actuation technology with massive recent advances in computer vision, the self-driving car is about to revolutionize transportation forever.

Low-cost, reliable actuators represent the final building block of robotics - which is on the cusp of a huge revolution. To date, robotics has been the realm of HAL and the Roomba. But once the self-driving car hits the streets in volume, robotics will finally enter the mainstream. And it couldn’t happen without the recent, incredible advances in actuators.
OPCFW_CODE
OK. Here's a first shot -- a very lightly poached version of the FAQ screed.

1. The FAQ points at a broken URL -- http://language.perl.com/news/y2k.html. Is a working alternative available?

2. Whatever README.Y2K ends up saying should probably be pasted back into

3. "-DCFLAGS=-DPERL_Y2KWARN" is a somewhat unfriendly Configure option. Support for a simple -Dy2kwarn would be preferable.

4. Whatever the Configure option ends up as, it should be documented in

--- /dev/null	Wed Sep 15 10:23:24 1999
+++ README.Y2K	Wed Sep 15 11:58:29 1999
@@ -0,0 +1,46 @@
+The following information about Perl and the year 2000 is a modified
+version of the information that can be found in the Frequently Asked
+Question (FAQ) documents.
+
+Does Perl have a year 2000 problem? Is Perl Y2K compliant?
+
+Short answer: No, Perl does not have a year 2000 problem. Yes,
+        Perl is Y2K compliant (whatever that means). The
+        programmers you've hired to use it, however, probably are
+        not. If you want perl to complain when your programmers
+        create programs with certain types of possible year 2000
+        problems, a build option allows you to turn on warnings.
+
+Long answer: The question belies a true understanding of the
+        issue. Perl is just as Y2K compliant as your pencil
+        --no more, and no less. Can you use your pencil to write
+        a non-Y2K-compliant memo? Of course you can. Is that
+        the pencil's fault? Of course it isn't.
+
+        The date and time functions supplied with perl (gmtime and
+        localtime) supply adequate information to determine the
+        year well beyond 2000 (2038 is when trouble strikes for
+        32-bit machines). The year returned by these functions
+        when used in an array context is the year minus 1900. For
+        years between 1910 and 1999 this happens to be a 2-digit
+        decimal number. To avoid the year 2000 problem simply do
+        not treat the year as a 2-digit number. It isn't.
+
+        When gmtime() and localtime() are used in scalar context
+        they return a timestamp string that contains a fully-
+        expanded year. For example, $timestamp =
+        gmtime(1005613200) sets $timestamp to "Tue Nov 13 01:00:00
+        2001". There's no year 2000 problem here.
+
+        That doesn't mean that Perl can't be used to create non-
+        Y2K compliant programs. It can. But so can your pencil.
+        It's the fault of the user, not the language. At the risk
+        of inflaming the NRA: ``Perl doesn't break Y2K, people
+        do.'' See http://**Need_working_URL_here**/y2k.html for a
+        longer exposition.
+
+        If you want perl to warn you when it sees a program which
+        catenates a number with the string "19" -- a common
+        indication of a year 2000 problem -- build perl using the
+        Configure option "-DCFLAGS=-DPERL_Y2KWARN". (See the
+        file INSTALL for more information about building perl.)

--
Migrated from rt.perl.org#1480 (status was 'resolved')
Searchable as RT1480$
OPCFW_CODE
Timer Control - VB.Net

The Timer Control holds significant importance in both client-side and server-side programming, as well as in its application in Windows Services. This versatile control enables precise control over the timing of actions without the need for concurrent thread interaction.

The Timer Control proves invaluable when it comes to managing and scheduling events in various programming scenarios. It empowers developers to set specific intervals or time delays for executing tasks, allowing for the automation of actions within a defined time frame.

In client-side programming, the Timer Control serves as a valuable tool for enhancing user experience and interactivity. It enables the implementation of timed events, such as updating data, refreshing content, or triggering specific actions based on predefined intervals. This control facilitates a seamless and dynamic user interface, providing timely updates and responsiveness to user interactions.

On the server side, the Timer Control plays a pivotal role in scheduling and executing background tasks or server processes. It allows developers to schedule routine tasks, perform regular maintenance, or trigger specific actions at predetermined intervals. By utilizing the Timer Control, server-side applications can efficiently manage and optimize resource utilization while ensuring timely execution of essential operations.

Use of Timer Control

The Timer Control proves to be a versatile tool in numerous scenarios within our development environment. When there is a need to execute code at regular intervals continuously, the Timer Control comes into play. Additionally, it serves various purposes such as initiating processes based on fixed time schedules, adjusting animation graphics' speed over time, and more. The Visual Studio toolbox conveniently offers a Timer Control, allowing for effortless drag-and-drop integration onto a Windows Forms designer.
During runtime, the Timer Control operates as a background component without a visible representation, ensuring seamless functionality.

How to Use the Timer Control?

The Timer Control offers precise program control at various time intervals, ranging from milliseconds to hours. It grants us the ability to configure the Interval property, which operates in milliseconds (where 1 second equals 1000 milliseconds). For instance, if we desire an interval of two minutes, we can set the Interval property value to 120000, which represents 120 multiplied by 1000. The Timer Control starts functioning only after its Enabled property is set to True; by default, the Enabled property is False.

The following program provides an example of utilizing a Timer to display the current system time in a Label control. To accomplish this, we require a Label control and a Timer Control. In this program, we observe that the Label Control is updated every second, since we set the Timer Interval to 1 second, equivalent to 1000 milliseconds. To implement this functionality, we begin by dragging and dropping the Timer Control onto the designer form. Next, we double-click the Timer control and set the Label control's Text property to the value of DateTime.Now.ToString(). This ensures that the Label control reflects the current system time accurately.

Start and Stop Timer Control

The Timer Control object provides us with the flexibility to determine when it should start and stop its operations. It offers convenient Start and Stop methods that allow us to initiate and halt its functionality as needed. By invoking the Start method, we trigger the Timer Control to commence its operations and execute the designated tasks based on the specified interval. Conversely, by invoking the Stop method, we instruct the Timer Control to cease its operations and halt the execution of associated actions.
These start and stop methods offer precise control over the timing and duration of the Timer Control's function within our applications. Here is an example that demonstrates the usage of the Start and Stop methods with the Timer Control. In this particular scenario, we want the program to run for a duration of 10 seconds. To achieve this, we start the Timer in the Form_Load event and subsequently stop it after 10 seconds have elapsed. The Timer's Interval property is set to 1000 milliseconds (1 second), causing the Timer to trigger its Tick event every second during runtime. As a result, the Timer will execute its Tick event 10 times, aligning with the desired 10-second runtime duration.

Full Source VB.NET

The Timer Control serves as a fundamental component in both client-side and server-side programming, as well as in its utilization in Windows Services. It empowers developers to control the timing of actions and events without the need for external thread interaction, enhancing the performance, interactivity, and reliability of applications in various contexts.
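Putting the pieces above together, a minimal sketch of the full source could look like the following. It assumes a form with one Timer (Timer1) and one Label (Label1) dragged from the toolbox as described above; the control names are the designer defaults, not taken from a published listing:

```vbnet
Public Class Form1
    ' Counts Tick events so we can stop after 10 seconds.
    Private ticks As Integer = 0

    Private Sub Form1_Load(sender As Object, e As EventArgs) Handles MyBase.Load
        Timer1.Interval = 1000  ' 1000 ms = 1 second
        Timer1.Start()          ' begin raising Tick events
    End Sub

    Private Sub Timer1_Tick(sender As Object, e As EventArgs) Handles Timer1.Tick
        Label1.Text = DateTime.Now.ToString()  ' show the current system time
        ticks += 1
        If ticks >= 10 Then
            Timer1.Stop()       ' halt after the desired 10-second run
        End If
    End Sub
End Class
```

Because the WinForms Timer raises Tick on the UI thread, the Label can be updated directly with no cross-thread marshalling, which is exactly the "no concurrent thread interaction" benefit described above.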
OPCFW_CODE
Bitcoin Embassy Amsterdam’s own Slack Channel

Members, please join our main Slack channel now and start interacting with the other members of the Bitcoin Embassy Amsterdam! Access to the Slack channel is reserved for Full, Pro and Corporate Bitcoin Embassy members.

How to join our Slack Channel?

Full, Pro and Corporate members can request access using the Slack button in ‘My Embassy’. Once registered you will find various other (sub) channels for you to join, as listed below. We will be gradually extending our sub-channels depending on the needs and desires of our members:

- “This channel is for team-wide communication and announcements for all registered Members on our Embassy Slack.”
- “A place for non-work-related flimflam, faffing, hodge-podge or jibber-jabber you’d prefer to keep out of more focused work-related channels.”
- “Discussions, suggestions and introductions to extend our Embassy and reach out to others: individuals, experts and other organizations in The Netherlands and other countries.”
- “Buy low, sell high! This is the channel for all your bitcoin related trading & price chat. For current bitcoin prices, send a message that starts with !cgy”. On this channel you will find the Bitcoin prices!
- “Encouraging participation and networking between and about women in the IT, Bitcoin and Blockchain technology sphere. Facilitating discussions on all sorts of topics and future projects etc. Primarily intended for women to join.”
- “Discussions, resources and educational materials on Blockchain and Bitcoin. Information on available Blockchain (online) courses: dates, places and subjects.”
- “Discussions etc. for practical Action Plans to reach students to increase Bitcoin & Blockchain technology awareness, participation in related communities and the Bitcoin Embassy Amsterdam in particular.“
- “Discussions on Bitcoin & Blockchain. Legal aspects and compliance requirements in various EU member states.
Channel for lawyers and compliance officials and officers.” “To facilitate our Bitcoin / Blockchain friends from Malta. Promote their activities, meetings and projects. For networking and collaboration with others in or outside Malta. For discussions and exchanging of ideas.” What is Slack and what are its advantages? Slack is a platform for team communication: everything in one place, instantly searchable, available wherever you go. Like many other companies and organizations on the web we used Slack to create various “open channels” for projects, (sub) groups and relevant (discussion) topics in our Embassy. These Channels include messages, files & comments, inline messages & video, rich link summaries and integration with services like Twitter, Dropbox & Google Drive. Through our Embassy Slack you can remain perfectly in sync and stay up to date with the latest news and (group) discussions there etc. on all your devices. How to enable Slack on all your Devices? Slack is an online (hosted) platform but you can download or install the slack desktop app for PC or Mac or get the slack app for your favorite iOS or Android device so you stay in sync easily. Slack creates a more effective and easier way of staying in touch and greatly reduces internal email as desktop & mobile messaging, file sharing and notifications are collected in one place. All can stay in touch and see how conversations are developing, contributions of various persons etc. It makes everyone more engaged and more productive! You can even add your Files and make them available to other members. We hope to welcome you on our Slack Channel soon!
OPCFW_CODE
So, what does that new AI change exactly?

I had similar problems to you - this is my workaround.
1. Find the ThirdAge.exe file (search) and put a shortcut on your desktop.
2. Run the Launcher and then run ThirdAge.exe from the shortcut.
3. To get rid of the M2TW music, press Alt-Tab and close the leftmost icon in your sidebar (has a picture of the launcher).
You should now get TATW loading.

I note that Tharbad has been split into North and South. Do I need both for the Arnor emergence?
Last edited by StealthFox; June 07, 2012 at 01:48 PM. Reason: Double Post

Is this patch 3.2 different from the patch 3.2 in here: http://www.twcenter.net/forums/showthread.php?t=500418?

Oh, okay. I've seen that one several times. Now... I feel really dumb for having to ask this instead of just understanding it properly based on the simple instructions I've been given, but I'm going to write down what I've been doing so someone can point to whichever step is confounding me and say, "Well there's your problem, you dummy! Not that way; do it like this!" and save me from myself.

1. I install 3.0 to the following location: C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War
2. I copy the contents of the Third_Age folder into what used to be the Americas folder, and 3.0 will play correctly. Except for the launcher apparently not closing when it's told to. I've tried the proposed workaround, but I keep getting an error: something about not being able to find Kingdoms.exe (even when Thirdage.bat is in C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War, which is the same folder as kingdoms.exe).

Now, step 3 is the part that confuses me. To which of the following paths should I install 3.2?
C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War
C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War\mods
C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War\mods\americas

When I install to C:\Program Files (x86)\Steam\steamapps\common\Medieval II Total War, the patch does not seem to activate: versioninfo still says 3.0, and none of the changes described in the changelog appear to be in effect. Also, is there another step involved after this, such as copy-pasting files to another location? I'm sorry for being obnoxious and stupid in not understanding this. Usually I don't have this much trouble installing mods. Thank you all for being helpful.

I don't really "know my way"; I read a short guide on the settlements file and changed a few values. I don't know which is for morale, and I don't know what the vanilla one is (I can't find the vanilla files). And how can I look at the files without installing the mod? If I knew this stuff I wouldn't be asking here. I asked because I thought there was someone who knew the answers to these questions without having to spend hours poring through half-comprehensible game files - it takes just a minute or two to answer if you know it.

For the moderator: merging the post here makes it drown in random comments - this "one thread to rule them all" seems a very ineffective way to handle such questions, since they can be very different in nature, and most people don't bother to go through and read every single post; it is much easier to go through a forum and read thread topics. It would be great if this stuff was in the changelog, btw. For example, technical issues, praise and critique have little in common with my thread, as well as several others here. Does it really take up that much server space to have a few extra threads flying about?
You know, honestly, if I took the time to look around instead of checking one or two things and then asking stupid questions, I'd have saved you all from having to read my last several posts. I have now found, read, and followed the installation instructions, and the game is working as it should be. My apologies for wasting the collective time of my fellow users. PS: Don't I feel like a smacked ?

1. Most low-tier units have had their morale (stat_mental) reduced by one. This includes Gondor militia, Snagas, etc., and militia archers. I didn't find any changes for elite or medium units (Gondor infantry, swan knights, etc.).
2. I can find no changes in the Settlements file at ALL. I don't know why this file is included in the patch; maybe just to make it compatible with 3.1 (3.0 to 3.1 changes)? Chivalry still sucks (capped at 0.5% growth), health still has 0% growth, etc. Corruption has the same value, as do wages, trade, farming, etc. If growth has been changed, I guess it is the individual settlement farming levels.
3. No change as far as I can see.
4. Bodyguards cost the same in wages and upkeep, AFAIK.

Since I approve of the changes in part 1, and of course the addition of more custom battle maps, this patch is a getter. If anyone could verify or explain anything else here I would appreciate it. As it isn't save-game compatible, will my save of 3.1 still work with this patch installed (but without the changes), or will it just cease to function?
- To erase deleted files beyond recovery on Windows 11 (or 10), use the “cipher /w:DRIVE-LETTER:FOLDER-PATH” or “cipher /w:DRIVE-LETTER:” command.

On Windows 11, you can use the “Cipher” tool to wipe deleted data from the hard drive and make it unrecoverable without formatting the entire storage, and in this guide, I’ll walk you through the steps to use this tool. Cipher.exe is a command-line tool that has been around for a long time in the client and server versions of the operating system. Microsoft designed the utility to encrypt and decrypt data on drives using the NTFS file system. However, you can also use it to overwrite deleted data to prevent recovery.

When you delete a file or folder, the system does not immediately remove the data from the hard drive. Instead, it marks the data for deletion and keeps it available until other data overwrites it. That's why you can recover accidentally deleted data, and why it is always best to stop using the device immediately after an accidental deletion to improve your chances of recovery using special software. If you have deleted data beyond the Recycle Bin and want to ensure it’s unrecoverable, you can use the Cipher tool in Command Prompt to overwrite it with zeros and ones, making it difficult to recover. In this guide, I’ll outline the steps to use the command-line tool to overwrite deleted data and wipe the information from the hard drive on Windows 11 (as well as on Windows 10).

Use Cipher to overwrite deleted data on Windows 11

To wipe deleted data from the drive with Cipher on Windows 11 (or 10), use these steps: Open Start on Windows 11. Search for Command Prompt, right-click the top result, and choose the Run as administrator option. Type the following command to securely erase deleted data and press Enter:

cipher /w:DRIVE-LETTER:FOLDER-PATH

In the command, replace “DRIVE-LETTER” with the letter of the drive containing the deleted content and “FOLDER-PATH” with the path to the folder to completely erase from the hard drive.
For example, this command uses Cipher to wipe out the “aws-rclone-test” folder that I previously deleted: Type the following command to securely erase the free space that may contain deleted data and press Enter:

cipher /w:DRIVE-LETTER:

In the command, replace “DRIVE-LETTER” with the letter of the drive whose free space you want to wipe. For example, this command wipes only the available free space of the “C:” drive that may contain recoverable data:

cipher /w:C:

(Optional) Type the following command to overwrite deleted data with multiple passes and press Enter:

cipher /w:DRIVE-LETTER: /p3

In the command, replace “DRIVE-LETTER” with the letter of the drive whose free space you want to wipe. You can also change “3” to the number of passes you wish to use. The greater the number, the more time the process will take.

Once you complete the steps, Cipher will overwrite the deleted data, making it very difficult for anyone to use recovery software to reconstruct and restore files and folders from the hard drive. Cipher only overwrites free space where deleted data may still reside; it doesn’t wipe existing, accessible data. You can also run this tool on the “C:” drive where the operating system is installed.
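Putting the steps together, a typical elevated Command Prompt session might look like the sketch below. The drive letter and folder path are placeholders, not values from the article; substitute your own (and note that wiping free space can take a long time on large drives):

```cmd
:: Run in an elevated Command Prompt (Run as administrator).
:: D: and D:\old-projects are example placeholders - use your own.

:: Overwrite the free space that used to hold a deleted folder.
cipher /w:D:\old-projects

:: Overwrite all free space on the drive.
cipher /w:D:
```

Either form wipes the free space of the whole volume; pointing `/w:` at a folder simply tells Cipher where to create its temporary working files.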
Contributors mailing list archives

Re: Proposal for new workflow, incorporating "Optimistic Merging"

Thank you Luis,

OCA uses the simpler Feature Branch workflow, where we have a single stable branch. GitFlow elaborates on this by separating that reference branch into three: develop, release and master. There is a nice comparison at the Atlassian website. From that perspective, I can see that my initial proposal ends up being something in between, using two branches: develop and release/master.

But even that "simplified GitFlow" faces an issue: contributions to the OCA are mostly opportunistic. We don't have a dedicated team systematically working on the codebase. Instead, people share code they find can be useful to others. But then they need to address their bill-paying customers first, and often can't find the time or motivation to improve the code according to the review feedback. That's the main reason why PRs are stalled - lack of follow-up from the author. So a "develop" branch would quickly become bloated with zombie code. With Feature Branch, that code just hangs in a separate branch, and can eventually be picked up by someone later.

The point of my revised proposal is instead to leverage collaboration on feature branches:
- have them visible (this is achieved with open PRs), and
- foster incremental collaboration in those feature branches, which today happens seldom.

For the second point, some action items could be:
- preferring to break feature additions into multiple PRs (instead of requesting the authors to expand the current PR).
- providing easy documentation on how to make a PR on a PR
- eventually providing some git wrapper to make it easier to fetch PR branches

Quoting Luis Felipe Miléo <email@example.com>:

Hi, +1. There is a tool that, with some adaptation, can even automate the process of merging and creating pull requests in OCA.
https://datasift.github.io/gitflow/

Available subcommands are:
   init      Initialize a new git repo with support for the branching model.
   feature   Manage your feature branches.
   release   Manage your release branches.
   hotfix    Manage your hotfix branches.
   push      Push the changes from your current branch (plus any new tags) back upstream.
   pull      Pull upstream changes down into your master, develop, and current branches.
   update    Pull upstream changes down into your master and develop branches.
   version   Shows version information.

- When a developer creates a new feature / hotfix, a new branch is created on their fork. This helps other contributors know that someone is working on a particular activity, and lets them interact with it;
- When the feature is completed, the branch is automatically merged into beta (or develop);
- When a hotfix is finished, it is merged into both the beta branch and master (8.0 / 9.0);

Best Regards,
Luis Felipe Miléo
Open Source Integrators

Daniel Reis
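For the "PR on a PR" workflow mentioned above, the plumbing already exists in plain git: GitHub exposes every open pull request as a read-only ref. A sketch (the remote names and PR number 123 are illustrative, not from this thread):

```shell
# Fetch the branch behind an open pull request; GitHub publishes it
# under refs/pull/<number>/head on the target repository.
git fetch origin pull/123/head:pr-123

# Build on top of the contributor's work in a follow-up branch.
git checkout -b pr-123-followup pr-123

# Commit improvements, push to your own fork, and open a PR that
# targets the original contributor's branch instead of the main one.
git push my-fork pr-123-followup
```

This is roughly what a small wrapper script could automate, so incremental collaboration on a stalled PR costs one command rather than a manual remote setup.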
Unified Access Gateway deployments for end-user computing products and services need high availability for Workspace ONE and VMware Horizon on-premises deployments. However, using third-party load balancers adds to the complexity of the deployment and troubleshooting process. This solution reduces the need for a third-party load balancer in the DMZ front-ending Unified Access Gateway.

This module will guide you through the deployment of two Unified Access Gateway appliances and the setup of High Availability on both. The deployment will use a PowerShell script, and the setup of High Availability will be done through the administration console. Unified Access Gateway will be deployed with two NICs, one facing the internet and the second dedicated to Management and Backend access. This manual covers Unified Access Gateway 3.4 deployment in vSphere 6.5 U1.

Virtual IP address and Group ID

Unified Access Gateway requires an IPv4 virtual IP address and a Group ID from the administrator, and assigns the virtual IP address to only one of the appliances (nodes) in the cluster configured with the same virtual IP address and Group ID. When the Unified Access Gateway holding the virtual IP address fails, the virtual IP address gets reassigned automatically to one of the nodes available in the cluster. High availability and load distribution occur among the nodes in the cluster that are configured with the same Group ID.

How Unified Access Gateway High Availability distributes traffic

For Horizon and web reverse proxy, multiple connections originating from the same source IP address are sent to the same Unified Access Gateway node that processed the first connection from that client. Per-App Tunnel and Content Gateway connections are stateless and session affinity is not required, so the least connection algorithm is used to distribute that traffic. Unified Access Gateway high availability supports 10,000 concurrent connections in the cluster.
Different Unified Access Gateway services require different algorithms.
- For VMware Horizon and Web Reverse Proxy - Source IP Affinity is used, with the round robin algorithm for distribution.
- For VMware Tunnel (Per-App VPN) and Content Gateway - There is no session affinity, and the least connection algorithm is used for distribution.

Methods that are used for distributing the incoming traffic:
- Source IP Affinity: Maintains the affinity between the client connection and the Unified Access Gateway node. All connections with the same source IP address are sent to the same Unified Access Gateway node.
- Round Robin mode with high availability: Incoming connection requests are distributed across the group of Unified Access Gateway nodes sequentially.
- Least Connection mode with high availability: A new connection request is sent to the Unified Access Gateway node with the fewest current connections from clients.

All of the following prerequisites are already installed for this module; the following information is just for your reference. To deploy Unified Access Gateway using the PowerShell script, you must use specific versions of VMware products.
- vSphere ESX host with a vCenter Server.
- The PowerShell script runs on Windows 8.1 or later, or Windows Server 2008 R2 or later.
- The Windows machine running the script must have the VMware OVF Tool command installed.
- You must install OVF Tool 4.3 or later from https://www.vmware.com/support/developer/ovf/
- Download the UAG virtual appliance image from VMware. This is an OVA file, e.g. euc-access-point-3.4.X.X-XXXXXXXXXXX.ova. Refer to the VMware Product Interoperability Matrixes to determine the version to download.
- Download the correct UAG PowerShell script version - a file named uagdeploy-VERSION.ZIP - and extract the files into a folder on your Windows machine. The scripts are hosted at https://my.vmware.com under the Unified Access Gateway product.
- You must select the vSphere datastore and the network to use. Starting with version 3.3, you can deploy Unified Access Gateway without specifying the netmask and default gateway settings in Network Protocol Profiles (NPP). You can specify this networking information directly during deployment of your Unified Access Gateway instance.
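As a rough sketch of what the PowerShell-based deployment looks like in practice: the uagdeploy ZIP contains a deployment script driven by an INI file per appliance. The script and file names below follow the convention in that ZIP, but treat them as assumptions and start from the sample INI files you extracted:

```powershell
# Run from the folder where the uagdeploy scripts were extracted.
# uag1.ini describes one appliance (name, OVA path, vSphere target,
# two-NIC deployment option, network settings, etc.).
.\uagdeploy.ps1 uag1.ini

# Deploy the second appliance from its own INI file. High Availability
# (virtual IP address + Group ID) is then enabled afterwards in the
# administration console of both appliances.
.\uagdeploy.ps1 uag2.ini
```

Keeping one INI file per appliance makes redeployments repeatable: to rebuild a failed node, you rerun the script with the same file rather than re-entering settings in an OVF wizard.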
Of Mice and Elephants

[This post has been written by Martin Casado and Justin Pettit with hugely useful input from Bruce Davie, Teemu Koponen, Brad Hedlund, Scott Lowe, and T. Sridhar]

This post introduces the topic of network optimization via large flow (elephant) detection and handling. We decompose the problem into three parts: (i) why large (elephant) flows are an important consideration, (ii) smart things we can do with them in the network, and (iii) detecting elephant flows and signaling their presence. For (i), we explain the basics of elephants and mice and why the distinction matters for traffic optimization. For (ii), we present a number of approaches for handling the elephant flows in the physical fabric, several of which we’re working on with hardware partners. These include using separate queues for elephants and mice (small flows), using a dedicated network for elephants such as an optical fast path, doing intelligent routing for elephants within the physical network, and turning elephants into mice at the edge. For (iii), we show that elephant detection can be done relatively easily in the vSwitch. In fact, Open vSwitch has supported per-flow tracking for years. We describe how it’s easy to identify elephant flows at the vSwitch and in turn provide proper signaling to the physical network using standard mechanisms. We also show that it’s quite possible to handle elephants using industry-standard hardware based on chips that exist today. Finally, we argue that it is important that this interface remain standard and decoupled from the physical network, because the accuracy of elephant detection can be greatly improved through edge semantics such as application awareness and a priori knowledge of the workloads being used.

The Problem with Elephants in a Field of Mice

Conventional wisdom (somewhat substantiated by research) is that the majority of flows within the datacenter are short (mice), yet the majority of packets belong to a few long-lived flows (elephants).
Mice are often associated with bursty, latency-sensitive apps, whereas elephants tend to be large transfers in which throughput is far more important than latency. Here’s why this is important. Long-lived TCP flows tend to fill network buffers end-to-end, and this introduces non-trivial queuing delay to anything that shares these buffers. In a network of elephants and mice, this means that the more latency-sensitive mice are being affected. A second-order problem is that mice are generally very bursty, so adaptive routing techniques aren’t effective with them. Therefore, routing in data centers often uses stateless, hash-based multipathing such as Equal-cost multi-path routing (ECMP). Even for very bursty traffic, it has been shown that this approach is within a factor of two of optimal, independent of the traffic matrix. However, using the same approach for very few elephants can cause suboptimal network usage, like hashing several elephants onto the same link when another link is free. This is a direct consequence of the law of small numbers and the size of the elephants.

Treating Elephants Differently than Mice

Most proposals for dealing with this problem involve identifying the elephants and handling them differently than the mice. Here are a few approaches that are either used today or have been proposed:

- Throw mice and elephants into different queues. This doesn’t solve the problem of hashing long-lived flows to the same link, but it does alleviate the queuing impact of the elephants on the mice. Fortunately, this can be done easily on standard hardware today with DSCP bits.

- Use different routing approaches for mice and elephants. Even though mice are too bursty to do something smart with, elephants are by definition longer lived and likely far less bursty. Therefore, the physical fabric could adaptively route the elephants while still using standard hash-based multipathing for the mice.

- Turn elephants into mice. The basic idea here is to split an elephant up into a bunch of mice (for example, by using more than one ephemeral source port for the flow) and letting end-to-end mechanisms deal with possible re-ordering. This approach has the nice property that the fabric remains simple and uses a single queuing and routing mechanism for all traffic. Also, SACK in modern TCP stacks handles reordering much better than traditional stacks. One way to implement this in an overlay network is to modify the ephemeral port of the outer header to create the entropy needed by the multipathing hardware.

- Send elephants along a separate physical network. This is an extreme case of the second approach. One method of implementing this is to have two spines in a leaf/spine architecture, with the top-of-rack switch directing the flow to the appropriate spine. Often an optical switch is proposed for the spine. One way to do this is a policy-based routing decision using a DSCP value that by convention denotes “elephant”.

At this point it should be clear that handling elephants requires detection of elephants. It should also be clear that we’ve danced around the question of what exactly characterizes an elephant. Working backwards from the problem of introducing queuing delays on smaller, latency-sensitive flows, it’s fair to say that an elephant has high throughput for a sustained period. Often elephants can be determined a priori without actually trying to infer them from network effects. In a number of the networks we work with, the elephants are either related to cloning, backup, or VM migrations, all of which can be inferred from the edge or are known to the operators. vSphere, for example, knows that a flow belongs to a migration. And in Google’s published work on using OpenFlow, they had identified the flows on which they use the TE engine beforehand (reference here). Dynamic detection is a bit trickier.
Doing it from within the network is hard due to the difficulty of flow tracking in high-density switching ASICs. A number of sampling methods have been proposed, such as sampling the buffers or using sFlow. However, the accuracy of such approaches hasn’t been clear due to the sampling limitations at high speeds. On the other hand, for virtualized environments (which are a primary concern of ours, given that the authors work at VMware), it is relatively simple to do flow tracking within the vSwitch. Open vSwitch, for example, has supported per-flow granularity for the past several releases now, with each flow record containing the bytes and packets sent. Given a specified threshold, it is trivial for the vSwitch to mark certain flows as elephants.

The More Vantage Points, the Better

It’s important to remember that there is no reason to limit elephant detection to a single approach. If you know that a flow is large a priori, great. If you can detect elephants in the network by sampling buffers, great. If you can use the vSwitch to do per-packet flow tracking without requiring any sampling heuristics, great. In the end, if multiple methods identify it as an elephant, it’s still an elephant. For this reason we feel it is very important that the identification of elephants be decoupled from the physical hardware and signaled over a standard interface. The user, the policy engine, the application, the hypervisor, a third-party network monitoring system, and the physical network should all be able to identify elephants. Fortunately, this can be done relatively simply using standard interfaces. For example, to affect per-packet handling of elephants, marking the DSCP bits is sufficient, and the physical infrastructure can be configured to respond appropriately. Another approach we’re exploring takes a more global view. The idea is for each vSwitch to expose its elephants along with throughput metrics and duration.
With that information, an SDN controller for the virtual edge can identify the heaviest hitters network-wide, and then signal them to the physical network for special handling. Currently, we’re looking at exposing this information within an OVSDB column.

Are Elephants Obfuscated by the Overlay?

No. For modern overlays, flow-level information and QoS markings are all available in the outer header and are directly visible to the underlay physical fabric. Elephant identification can exploit this characteristic.

This is a very exciting area for us. We believe there is a lot of room to bring edge understanding of workloads, and the ability of software at the edge to do sophisticated trending analysis, to bear on the problem of elephant detection and handling. It’s early days yet, but our initial forays with both customers and hardware partners have been very encouraging. More to come.
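As a toy illustration of the threshold idea discussed above - once a vSwitch has per-flow byte counters, splitting flows into elephants and mice is a cheap comparison - here is a hedged Python sketch. The flow-record shape and the 10 MB threshold are assumptions for illustration, not Open vSwitch internals:

```python
# Toy sketch of threshold-based elephant detection. A real vSwitch would
# read per-flow byte counters from its flow table; here the counters are
# just a dict keyed by a 5-tuple-style flow key.

def classify_flows(flow_bytes, threshold=10 * 1024 * 1024):
    """Split {flow_key: bytes_sent} records into elephants and mice."""
    elephants = {k: v for k, v in flow_bytes.items() if v >= threshold}
    mice = {k: v for k, v in flow_bytes.items() if v < threshold}
    return elephants, mice

# Example: one long-lived bulk transfer among short request flows.
flows = {
    ("10.0.0.1", "10.0.0.9", "tcp", 51200, 445): 800 * 1024 * 1024,  # bulk copy
    ("10.0.0.2", "10.0.0.9", "tcp", 51201, 80): 24 * 1024,
    ("10.0.0.3", "10.0.0.9", "tcp", 51202, 80): 6 * 1024,
}
elephants, mice = classify_flows(flows)
```

A production detector would also consider sustained throughput and duration rather than raw byte counts alone, but the classification step itself stays this simple once per-flow statistics exist at the edge.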
Hi, I’ve been trying to set up a lobby system for an RTS game. The basic idea is: one player launches the lobby as a host, and other clients join the lobby. Everything had been working fine until I started adding GUI components. Adding the ‘ready’ buttons worked, but when I tried adding SelectionBoxes (from TonegodGUI), strange things started to happen. When I launched one instance of the game and created the lobby as a host, the SelectionLists worked fine. When another instance connected to the game, the response time for all GUI elements in the LobbyAppState was around 5 seconds. When all clients but the host disconnected, the long response time was still there. Only exiting to the menu and entering the lobby once again fixed the response time. Still, if somebody connected, the GUI stopped working properly again.

I’ve used TonegodGUI’s SelectListBox for my lil game’s lobby when it had a multiplayer lobby and had no such problem, so I don’t think it’s related to problems with that control. Could it be that it has an onChange event that runs in a circle? Have you tried disabling the listening while you update the list content and then enabling it again? Very sorry if it’s a stupid idea :D.

What do you mean by “in a circle”?? I create the SelectionBox in the initialize method and override the onChange method. If I understand correctly, it executes only if something changes, and only once. About disabling listening: no, but I believe it will have no effect, because updating the list creates the SelectionBox object (when the client enters the LobbyAppState). There is no code at the moment to broadcast the changes made to other clients, so it’s not the network’s fault.
Another thing I don’t quite understand is this error I get when creating the SelectionBoxes in the LobbyAppState constructor:

Apr 20, 2015 7:03:22 PM com.jme3.app.Application handleError
SEVERE: Uncaught exception thrown in Thread[LWJGL Renderer Thread,5,main]
BUILD SUCCESSFUL (total time: 8 seconds)

If I cut out the addListItem part, it runs correctly and shows the selection boxes. When I add it, it shows a null pointer exception in the first IF statement (“if(Integer.parseInt …)”). But then in the error log it points to the addListItem line. I don’t really know what’s going on!

It’s the first IF statement. I already followed that path earlier. It led me to the freezing part. Main.client appears to be null, as if not yet initialized. But if you look at the link on GitHub, I’m using a similar for loop to init the CheckBoxes. There I have the same IF statement, and it throws no null pointer exception. The only problem is the addListItem, which crashes the game.

EDIT: You can find the whole class LobbyAppState.java in the link in the original post.

Well, it doesn’t just appear to be… it actually is null… you just have to figure out why. Not sure where you are setting it, and there was a lot of code, as I recall. If you are setting it as a connection listener then it might not be set until later… but really, you shouldn’t even be adding your lobby app state until after you are connected… right about the same place you set Main.client, I guess.
import { Model, model } from 'mongoose';
import { Customers } from '.';
import { ICustomerDocument } from './definitions/customers';
import { engageMessageSchema, IEngageMessage, IEngageMessageDocument } from './definitions/engages';

export interface IEngageMessageModel extends Model<IEngageMessageDocument> {
  createEngageMessage(doc: IEngageMessage): Promise<IEngageMessageDocument>;
  updateEngageMessage(_id: string, doc: IEngageMessage): Promise<IEngageMessageDocument>;
  engageMessageSetLive(_id: string): Promise<IEngageMessageDocument>;
  engageMessageSetPause(_id: string): Promise<IEngageMessageDocument>;
  removeEngageMessage(_id: string): void;
  setCustomerIds(_id: string, customers: ICustomerDocument[]): Promise<IEngageMessageDocument>;
  addNewDeliveryReport(_id: string, mailMessageId: string, customerId: string): Promise<IEngageMessageDocument>;
  changeDeliveryReportStatus(headers: IHeaders, status: string): Promise<IEngageMessageDocument>;
  changeCustomer(newCustomerId: string, customerIds: string[]): Promise<IEngageMessageDocument>;
  removeCustomerEngages(customerId: string): void;
  updateStats(engageMessageId: string, stat: string): void;
}

interface IHeaders {
  engageMessageId: string;
  customerId: string;
  mailId: string;
}

export const loadClass = () => {
  class Message {
    /**
     * Create engage message
     */
    public static createEngageMessage(doc: IEngageMessage) {
      return EngageMessages.create({
        ...doc,
        deliveryReports: {},
        createdDate: new Date(),
      });
    }

    /**
     * Update engage message
     */
    public static async updateEngageMessage(_id: string, doc: IEngageMessage) {
      const message = await EngageMessages.findOne({ _id });

      if (message && message.kind === 'manual') {
        throw new Error('Can not update manual message');
      }

      await EngageMessages.updateOne({ _id }, { $set: doc });

      return EngageMessages.findOne({ _id });
    }

    /**
     * Engage message set live
     */
    public static async engageMessageSetLive(_id: string) {
      await EngageMessages.updateOne({ _id }, { $set: { isLive: true, isDraft: false } });

      return EngageMessages.findOne({ _id });
    }

    /**
     * Engage message set pause
     */
    public static async engageMessageSetPause(_id: string) {
      await EngageMessages.updateOne({ _id }, { $set: { isLive: false } });

      return EngageMessages.findOne({ _id });
    }

    /**
     * Remove engage message
     */
    public static async removeEngageMessage(_id: string) {
      const message = await EngageMessages.findOne({ _id });

      if (!message) {
        throw new Error(`Engage message not found with id ${_id}`);
      }

      if (message.kind === 'manual') {
        throw new Error('Can not remove manual message');
      }

      return message.remove();
    }

    /**
     * Save matched customer ids
     */
    public static async setCustomerIds(_id: string, customers: ICustomerDocument[]) {
      await EngageMessages.updateOne({ _id }, { $set: { customerIds: customers.map(customer => customer._id) } });

      return EngageMessages.findOne({ _id });
    }

    /**
     * Add new delivery report
     */
    public static async addNewDeliveryReport(_id: string, mailMessageId: string, customerId: string) {
      await EngageMessages.updateOne(
        { _id },
        {
          $set: {
            [`deliveryReports.${mailMessageId}`]: {
              customerId,
              status: 'pending',
            },
          },
        },
      );

      return EngageMessages.findOne({ _id });
    }

    /**
     * Change delivery report status
     */
    public static async changeDeliveryReportStatus(headers: IHeaders, status: string) {
      const { engageMessageId, mailId, customerId } = headers;

      const customer = await Customers.findOne({ _id: customerId });

      if (!customer) {
        throw new Error('Change Delivery Report Status: Customer not found');
      }

      if (status === 'complaint' || status === 'bounce') {
        await Customers.updateOne({ _id: customer._id }, { $set: { doNotDisturb: 'Yes' } });
      }

      await EngageMessages.updateOne(
        { _id: engageMessageId },
        {
          $set: {
            [`deliveryReports.${mailId}.status`]: status,
          },
        },
      );

      return EngageMessages.findOne({ _id: engageMessageId });
    }

    /**
     * Transfers customers' engage messages to another customer
     */
    public static async changeCustomer(newCustomerId: string, customerIds: string[]) {
      for (const customerId of customerIds) {
        // Updating every engage message of the customer
        await EngageMessages.updateMany(
          { customerIds: { $in: [customerId] } },
          { $push: { customerIds: newCustomerId } },
        );
        await EngageMessages.updateMany({ customerIds: { $in: [customerId] } }, { $pull: { customerIds: customerId } });

        // Updating every engage message the customer participated in
        await EngageMessages.updateMany(
          { messengerReceivedCustomerIds: { $in: [customerId] } },
          { $push: { messengerReceivedCustomerIds: newCustomerId } },
        );
        await EngageMessages.updateMany(
          { messengerReceivedCustomerIds: { $in: [customerId] } },
          { $pull: { messengerReceivedCustomerIds: customerId } },
        );
      }

      return EngageMessages.find({ customerIds: newCustomerId });
    }

    /**
     * Removes customer engages
     */
    public static async removeCustomerEngages(customerId: string) {
      // Removing customer from engage messages
      await EngageMessages.updateMany(
        { messengerReceivedCustomerIds: { $in: [customerId] } },
        { $pull: { messengerReceivedCustomerIds: { $in: [customerId] } } },
      );

      return EngageMessages.updateMany({ customerIds: customerId }, { $pull: { customerIds: customerId } });
    }

    /**
     * Increase engage message stat by 1
     */
    public static async updateStats(engageMessageId: string, stat: string) {
      return EngageMessages.updateOne({ _id: engageMessageId }, { $inc: { [`stats.${stat}`]: 1 } });
    }
  }

  engageMessageSchema.loadClass(Message);

  return engageMessageSchema;
};

loadClass();

// tslint:disable-next-line
const EngageMessages = model<IEngageMessageDocument, IEngageMessageModel>('engage_messages', engageMessageSchema);

export default EngageMessages;
How to port a game to the Nintendo Switch?

The Nintendo Switch was recently released to massive success, and many popular games are already being ported to the new console, including Grand Theft Auto 5 and Skyrim. Some game developers don’t know how to port their game to the Switch due to its complicated architecture, but you can make this process easy with this article! Below are the steps that will show you how to port your game from any platform to the Nintendo Switch, saving both time and money.

1) Decide if the game is worth porting.

This is one of the first important decisions you will make when trying to decide whether to port a game to the Switch. If the game you are trying to port is extremely popular, it might be a good idea to port it to the Nintendo Switch; this can potentially increase your game sales, given how popular the Switch has become. But if you’re developing a new, unpopular game that only has a few sales to its name, then it would not be worth porting your game to the Switch, because you would gain little profit from it.

2) Determine your budget for the porting process.

Porting your game to the Nintendo Switch could cost a lot of money. But if you are aware of what you will spend, you will be able to make a more informed decision on whether it would be worth spending so much on a single game. The most expensive part of developing and porting a game is actually the game engine itself. There are quite a few different engines available for developing games, each with its own pros and cons. You could possibly use your own engine and build it from scratch, but this can be expensive and time-consuming. Try out Unity as an inexpensive option to play around with different possible options before making the final choice. It’s very easy to use, some online retailers even offer a free license year with every purchase, and it also lets you create 2D games.
But if you want to keep running on the same engine and achieve good performance, Unity is totally fine! You can also check out Unreal Engine 4 and see how it might work out for you.

3) Decide if you want to convert the game to 2D or 3D.

Depending on your game's popularity, it might be worth trying to port it again if the first attempt at porting failed. But this means you will have to go through all of the steps again and make another decision on whether it's worth converting the game from 3D to 2D or vice versa. Think about your budget, as well as how small the audience for this game is, and whether you would be willing to put in the time and effort to port the game again.

4) Pick a game engine.

If you decide to convert your game from 3D to 2D, choose the game engine that will work best for you and your team. There are several options available, but Unity is probably the best choice of them all. It's extremely easy to use, especially because it comes with its own visual editor! You can also create 2D games in Unity, which is a good fit if your team is more comfortable with 2D than with 3D development. If your game relies on demanding 3D rendering, then Unreal Engine 4 might be a better option for you.

5) Design the layout for the game's menu.

A lot of games on the Nintendo Switch have a tutorial and a menu, which is important to their success. The tutorials make sure that players learn every button and how to play the game, while the menu allows them to customize their experience. You should try to imitate these designs as much as possible because it will help your game stand out on the Switch. See what other games your target audience has played and try emulating some of their designs. You can also check out the Nintendo Switch's menus and see how they work so you can create something similar in your own game.

6) Edit all of the different buttons used to control your game.
The Nintendo Switch uses Joy-Con controllers, which means you will have to remap all of the buttons in your game. Some of the buttons you need to handle include the Joy-Con thumbsticks, the directional pad, and A/B/X/Y. It's also worth testing all of the buttons on both the left and right Joy-Con, as well as different Joy-Con combinations. You don't want to leave any mistakes behind in your game, because they could become a real problem later.

7) Design a map for your game.

You can design the map for your game in many different ways, so it's worth trying something a little different. Begin by looking at the Nintendo Switch's menu system to see how it works. You can also try emulating the menu design of Nintendo's games to see how it works in your game. If you don't know what map you need, try going with a tower defense game and see how this type of map works on the Nintendo Switch! The Nintendo Switch also supports a couch mode, which means you could make a tower defense or puzzle game that has players sitting next to each other. This is a lot of fun and could give your game an extra dimension no matter what it is.

8) Test the game on multiple devices.

You should test your game on multiple devices, ensuring that it is compatible with the Nintendo Switch and can run smoothly. There are many different types of sensors in the Nintendo Switch, along with different wireless connection standards. So be sure you're testing your game with these different configurations as much as possible to avoid any major issues later. You should also consider converting your game from 2D to 3D or vice versa if you don't already support both options.

9) Make a patch if you need one.

You won't really need to make a patch if you are using a game engine that supports this and everything works fine, but it's a good idea to do so anyway, because it helps fix the problems that might have been introduced during development.
If the game crashes or has errors, or your game screen on the Nintendo Switch isn't displaying correctly, patching could help fix these issues.

10) Release your game on the Nintendo eShop.

To release your game on the Nintendo eShop, you must review some requirements. First and easiest, you will have to create an account with Nintendo. Once approved, you can log in and start submitting your game! You will also have to pick a launch date and set a price for your game. You should also make sure that your game is compatible with the Nintendo eShop, and use keywords related to the game in the description if possible.

In conclusion, the Nintendo Switch is a very popular console, especially because it can be played in various modes. So if you think your game would be well received on this new platform, you should try to port it so that it can reach all potential customers. It's also important to note that you can make and play 2D games on the Nintendo Switch, which is perfect for everyone who doesn't have much experience with 3D game development. But you should also take into account some of the more technical issues when porting your game. The Nintendo Switch uses Joy-Con controllers instead of conventional joysticks, which can make building and playing a camera-sensitive game more difficult.

Yuriy Denisyuk is Game Production Lead at Pingle Studio. He's responsible for successfully managing the game production pipeline. Yuriy is that lucky person who plays the best games for work in order to keep up with trends and create new ones. In his free time he likes writing and reading manga, fantasy, and professional literature.
OPCFW_CODE
In my charts I show up the loading animation via , which works fine. The problem is that the animation stops (the progress gif is still shown but does not move) as soon as data is delivered from the server via JSON to the client and data processing starts in the browser, which takes a few seconds. Is there a way to show an animation which does not stop while the browser renders the charts? I guess this behavior is browser-related and not Kendo UI specific.

9 Answers, 1 is accepted

This behavior can be observed when huge amounts of data are processed, since rendering a lot of points on the Chart is a very CPU-intensive task. I also tried to reproduce this behavior here, but the progress animation stopped and immediately after that disappeared, then the data was displayed. Could you please check the example and let me know if I am doing something differently?

I added an alert to your example to see when data loading is finished and data processing in the browser starts. As far as I can see, loading takes a long time (while the animation works), but data processing in the browser in your example is done very quickly (less than a second on my machine). In my case the processing (rendering the chart in the browser) takes a few seconds. To reproduce this behavior, please use 3 series with 1000 data points each on a stacked area chart (that's what I'm doing). You'll see that while data is loading the animation works (in my case data is loaded very quickly), and it stops as soon as rendering of the chart starts.

Besides the answer to my initial question, I've got another question on this topic: is it possible to interrupt the rendering (execution of the appropriate JS code) of the chart? The idea is to let the user decide, after a certain period of time, whether he would like to continue rendering the chart or not. If I increase the amount of data that is passed to the chart from the server (all aggregating etc. is already done on the server), I'll run into the browser's "long running script" alert, which really should not show up, since it looks like an error message or something like that... Investigating this issue I came up with the following questions/suggestions:

- Would it be possible (for Telerik) to do the rendering of the chart in a JS web worker (JS background thread)?
- Is the rendering code cut into small pieces (a common solution to avoid the browser message mentioned above) using setTimeout()? Would this be possible?
- Wouldn't 1.) or 2.) make the browser more responsive during chart rendering? E.g.: keep the loading animation moving while the chart is rendered; let the user decide to stop the script if it takes too long; suppress the browser message caused by a long running script.

Thanks in advance

Thank you for your feedback! At this point there is no mechanism that could be used to interrupt the rendering; however, I would recommend you to submit these suggestions at the Kendo UI UserVoice portal, so we and other members of the community can evaluate, comment on, and vote for them. Highly voted suggestions are often implemented in future Kendo UI releases.

Did you get some resolution to that issue? If yes, can you please share it? I am facing the same issue: I need the loading animation to keep moving while the chart is rendered completely, as in my case the number of data points is also too large, which causes the browser to take some time to render the chart.

Sorry, we did not find any solution to this problem except not showing the loading animation... As far as I know, Telerik did not implement any of the suggested features (posted 17 January). Maybe you'd like to join our suggestion at (add votes).

It's been a while, but I think I've found a workaround. I discovered that in my case the labels on the category axis slowed rendering of the chart (by 10 times); that means rendering a chart with 10,000 points without labels is done (on my machine) within 2 seconds. So my suggestion is to try to render your chart(s) without labels, and if it really speeds things up, use the step property of the labels (depending on the amount of data I now use values of 6, 12, up to 720) to get better rendering performance and therefore avoid the frozen loading gif. Hope that helps
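The thread's second suggestion (cutting the work into small pieces so the browser stays responsive) is a general pattern rather than anything Kendo-specific. Below is a minimal, language-neutral sketch of the idea in Python; the doubling step is a hypothetical stand-in for the per-point chart-rendering work, and in a browser the next chunk would be scheduled with setTimeout() instead of a plain loop:

```python
# Cooperative chunking: instead of processing every data point in one
# long blocking call, split the work into fixed-size chunks so that a
# UI event loop (or a progress animation) gets a chance to run between
# chunks. In a browser the next chunk would be scheduled via
# setTimeout(); here a plain loop stands in for that scheduler.
def chunked(points, chunk_size=250):
    for start in range(0, len(points), chunk_size):
        yield points[start:start + chunk_size]

points = list(range(1000))
processed = []
for chunk in chunked(points, chunk_size=250):
    # hypothetical per-point work standing in for chart rendering
    processed.extend(p * 2 for p in chunk)

print(len(processed))  # 1000
```

The chunk size is the tuning knob: small enough that one chunk never trips the browser's long-running-script threshold, large enough that scheduling overhead stays negligible.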
OPCFW_CODE
import numpy as np

# Pre:  "iterable" is a list type, "value" is an iterable whose
#       elements have comparison operators, "key" is a function to use
#       for sorting
# Post: A binary search over the iterable is performed and "value" is
#       inserted in a sorted order left of anything with equal key,
#       "key" defaults to sorting by the last element of "value"
def insert_sorted(iterable, value, key=lambda i: i[-1]):
    low = 0
    high = len(iterable)
    index = (low + high) // 2
    while high != low:
        if key(iterable[index]) >= key(value):
            high = index
        elif key(iterable[index]) < key(value):
            low = index + 1
        index = (low + high) // 2
    iterable.insert(index, value)


class Rect:
    # Pre:  "center" is a numpy array that represents the center of
    #       this hyper rectangle,
    #       "sides" is a numpy array that represents the lengths of
    #       each of the sides of this hyper rectangle (all positive)
    # Post: Member data stored for use
    def __init__(self, center, sides, obj_value):
        self.center = center
        self.sides = sides
        self.diameter = max(sides)
        self.obj = obj_value

    # Post: Returns a numpy array with all new center points along the
    #       primary axis, generated by shifting one-third of the
    #       longest side length on each axis of that length, also
    #       sorted in lexicographical order
    def divide(self):
        shift = self.diameter / 3.0
        mask = np.where(self.sides >= self.diameter)
        ident = np.identity(self.center.shape[0])[mask]
        lower = self.center - ident*shift
        upper = self.center + ident*shift
        return (lower, upper)


# Pre:  "rects" is a dictionary of {length: sorted list of rectangles}
#       where the sorted list is sorted by objective function value,
#       "to_beat" is the objective value to beat
# Post: Figures out which rectangles are potentially optimal by first
#       getting the best rectangle of each diameter, then
#       identifying those of that set that are along the convex hull
#       of the plot of diameter versus objective function value
def get_potential_best(rects, to_beat):
    # Get the list of (diameter, best objective function value) and
    # sort the list (by default sorts by diameter, which are all unique)
    to_divide = sorted((d, rects[d][0].obj) for d in rects)
    # Add the objective value that needs to be beaten to the front
    to_divide = [(0, to_beat)] + to_divide
    # Use a loop to find the convex hull
    slopes = []
    # Continue looping until we have made it to the end of the list
    while len(slopes) + 1 < len(to_divide):
        curr = len(slopes) + 1
        # Get the slope from previous point to the current point
        slope = (to_divide[curr][1] - to_divide[curr-1][1]) / \
                (to_divide[curr][0] - to_divide[curr-1][0])
        # If no points are in the hull yet, or slope is increasing
        if len(slopes) == 0 or slope > slopes[-1]:
            # Add this point to the hull and increment "current" counter
            slopes.append(slope)
        else:
            # Else the previous point wasn't actually on the hull
            # Remove that point from potential hull points and its slope
            to_divide.pop(curr-1)
            slopes.pop(-1)
    # Remove the extra starting point added for cutoff
    to_divide.pop(0)
    # Return the list of increasing diameters of potentially optimal rects
    return [diameter for (diameter, obj) in to_divide]


# Pre:  "objective" is a function that takes a numpy array and returns
#       an object that has a comparison operator,
#       "halt" is a function that takes no parameters and returns
#       "False" when the algorithm should continue and "True" when
#       it should halt,
#       "bounds" is a list of tuples (lower, upper) for each parameter,
#       "solution" an initial solution that is disregarded,
#       "args" are additional arguments that need to be passed to
#       "objective".
# Post: Returns an (objective value, solution) tuple, best achieved
def DiRect(objective, halt, bounds, _=None, min_improvement=0.0001,
           args=tuple()):
    # Extract range information for each dimension
    ranges = np.array([upper-lower for (lower, upper) in bounds])
    lowers = np.array([lower for (lower, _) in bounds])
    # Define a new objective function in terms of the unit hypercube
    obj_func = lambda s: objective((s*ranges) + lowers, *args)
    # Initialize the first rectangle (the entire area)
    center = np.array([0.5] * len(bounds))
    sides = np.array([1.0] * len(bounds))
    r = Rect(center, sides, obj_func(center))
    # Initialize the holder for all rectangles
    rects = {r.diameter: [r]}
    best_rect = r
    # DiRect primary search loop of dividing potentially optimal rectangles
    while not halt():
        # Establish a starting point based on the best function value
        to_beat = best_rect.obj - abs(best_rect.obj * min_improvement)
        # Get the list of rectangles that are potentially optimal
        # (best obj from each diameter, some excluded w/ convex hull)
        to_divide = get_potential_best(rects, to_beat)
        # Divide all of the potentially optimal rectangles
        for diam in to_divide:
            # Get all rectangles at this diameter with equal objective value
            to_split = [rects[diam].pop(0)]
            obj = to_split[0].obj
            while len(rects[diam]) > 0 and rects[diam][0].obj <= obj:
                to_split.append(rects[diam].pop(0))
            # Remove lengths that have no more rectangles
            if len(rects[diam]) == 0:
                rects.pop(diam)
            # Order them lexicographically by center point location
            to_split.sort(key=lambda i: list(i.center))
            # Cycle through these rectangles and divide them
            for r in to_split:
                diameter = r.diameter
                # Get the sample points determined by division
                lower, upper = r.divide()
                # Calculate the objective function value for all of these
                obj_values = [(i, obj_func(lower[i]), obj_func(upper[i]))
                              for i in range(lower.shape[0])]
                # Sort these pairs of lower and upper sample points by
                # their minimum objective function values
                obj_values.sort(key=lambda i: min(i[1], i[2]))
                # Cycle through new centers and objective values
                # starting with the pair of (lower, upper) with best value
                for index, l_obj, u_obj in obj_values:
                    # Find out which dimension the "delta" is on and
                    # its magnitude
                    dimension = np.argmax(r.center - lower[index])
                    shift = r.center[dimension] - lower[index][dimension]
                    # Adjust the side length of the center point to be
                    # that shift (since it is 1/3 the original length)
                    r.sides[dimension] = shift
                    # Initialize the two new rectangles
                    # (same dimensions as the center point)
                    low_rect = Rect(lower[index], r.sides.copy(), l_obj)
                    up_rect = Rect(upper[index], r.sides.copy(), u_obj)
                    # The diameters are all the same, but verify that
                    # there is a place in the master list for these new rects
                    if low_rect.diameter not in rects:
                        rects[low_rect.diameter] = []
                    # Insert the new rectangles into the master list
                    # sorted by objective function value
                    insert_sorted(rects[low_rect.diameter], low_rect,
                                  key=lambda i: i.obj)
                    insert_sorted(rects[up_rect.diameter], up_rect,
                                  key=lambda i: i.obj)
                    # Update the "best rect" seen so far
                    if low_rect.obj < best_rect.obj:
                        # Copy in order to avoid side effects of division
                        best_rect = Rect(low_rect.center,
                                         low_rect.sides.copy(),
                                         low_rect.obj)
                    if up_rect.obj < best_rect.obj:
                        best_rect = Rect(up_rect.center,
                                         up_rect.sides.copy(),
                                         up_rect.obj)
                # Update the diameter of the center point
                r.diameter = max(r.sides)
                # Finally, re-insert the divided center point
                insert_sorted(rects[r.diameter], r, key=lambda i: i.obj)
        # Check halting condition, return best (obj, point)
        if halt():
            return (best_rect.obj, best_rect.center*ranges + lowers)
    # Primary while loop terminated, return best (obj, point)
    return (best_rect.obj, best_rect.center*ranges + lowers)
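The trisection step inside Rect.divide can be checked in isolation. A small standalone sketch of the same numpy operations on the unit square, where both sides are tied for longest:

```python
import numpy as np

# Trisect the unit square around its center: for every side that is
# tied for the longest, shift the center by one third of that length
# in both directions along the corresponding axis.
center = np.array([0.5, 0.5])
sides = np.array([1.0, 1.0])
shift = sides.max() / 3.0
mask = np.where(sides >= sides.max())
ident = np.identity(center.shape[0])[mask]
lower = center - ident * shift   # [[1/6, 1/2], [1/2, 1/6]]
upper = center + ident * shift   # [[5/6, 1/2], [1/2, 5/6]]
```

Each row of `lower`/`upper` is a new sample point displaced along exactly one of the longest axes, which is exactly what DiRect evaluates before splitting a rectangle.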
STACK_EDU
PSH::PFE - Programmer's File Editor plugin for PSH.pm

use PSH;
use PSH::PFE;
...
PSH::prompt;
psh$ ^OM File::Find
psh$ ^QA \$(\w+) Replace: \U\$$1

=head1 DESCRIPTION

This module installs another special character into PSH that allows you to easily send commands to PFE (Programmer's File Editor): to search & replace, open perl modules and other files, etc. This is only an experimental version, so do not expect too much. It was intended especially to show you what can be done. The module doesn't really provide any functions to your script; it was meant to be used in pure psh.

You add the commands defined here to PSH.pm by either

use PSH::PFE;

or

use PSH::PFE 'character';

This adds the character (by default '^') to PSH's specials and ties function &PSH::PFE to it. This function simply gets from PSH the rest of the line after the special character you specified and splits it on the first whitespace. It then uses the first part as the name of a function in package PSH::PFE and passes the rest to it as its only argument. The function name is uppercased before usage! The case of commands is unimportant!

psh$ ^QA regexp Replace: replacement string

This command searches the rest of the active document after the cursor for the regexp and replaces it with the specified string. It's implemented as s/regexp/replacement/g.

psh$ ^QAi regexp Replace: replacement string

This command searches the rest of the active document after the cursor for the regexp and replaces it with the specified string. The match is case insensitive. It's implemented as s/regexp/replacement/gi.

psh$ ^> $variable
psh$ ^> @variable
psh$ ^> code

Sets the variable to either the selected text or the current line. If the variable is a scalar, it's set to the whole text. If it is an array, it gets the text split into lines. If the parameter to the ^> command starts with neither $ nor @, it's taken to be a function name.

^> $foo => eval "\$foo = <<'*EnD_DnE*';\n$text\n*EnD_DnE*\n"
^> @foo => eval "\@foo = split /\n/s, <<'*EnD_DnE*';\n$text\n*EnD_DnE*\n"
^> foo => eval "foo <<'*EnD_DnE*';\n$text\n*EnD_DnE*\n"

psh$ ^< $variable
psh$ ^< code

Evaluates the code and sends the result to PFE at the current cursor position.
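The dispatch rule described above (split the rest of the line on the first whitespace, uppercase the first token, call the function of that name with the remainder) is easy to model outside Perl. A hypothetical Python sketch of the same mechanism; the `om` command here is a stand-in for illustration, not part of the module:

```python
# Model of the PSH::PFE dispatch rule: the text after the special
# character is split on the first whitespace, the first part is
# uppercased and used as a command name, and the rest is passed as
# the command's only argument.
COMMANDS = {}

def command(fn):
    # register under the uppercased name ("uppercased before usage")
    COMMANDS[fn.__name__.upper()] = fn
    return fn

@command
def om(arg):
    # hypothetical stand-in for the ^OM (open module) command
    return "open module " + arg

def dispatch(line):
    name, _, rest = line.partition(" ")
    return COMMANDS[name.upper()](rest)

print(dispatch("Om File::Find"))  # open module File::Find
```

Because lookup always goes through the uppercased name, `^om`, `^Om`, and `^OM` all resolve to the same handler, which is what makes the case of commands unimportant.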
OPCFW_CODE
There are a variety of effective and interesting methods to help you synthesise and make sense of all the data you've gathered during your research. In design thinking, the first phase is the research or empathise phase; then you move on to the define phase, in which you utilise several of the available methods.

Define synthesise: synonyms, pronunciation, translation. Past participle: synthesised; gerund: synthesising; imperative: synthesise. Present: I synthesise, you synthesise, he/she/it synthesises, we synthesise.

In today's post I'm sharing information I gave to my students about the differences between summarizing and synthesizing information, in terms of reading strategies and research.

Synthesis or synthesize may also refer to topics in science (chemistry and biochemistry, physics, electronics, speech and sound creation), the humanities, and other uses. In chemistry, chemical synthesis is the execution of chemical reactions to form a more complex compound.

Define synthesis: the composition or combination of parts or elements so as to form a whole.

1) State the goals of this tutorial. 2) What does it mean to synthesize? 3) Why synthesizing is important. 4) How to, and how not to, synthesize. 5) A detailed example. A synthesis gives enough information about a study for the reader to imagine it, but really highlights what is important about it; anything less is boring and confusing for the reader.

Dictionary definition of synthesis: reasoning from the general to the particular (or from cause to effect); the combination of ideas into a complex whole; the process of producing a chemical compound. Derivational morphology: synthesize, synthesise, deduce, infer, deduct, derive.

Define synthesise and get synonyms: synthesise meaning, pronunciation and more, by Macmillan Dictionary.

Definition of synthesize written for English language learners, from the Merriam-Webster Learner's Dictionary, with audio pronunciations, usage examples, and count/noncount noun labels. Synthesize (verb); also British synthesise /ˈsɪnθəˌsaɪz/; synthesizes, synthesized, synthesizing.

Verb (third-person singular simple present synthesises, present participle synthesising, simple past and past participle synthesised): non-Oxford British English standard spelling of synthesize. English Wiktionary, available under CC-BY-SA license.

Synthesise: download a summary of the tasks, options, and approaches associated with synthesising data from one or more evaluations. Bringing together data into an overall conclusion and judgement is important for individual evaluations and also when summarising evidence from multiple evaluations.

Synthesizing takes the process of summarizing one step further. Instead of just restating the important points from a text, synthesizing involves combining ideas and allowing an evolving understanding of the text. One book defines synthesizing as "[creating] original insights, perspectives, and understandings by reflecting on" text.

Synthesise definition: to form (a material or abstract entity) by combining parts or elements (opposed to analyze): to synthesize a statement.

Synthesis: meaning in Hindi, with audio pronunciation, synonyms and definitions of synthesis in Hindi and English.
OPCFW_CODE
Download the best graphical user interface software for version management with Git on Windows, macOS or Linux …

Git is the most popular tool for managing file and application versions. Git was developed primarily to handle source code for open source software. GitHub is now a widely used service among open source contributors and independent developers. Although Git is primarily a CUI-based application, a GUI can also be configured to work with Git. For new users, a graphical user interface is a great way to master Git operations. If you are also one of them, looking for a GUI solution for Git, I have a few options listed below.

What is a Git GUI?

The idea of Git first appeared when the Linux kernel was developed as open source. Using Git, you can organize the versions of your files. In any development project, all the different versions are stored in a distributed way among all the contributors of the project. When a user changes something in the source code or develops something new, the result is saved in a different location, under the same tree structure. The software architect or project manager decides which version should be included in the final release. This is where Git comes into play: it makes it easy to manage, edit, and version-control your files. Git normally works through the command line interface. But for a better user experience and less complexity, graphical interfaces have emerged. Using a GUI, it becomes easier to navigate and perform Git operations. A simple graphical interface can simplify and speed up Git operations on the user side.

Benefits of the Git GUI

When switching from CUI to GUI, there are many benefits to be expected. Some of them are listed below.
- You get to grips with the Git UI very quickly.
- Speeds up user operations.
- Looks nice and modern.
- Training subordinates becomes easier.
- Easy drag and drop.
- You can focus on the main job, instead of neatly typing.
- Get rid of CUI phobia.

The 5 best Git GUIs for Windows, Linux, and macOS

By comparing many Git GUIs, you can easily spot which one is best for you by looking at the features provided. Git GUIs exist to make operations on Git easier and more convenient. So if a GUI does not have enough features or an easy-to-use interface, then what is the point of such an application? Additionally, you may spot the licensing and integration differences for several online repositories such as GitHub or Bitbucket. Take a walk through the discussion below, where I highlight some of the important features of the best Git GUIs available on the market.

Sourcetree – GUI for Git

Sourcetree has a powerful team of developers behind it, and they provide users with extensive support for their standard Git GUI solution. But the Sourcetree UI is full of features, so it takes a while to master the app.
- The package is free and available on both Windows and Mac.
- Commit, Push, Pull or Merge: all commands can be used with just one click.
- You can connect other repositories like Bitbucket, Stash, Microsoft TFS, etc. with Git.
- You can manage your Git repositories from a single place, whether they are hosted or local.
- For extended support, functions such as patch handling, rebase, reflog and cherry-picking can be used.

Tower 2.5 – A simple and powerful Git GUI

Just like Sourcetree, Tower also comes with extensive functionality. Tower is well organized and simpler if you compare it to Sourcetree. The latest stable version of Tower is 2.5, which is popular with users. But Tower is not available for free, unlike Sourcetree, which is a downside. Most of the time, Tower is chosen by users because of the simplicity of its user interface.
- You can use the trial version before purchasing.
- Supports Windows, Mac and Linux platforms.
- Has an Undo option with which you can undo everything in a project with just one click.
- Drag and drop function available.
- Cloning and reporting is as easy as a click.
- You can automate boring things using third-party scripts.
- Multi-window support is included.
- Also powered by a built-in diff viewer.
- You can create and apply patches to files.

GitKraken – GUI + CLI

GitKraken is available for Windows, Linux, and Mac, and can be linked to multiple online repositories. GitKraken offers an intuitive user interface with a large number of features. Compared to Sourcetree, GitKraken is not free for personal users. You can use the 7-day free period, but then you have to go for a subscription, which ranges from $5 to $9 per month.
- GitKraken has a built-in code editor.
- Unlike Tower, GitKraken is free for open source developers.
- Commit history visualization features.
- A set of features is provided to help avoid merge conflicts.
- One-click undo function is included.
- Team collaboration and deep Git integration features are fulfilled.

GitEye – a client for Git

GitEye is a very simple yet efficient GUI for Git. If you have little experience with the Git CUI and are looking for a free Git GUI, GitEye is for you. The GitEye GUI is available on Windows, OSX and Linux platforms and is supported on 32- and 64-bit systems.
- Supports multiple repositories including TeamForge, CloudForge, and GitHub.
- Although GitEye is a proprietary application of CollabNet, it is free to use.
- Team collaboration features are included.
- Some of the important benefits are supply chain management and distributed version control.
- Defect tracking, agile planning, code reviews, and authoring services made easy with central visibility features.

Gitbox for macOS

If you are a Git CUI user and want to switch to a GUI, Gitbox is a good option for you. Using Gitbox, you can see your CLI workflow as a clear picture. Gitbox is very easy to use for experienced Git users, but it is a paid app for Mac users.
- Stages, branches, commits, and submodules can all be monitored with the enhanced performance update manager feature.
- Gitbox can handle cloning URLs and submodule paths with spaces.
- Gitbox can automatically resolve several error messages.
- Auto-recovery from remote servers and a visual diff between branches help the user track changes.
- Drag and drop and quick preview features were added in the latest update.
- Instant synchronization with the file system and smooth integration with Terminal are some of the important features of Gitbox.

Using the command line interface, you can take advantage of all the features of Git for free, while a GUI is applied to increase production speed. For a deeper understanding, in many cases you will be forced to use the CUI over and over again. Thus, without learning the basics of the Git CUI, it is not possible to use Git professionally just by relying on GUI applications. For faster processing, team collaboration, and increased operational speed, Git GUIs may be a viable choice.
OPCFW_CODE
Knowledge can only be enhanced, and it is thoroughly understood only when put into practice. Did you know everything in Linux is a file? Yes, including network connections, processes, everything! How does it all work? Navigating the Linux directory structure helps you understand what makes it so flexible and easy to access, just like a BFF maintaining your ex list :P

Note: While Linux has a set of standards, you can put any file anywhere and make it work.

Some basic commands that will come in handy while exploring Linux directories:

tree: Install it with sudo apt install tree. It helps to visualise how directories, sub-directories and files are placed.
cd: to navigate around
ls: to list folders/files in the directory

🐼/bin: This is where all the basic magic happens. It contains the binaries essential to run programs and applications. Well, what magic was I talking about? Commands like ls, pwd, cat, chmod, etc. All these cute executable binaries are present here. As I write, I wonder what happens if chmod or such a binary is deleted?

🐼/boot: I was curious about how the system gets started, but I will keep that for another post. This folder contains the files needed to start the system: the kernel image, the initial RAM disk, and the boot loader configuration. Don't even try to mess with any files here; you may not be able to boot Linux, and it becomes difficult to repair.

🐼/dev: It contains the device files. They are generated either at boot time or later, for devices such as a pen drive, printer, or USB disk. Device entries are created here on the fly. If you have often seen /dev/null, it is high time to know about it: /dev/null is a void where everything you send ceases to exist. You can send all unwanted logs from every application to /dev/null.

🐼/etc: Popularly known as "et cetera". It contains most of the system configuration: users and their passwords (/etc/passwd), the name of your system, the names of the machines on your network, the partitions of the hard disk, and so on.

🐼/home: Home is where every user lives. Each has their own little room, like /home/ubuntu, /home/ashwini or /home/cakejar. So if you are creating a user, it is good practice to create a home directory for it, like /home/user, and maintain user-related stuff there.

🐼/lib: It's a special one. It contains all the important libraries; applications refer to code from here. These can be dependencies, third-party libraries, etc. It also contains the kernel modules that drive webcams, video cards, wifi, graphics cards, and so on.

🐼/opt: Third-party software lives here. Try installing an open-source package and see what appears here. It is accessible by all users.

🐼/proc: This folder is a beast in itself and deserves its own post. Briefly, it contains everything related to running processes in Linux: files, pointers, file descriptors, CPU usage, limits and a lot more. Do not mess with this directory, but I heavily recommend digging through it.

🐼/root: Home directory of the root user. Do not mess with the superuser's directory.

🐼/run: It holds files that applications create during runtime, such as sockets, pid files, etc. Do not mess with this directory.

🐼/sbin: Contains binaries that can only be run by root/sudo users: binaries like fdisk, route, ifconfig, init, mkfs and so on.

🐼/usr: Sometimes expanded as "User Shared Resources". Files that live here are shared by applications and services. This can contain applications, libraries, patches, scripts, etc.

🐼/sys: A virtual directory, just like /proc and /dev, containing information about the devices connected to the computer or known to the Linux kernel.

🐼/tmp: All temporary files are stored here, and they get deleted on reboot. You can store data here that you don't want to keep a record of.

🐼/var: All changeable files are stored here, hence the name (variable). For example, /var/log registers the events that happen on the system: kernel logs, firewall logs, printer jobs in the queue, failed database logs, etc.

🐼/media: used to mount removable media such as CDs and floppy disks.

🐼/mnt: typically used for mounting external disks like pen drives.

This is not an exhaustive list; I am ignoring a few less important ones. Some of the directories, like media, mnt and srv, are no longer widely used.
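The directories above can be poked at safely from any shell. A small exploration sketch (assuming a typical Linux layout; exact contents vary per distribution):

```shell
# The essential binaries described above live in /bin
ls /bin | head -n 5

# /dev/null is the "void": anything written to it simply disappears
echo "nobody will ever read this" > /dev/null

# /proc exposes running-kernel state as plain files — everything is a file
head -n 1 /proc/version 2>/dev/null || echo "(not running on Linux)"
```

Redirecting noisy output with `> /dev/null`, as in the second line, is the everyday use of the "void" described above.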
User-Centered Design (UCD) is an approach to product development that focuses on creating user-friendly experiences by understanding and incorporating the needs, preferences, and limitations of the target users throughout the entire design process. It involves gathering feedback from users and using that feedback to inform design decisions. UCD aims to optimize user satisfaction, efficiency, and usability by putting the user at the center of the design process. Examples of User-Centered Design User-Centered Design can be seen in various products and services that prioritize the needs of the users. For instance, consider a mobile banking application that allows users to easily navigate through different functions, provides clear instructions, and offers intuitive features based on user research and feedback. Another example is a website that adjusts its layout and content based on the user's device, making it accessible and easy to use across different platforms. Importance of User-Centered Design User-Centered Design is crucial for creating products and services that meet the expectations and requirements of the target users. By involving users from the early stages of the design process, it helps to: - Enhance user satisfaction: By understanding users' needs, preferences, and pain points, designers can create solutions that address their specific requirements, resulting in higher user satisfaction. - Improve usability: User-Centered Design emphasizes usability testing and feedback loops, enabling designers to identify and rectify usability issues before the final product is launched. - Increase efficiency: When products and services are designed with users in mind, they are more likely to be efficient and intuitive, reducing the learning curve and enhancing user productivity. 
- Reduce costs: Incorporating user feedback early on in the design process can help identify potential problems or areas of improvement, reducing the need for costly redesigns or fixes after the product launch. How to Use User-Centered Design - Understand your users: Conduct user research to gain insights into the target audience's needs, motivations, and behaviors. Identify user personas and create empathy maps to understand their goals and pain points. - Involve users throughout the process: Regularly engage users in usability testing, interviews, and feedback sessions to validate and refine design concepts. Iteratively incorporate their feedback into the design. - Create intuitive interfaces: Design interfaces that are easy to navigate, with clear information architecture and visual hierarchy. Use familiar conventions and minimize cognitive load. - Prioritize accessibility: Ensure that your design is accessible to users with disabilities, considering factors such as color contrast, screen reader compatibility, and keyboard navigation. - Test and iterate: Continuously test your design with real users and gather feedback. Use the insights gained to make iterative improvements and refine the user experience. Useful Tips for User-Centered Design - Conduct user testing early and often to identify usability issues and make necessary improvements. - Use prototyping tools to quickly create interactive mockups for testing and feedback. - Use user-friendly language and avoid jargon to make your design more approachable. - Consider the context of use, such as the user's environment and the device they are using, when designing the user experience. - Collaborate with cross-functional teams, including designers, developers, and stakeholders, to ensure a holistic and user-centered approach.
Here I discuss four ways to improve your password choices. This is the best advice I can give to help people choose stronger passwords.

A bit about poor password choice: Often users choose simple passwords, and use the same password for all their services, both at work and at home. Both are bad ideas. Simple passwords are easy and fast to crack with widely available automated tools. I know from experience, as I have cracked many passwords recently under test conditions, many in under a second, and certainly most in under 5 minutes. Using the same password in multiple places means that once a password is compromised on one system, say your home computer, an attacker would then be able to access your work login, your email account, Twitter, Facebook, Youtube, PayPal, eBay, Amazon, and any other online services you use with the same password. Attackers would not be targeting you personally. Modern malware can automate many hacking techniques and attack many thousands of users at once, globally. Malicious hacking is big business, and there is a whole criminal subculture geared up to make money from stolen credentials. Many people store a lot of information online these days. Imagine if you lost access to your personal accounts and profiles, or worse: someone copied, added or removed information, or sent messages to all your friends containing clever scams or viruses, or transferred some of your money through a series of stolen bank accounts.

So what can users do to choose better passwords? Having cracked many passwords with a variety of different tools and techniques, here are the four best pieces of advice I can offer.

- Password length is the most important factor. Try to use a "pass-phrase", i.e. a group of words that is longer than 15 characters.
- This is a sufficient length to make brute-force and hybrid attacks unfeasible with today's computing power.
- Using 15 characters or more also means that the password cannot be stored as an LM hash. (This is one of the weakest forms of password storage, which Microsoft still includes on Windows systems today for historical reasons, and it is a really big weakness.)
- Add some numbers and symbols. Though this does not affect password strength as much as length does, it can help make your password more unique, by using more of the available key-space.
- Here are some examples
- Use different passwords for the accounts you have. It may be impractical to use a different password for every login to every system, but try to use unique passwords for your core services, such as:
- Home computer login
- Email access
- Bank accounts
- Work login
- Social networking
- Test your choices, to see if the types of password you are choosing are strong.
- Try the Microsoft password strength checker, to get an idea of what makes a password stronger.
- Try putting your current password in. If it comes up "Weak", you're not doing very well with your password choices currently.
- You should be aiming for at least the strong category for passwords you use for important accounts.
- See if you can pick a memorable password that meets the "BEST" category. Try several attempts so you will know what is important in choosing a strong password. Then choose similar passwords for your own use.

PS: I use several password dictionaries of commonly used passwords, totalling over 200 million entries. I find these very effective for password cracking, before attempting hybrid, rainbow-table, and brute-force attacks. They take only a few seconds to run for hash-cracking. Most of these dictionaries are available on the web if you look, but if you are interested in me providing some copies, let me know in the feedback section.
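The length advice above can be sanity-checked with a little arithmetic. The sketch below assumes 95 printable ASCII characters per position and an illustrative (not measured) attacker rate of ten billion guesses per second:

```shell
# Guesses needed for an exhaustive search at different password lengths,
# and the rough time to run that search at an assumed cracking rate.
awk 'BEGIN {
  rate = 1e10              # guesses per second (assumed, for illustration)
  secs_per_year = 31557600
  for (len = 8; len <= 16; len += 4) {
    guesses = 95 ^ len
    years = guesses / rate / secs_per_year
    printf "length %2d: %.2e guesses, ~%.2e years to exhaust\n", len, guesses, years
  }
}'
```

Each extra character multiplies the search space by 95, which is why going from 8 to 15+ characters moves brute force from feasible to unfeasible, exactly as the advice above states.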
rbenv - is it risky to rely on builtin shell commands such as "set", since builtins could be implemented differently by different shells? I use zsh as my shell, and rbenv to manage my Ruby versions. I am diving into rbenv's internals to learn how it works. I type vim `which bundle` into my terminal and I see the following code:

1 #!/usr/bin/env bash
2 set -e
...

After the shebang, the first line of code is set -e. Wanting to learn more about set, I type man set into my terminal, but I get: BUILTIN(1) General Commands Manual in response. From this I determine that set is a builtin command. From this StackOverflow answer, I interpret that to mean that builtins such as set could be implemented differently depending on which shell the user is employing (e.g. bash vs zsh vs fish). To quote the answer: Every Unix shell has at least some builtin commands. These builtin commands are part of the shell, and are implemented as part of the shell's source code... Different shells have different builtins, though there will be a whole lot of overlap in the basic set. I'm assuming that the set command is part of this "basic set" that the answer refers to, right? That would make the most sense to me, since the authors of rbenv are quite experienced and I would think they're unlikely to rely on a command which could behave differently depending on the user's shell. My question is two-fold: If my above assumption is correct, where is that "basic set" of commands defined? Is it some sort of officially-recognized, industry-wide standard that all shells must adhere to? And if so, what is that industry-wide standard known as, so I can Google it and learn more? If my assumption is wrong, and if the set shell command is not part of some broad standard, does that mean that different shells could implement set in different ways, possibly producing different behavior?
And wouldn't it therefore be dangerous for a widely-used program like rbenv to rely on such a command, since it could behave differently on different users' machines? As long as you stick to features from POSIX sh, you're usually pretty safe. @Shawn OK, so I take it that POSIX is the name of the industry-wide standard that I mention in question #1 above, correct? And I see from this link that set is one of the "special built-in commands" mentioned in the POSIX standard. So I'm assuming this is the reason why it's safe to rely on set in the manner I describe in my question. Let me know if I'm mistaken with any of these assumptions.
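The behavior the question is asking about is easy to observe directly. A short sketch of what POSIX `set -e` actually changes (run in `sh`, so it relies only on the standardized behavior, not on any one shell's extensions):

```shell
# Without set -e, a script carries on past a failing command
sh -c 'false; echo "without set -e: still running"'

# With set -e, the shell aborts at the first failing command,
# so the echo below is never reached
if sh -c 'set -e; false; echo "with set -e: never printed"'; then
    echo "subshell succeeded"
else
    echo "subshell aborted with status $?"
fi
```

Because `set` and its `-e` option are specified as a special built-in in the POSIX Shell Command Language, rbenv can rely on this behavior in any `#!/usr/bin/env bash` (or POSIX sh) script regardless of the user's interactive shell.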
<?php

/**
 * Copyright (c) 2018 Wakers.cz
 *
 * @author Jiří Zapletal (http://www.wakers.cz, zapletal@wakers.cz)
 */

namespace Wakers\Propel\Setup;

use Nette\Neon\Neon;
use Propel\Common\Config\ConfigurationManager;
use Propel\Runtime\Connection\ConnectionManagerSingle;
use Nette\DI\Container;
use Propel\Runtime\Propel;
use Tracy\Debugger;
use Wakers\Propel\Debugger\DebuggerPanel;
use Wakers\Propel\Debugger\Logger;

class PropelSetup
{
    /**
     * Path to the configuration file
     * Must be defined as a constant (must not change)
     */
    const NEON_CONFIG_PATH = __DIR__.'/../../../../../app/config/db.local.neon';

    /**
     * Support for ::getenv() in neon files
     * @param $content
     * @return string
     */
    protected static function replaceEnvironmentVariables($content) : string
    {
        preg_match_all("/\:\:getenv\(\'(.*)\'\)/", $content, $result);

        $findEnvs = [];

        foreach ($result[1] as $envVar)
        {
            if (!in_array($envVar, $findEnvs))
            {
                $findEnvs[] = $envVar;

                $toReplace = "::getenv('{$envVar}')";
                $replaceWith = getenv($envVar);

                $content = str_replace($toReplace, $replaceWith, $content);
                unset($replaceWith);
            }
        }

        return $content;
    }

    /**
     * Returns the Propel settings from db.local.neon
     * @return array
     */
    public static function getAsArray() : array
    {
        $configPath = realpath(self::NEON_CONFIG_PATH);
        $content = file_get_contents('nette.safe://' . $configPath);
        $content = self::replaceEnvironmentVariables($content);
        $config = Neon::decode($content)['wakers-propel'];

        return $config;
    }

    /**
     * Connects Propel to the DB and sets the default connection.
     * @param Container $container
     */
    public static function setup(Container $container) : void
    {
        $config = self::getAsArray();
        $configurationManager = new ConfigurationManager(NULL, $config);

        // Default DB and its adapter
        $defaultConnection = $configurationManager->getConfigProperty('runtime.defaultConnection');
        $adapter = $configurationManager->getConfigProperty('database.connections')[$defaultConnection]['adapter'];

        // Connection manager setup
        $manager = new ConnectionManagerSingle;
        $manager->setConfiguration($configurationManager->getConnectionParametersArray()[$defaultConnection]);
        $manager->setName($defaultConnection);

        // Attach the manager to the service container
        $serviceContainer = Propel::getServiceContainer();
        $serviceContainer->setAdapterClass($defaultConnection, $adapter);
        $serviceContainer->setConnectionManager($defaultConnection, $manager);
        $serviceContainer->setDefaultDatasource($defaultConnection);

        // Debug mode setup
        if ($container->parameters['debugMode'])
        {
            $connection = $serviceContainer->getConnection();
            $connection->useDebug(TRUE);
            $serviceContainer->setLogger('defaultLogger', new Logger);
            Debugger::getBar()->addPanel(new DebuggerPanel);
        }
    }
}
He had only about $10 in his pocket, an empty bank account, and no credit cards and no place to stay. This reminds me of the time the CEO (and owner) of the company I worked for had everyone fly to a different state in the middle of the country for our holiday party. Since this was a paid business trip, my understanding was they'd pay for flights, hotel and all meals. I got to the hotel to check in, only to find out they required a credit card deposit from me. I had just moved to this area a month or so earlier, had around $49 in my checking account (I think we were due to get our paychecks a day or so into the trip), no room on the few credit cards I had and only one friend in the area in which I lived (and knew no one, other than those from my office, in that state). I was really embarrassed being faced with this (the room deposit was $50) and not knowing how I was going to handle it (and wondering why the hell the CEO would only go to the extent of paying for the hotel but not covering everything, including room deposits). Even then, why weren't we told in advance that we were going to have to pay a room deposit? Even more idiotic, when we had reached the airport in that state, the CEO had not arranged for transportation for us to the hotel and when called by his dad asking how he was having us get to the hotel, was expecting us all (including his dad, who was the attorney for the company) to pay for our own transportation to get there (the CEO had traveled separately from us). It was pretty idiotic (his dad ended up making the call to the CEO making sure we had a way to get to the hotel at the company's cost). I would've preferred not traveling (I had the flu), but it was clear the CEO expected *everyone* to be there (we were being paid to be there -- small company with four offices spread out over four states -- two west coast, one east coast and one midwest, and we all flew to the midwest office where most of the properties were). 
This was the first time I'd traveled for business so I had no idea what to expect. From what I remember, there really was no organization to this trip, except some sort of bus tour of the company's properties (I worked for a real estate developer). They really "cheaped out" on the meals (I think they planned dinner the night we got there and dinner the next night but hadn't arranged breakfast the two mornings we were there or lunch the one full day -- lunch ended up being an impromptu stop at some crappy Chinese buffet after the bus tour). However, now that I think about it, we all ended up each getting an iPod from the company as our Christmas gift so, I guess, it wasn't all bad. Still, why skip covering two breakfasts, lunch, transportation between the airport/hotel and room deposits (and these were really really nice and *big* hotel rooms!)? IIRC, since I had no money and didn't know what meals were being covered (I only recall hearing about the two dinners), I believe I packed some granola bars and beverages from home in my suitcase. I think I ended up having a granola bar and diet Coke for breakfast in my hotel room before I was expected to report for the bus tour.
Publications Presenter Initial WIP commit for publications presenter creation. cc @binaryberry The method preview_attachment_path seems not to be defined for the documents/_attachment.html.erb template. This is the trace: undefined method `preview_attachment_path' for <ActionView::Base is too large to inspect, supressing>:ActionView::Base (ActionView::Template::Error) ./app/helpers/attachments_helper.rb:9:in `preview_path_for_attachment' ./app/views/documents/_attachment.html.erb:46:in `block in _app_views_documents__attachment_html_erb___3146423150183793533_69868133384220' ./app/views/documents/_attachment.html.erb:17:in `_app_views_documents__attachment_html_erb___3146423150183793533_69868133384220' ./app/helpers/attachments_helper.rb:14:in `block in block_attachments' ./app/helpers/attachments_helper.rb:13:in `block_attachments' ./lib/whitehall/govspeak_renderer.rb:4:in `block_attachments' ./app/presenters/publishing_api_presenters/publication.rb:56:in `documents' ./app/presenters/publishing_api_presenters/publication.rb:31:in `details' ./app/presenters/publishing_api_presenters/item.rb:28:in `content' ./app/presenters/publishing_api_presenters/edition.rb:14:in `content' ./app/workers/publishing_api_worker.rb:38:in `save_draft' ./app/workers/publishing_api_draft_worker.rb:3:in `send_item' ./app/workers/publishing_api_worker.rb:12:in `block in call' ./app/workers/publishing_api_worker.rb:10:in `call' ./app/workers/worker_base.rb:30:in `perform' ./app/workers/worker_base.rb:11:in `perform_async_in_queue' ./lib/whitehall/publishing_api.rb:25:in `save_draft_translation_async' ./lib/whitehall/publishing_api.rb:20:in `block in save_draft_async' ./lib/whitehall/publishing_api.rb:19:in `each' ./lib/whitehall/publishing_api.rb:19:in `save_draft_async' ./app/services/service_listeners/publishing_api_pusher.rb:50:in `block in perform_publishing_api_action_on_pushable_items' ./app/services/service_listeners/publishing_api_pusher.rb:49:in `each' 
./app/services/service_listeners/publishing_api_pusher.rb:49:in `perform_publishing_api_action_on_pushable_items' ./app/services/service_listeners/publishing_api_pusher.rb:16:in `push' ./config/initializers/edition_services.rb:13:in `block (2 levels) in <top (required)>' ./app/services/edition_service_coordinator.rb:4:in `publish' ./app/services/edition_service.rb:45:in `notify!' ./app/services/draft_edition_updater.rb:5:in `perform!' ./app/models/html_attachment.rb:92:in `save_and_update_publishing_api' ./app/controllers/admin/attachments_controller.rb:179:in `save_attachment' ./app/controllers/admin/attachments_controller.rb:19:in `create' ./features/step_definitions/attachment_steps.rb:54:in `/^I upload an html attachment with the title "(.*?)" and the body "(.*?)"$/' features/edition-attachments.feature:11:in `And I upload an html attachment with the title "Beard Length Graphs 2012" and the body "Example **Govspeak body**"' preview_attachment_path error was fixed adding url_helpers and default_url_options in attachments_helper.rb Tests are failing on Jenkins because probably the query invoked by item.government on the Government model is not expected. E.g.: DocumentFilterPresenterTest#test_json_provides_a_list_of_documents [/home/jenkins/bundles/govuk_whitehall_branches/ruby/2.2.0/gems/activerecord-<IP_ADDRESS>/lib/active_record/connection_adapters/abstract/database_statements.rb:32]: unexpected invocation: #<ActiveRecord::ConnectionAdapters::Mysql2Adapter:0x7fc43e6e13b0>.select('SELECT `governments`.* FROM `governments` WHERE (start_date <= '2011-11-11') ORDER BY `governments`.`start_date` DESC LIMIT 1', 'Government Load', []) Is that something we want to avoid, or we should somehow modify the tests? @mgrassotti Could you update 174cc4dd07ccecc4aa400ae8a097823dc762a36c to provide more details beyond "WIP"? (or squash?) @fofr yes, sorry I didn't clean the commit history. I also need to check if the data migration and the sync checks work in dev. 
Commits cleaned up I see a lot of new errors on Jenkins due to a Nil value in the links presenter. Should I remove the supporting_organisations link, due to https://github.com/alphagov/whitehall/pull/2585 ? Thanks
error: error running non-shared postrotate script for /var/log/php5-fpm.log
run-parts: /etc/cron.daily/logrotate exited with return code 1
/etc/cron.daily/logrotate: logrotate_script: line 1: /usr/local/bin/mysql: No such file or directory
error: error running shared postrotate script for

I've looked in the logs and all I can find is this: /etc/cron.daily/logrotate: error: error running shared postrotate script for. Before I had posted this thread, I had rebooted the server. I just noticed an error on the terminal too; download the latest ISPConfig version and run the setup script.

The relevant directives: 'missingok' – skip the entry if the log file is missing; 'sharedscripts' – run the script only once for all matched logs; 'postrotate' and 'endscript' – execute the enclosed script after the log is rotated.

Re: Error running non-shared postrotate script for /var/log/fail2ban – Nov 24, 2016. Subject: Re: Error running non-shared postrotate script for '/var/log/fail2ban.log' > run-parts: /etc/cron.daily/logrotate exited with return code 1.

Logrotate error – trellis – Roots Discourse – /etc/cron.daily/logrotate: * Re-opening nginx log files nginx.done. Checked my remote server and the postrotate script is service nginx rotate, as @fullyint mentioned. mathewc 2016-04-21 00:26:55 UTC #5.

Logrotate errors for MySQL on Ubuntu :: blog.bartlweb – 24 Apr 2013:
error: error running shared postrotate script for '/var/log/mysql.log'
run-parts: /etc/cron.daily/logrotate exited with return code 1

Following are the key files that you should be aware of for logrotate to work properly: /usr/sbin/logrotate – the logrotate command itself; /etc/cron.daily/logrotate – the daily cron job that invokes it.

/etc/cron.daily/logrotate: error: error running non-shared postrotate script for fail2ban: Add "notifempty" to /etc/logrotate.d/fail2ban. If fail2ban is installed but not running, the log will be empty and will not be rotated. What does this error mean for Fail2ban?

Postrotate "error running shared script": MySQL doesn't start after upgrading to Debian Jessie. '.var/log/mysql/mysql-slow.log' run-parts: /etc/cron.daily/logrotate exited with return code 1. In my case, after cloning an ESX Ubuntu VM where MySQL was installed into some other servers where we did not need MySQL, and after uninstalling MySQL, I could still see mysqld running.

More directives: nodelaycompress – this overrides delaycompress; the log file is compressed as soon as it is cycled. errors address – this mails logrotate errors to an address. ifempty – with this, the log file is rotated even if it is empty (this is the default for logrotate).

Related questions: Why isn't the daily cron running on CentOS 6? Logrotate does not work for the httpd service. Can logrotate skip the postrotate script if no log rotation took place?
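The failures quoted above usually trace back to a postrotate script calling a binary that no longer exists, or to rotating a log that is empty or missing. A defensive logrotate entry covering both cases might look like this (a sketch with hypothetical paths; adjust the log path and command for your system):

```
/var/log/mysql/mysql.log {
    daily
    rotate 7
    # skip this entry entirely if the log file is absent
    missingok
    # do not rotate (or run the scripts) when the log is empty
    notifempty
    compress
    delaycompress
    # run the postrotate script once for all matched logs, not once per file
    sharedscripts
    postrotate
        [ -x /usr/bin/mysqladmin ] && /usr/bin/mysqladmin flush-logs || true
    endscript
}
```

`logrotate -d /etc/logrotate.d/mysql` dry-runs an entry and prints what would happen, without touching any files, which makes it a good first step when debugging these cron errors.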
"[The meeting] will bring together software developers from both the public/academic sector as well as the private sector who deal with tools to visualize spatial data (geovisualization), carry out exploratory spatial data analysis (ESDA) and facilitate spatial modeling (spatial regression modeling, spatial econometrics, geostatistics), with a special focus on the potential for social science applications."

Dr. Sergio Rey, Organizer
Dr. Luc Anselin, Organizer

Meeting on Spatial Data Analysis Software Tools
Upham Hotel, Santa Barbara, CA
May 10-11, 2002

The Center for Spatially Integrated Social Science (CSISS) is a five-year project funded by the National Science Foundation under its program of support for infrastructure in the social and behavioral sciences. CSISS promotes an integrated approach to social science research that recognizes the importance of location, space, spatiality and place. One of the CSISS programs is devoted to "Spatial Analytic Tools" for the social sciences, that is, the development and dissemination of a powerful and easy to use suite of software for spatial data analysis, the advancement of methods of statistical analysis to account for spatial effects, and the integration of these developments with GIS capabilities. For a more detailed description of the programs and objectives of CSISS, visit our homepage at http://www.csiss.org/. In order to take stock of the state of the art, assess current impediments and identify promising strategies, a two-day "Specialist Meeting on Spatial Data Analysis Software Tools" will be convened in Santa Barbara, CA, May 10th and 11th, 2002.
The meeting is organized by a steering committee, co-chaired by Luc Anselin (University of Illinois, CSISS) and Sergio Rey (San Diego State University) and consisting of Richard Berk (UCLA), Ayse Can (Fannie Mae Foundation), Di Cook (Iowa State University), Mark Gahegan (Pennsylvania State University), and Geoffrey.

The meeting will bring together software developers from both the public/academic sector as well as the private sector who deal with tools to visualize spatial data (geovisualization), carry out exploratory spatial data analysis (ESDA) and facilitate spatial modeling (spatial regression modeling, spatial econometrics, geostatistics), with a special focus on the potential for social science applications. These tools include a range of different approaches, such as macros and scripts for commercial statistical packages or GISes, modules developed in open source statistical and mathematical toolkits, and free standing software programs. The focus of the meeting is on software "tools" rather than on the methods per se. The objectives of the meeting are threefold: - It is an opportunity to demonstrate, showcase, and benchmark state-of-the-art tools and to interact with other specialized developers. - It will facilitate and promote a dialogue among the wide range of developers about priorities and guidelines for software design, data and model standards, inter-operability, and open environments. It is hoped that this will initiate a discussion of specific open source standards for spatial data analysis. - The meeting will also serve as a way to introduce CSISS' open source software development initiative, the "OpenSpace" project, and serve as a forum to obtain feedback and comments.
Contributions are invited that:
- describe technical aspects, architecture, design, principles and implementation of specific software tools for spatial data analysis
- compare and review software tools for spatial data analysis in social science applications
- demonstrate the application of new spatial analysis software tools to social science research questions.

All participants are expected to submit an abstract as well as a final paper. The papers will be published on a CD-Rom as a Proceedings Volume, available at the time of the meeting. The meeting will not consist of these paper presentations, but instead the Proceedings are to provide a common background for discussions related to the broader themes. People interested in attending the meeting should submit a digital abstract for their contribution by e-mail to firstname.lastname@example.org by February 15, 2002. The steering committee will make a decision on the final list of participants by March 1, 2002. The abstract should be two to four pages (including figures, tables and references), in Adobe Acrobat pdf format (10pt Times Roman smallest font). The final paper (for the Proceedings Volume) should be 10 to 15 pages Adobe pdf and will be due by April. Some funding assistance may be available, subject to NSF rules and prior agreement from CSISS. Please indicate if you require funding to participate. For any questions or further information, contact Luc or Sergio Rey email@example.com.
I fear this little utility might be usable only for a discrete set of fans. Technically speaking it’s a text-based application, but … well, I’ll let you take a look and see what you think. In principle, it’s rather simple: undistract-me simply takes note if a shell command takes longer than 10 seconds to execute. If it does, it waits until the program finishes, then throws up the alert message you see in the screenshot above. Kind of cool, in an odd way. Strictly speaking though, you’ll need all the underpinnings of a graphical desktop, plus whatever alert system is in use there, before you’ll get close to that kind of behavior. On my semi-graphical Arch system with just Openbox, I ended up adding gtk3, polkit, dconf, json-glib and a mess of themes and libraries before the git version was close to running. So I don’t know if I’m being fair by including it. Don’t expect to suddenly plop this into place on your 400MHz Celeron running screen, because you’re going to need a lot more to get close. I won’t deny that I like the idea though, and if something comparable could be implemented in a text-only environment, it might be worth trying. For my own part, I used to append long-running commands with aplay yoo-hoo.ogg so I would get an audible alert when something finished. So in that way, I can sympathize. But unless you use a lot of terminal commands on a Linux Mint desktop and need some sort of blinky reminder when one finishes … well, like I said, it will probably only appeal to a slim range of fans. screen’s “silencewait” command can be used for similar effect. That might be enough of a replacement in a no-graphics environment. Thanks! 🙂 Too many dependencies just to have some damn notify-send executed when a bash hook gets triggered anytime a command takes too long. Why is that, in your opinion? The fact that it seems to work only via some crazy code injection aside, it’s a very excellent idea.
I should hack on it to make something that relies only on notify-send and a tmux lookalike! Nice to be here again. Have a good life when you shut down the Linux blogging business, dear K. It's a good ride while it lasts.

If you could come up with something that only relied on text-based dependencies, I think you'd have a hit. This is … clever … but like you said, pulls in way too much stuff to be useful outside a Gnome-esque desktop. Something more like the way notifyme behaves would be ideal.

Thanks for all the comments over the years. It has been fun! 😀

Pingback: Links 26/4/2015: Debian 8, OpenMandriva Lx 3 Alpha, Mageia 5 RC | Techrights
Pingback: ndn: And memories of the past | Inconsolation
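If someone does want a text-friendly version, the core of the idea fits in a few lines of Python: time the command, and only alert when it ran long. This is a sketch, not undistract-me itself; notify-send is assumed to be installed, and on a headless box it falls back to a terminal bell:

```python
import subprocess
import sys
import time

THRESHOLD = 10  # seconds, matching undistract-me's default

def run_and_alert(cmd, threshold=THRESHOLD):
    """Run a shell command; alert if it took longer than `threshold` seconds."""
    start = time.time()
    result = subprocess.run(cmd, shell=True)
    elapsed = time.time() - start
    if elapsed > threshold:
        msg = "'%s' finished after %.0fs (exit %d)" % (cmd, elapsed, result.returncode)
        try:
            subprocess.run(["notify-send", "Command finished", msg], check=True)
        except (FileNotFoundError, subprocess.CalledProcessError):
            # No graphical notifier available: terminal bell plus a message instead
            print("\a" + msg, file=sys.stderr)
    return result.returncode
```

Wrapped in a shell alias or a PROMPT_COMMAND hook, that gets most of the behavior with none of the gtk3/polkit/dconf pile.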
Hi,
1. First, download the latest version of the J2ME Wireless Toolkit.
2. Also download the latest version of the CLDC/MIDP profiles.

Now install the wireless toolkit. To learn and test your application, use the tutorials on Sun's Java website or check the docs folder in your J2ME installation directory. Try running the sample programs; there are many HelloWorld programs in the tutorials as well. With the J2ME Wireless Toolkit you can test various mobile-phone and pager-based applications, and with CLDC/MIDP you can test your code on a Palm III or V and various other handheld devices. I think Java Micro Edition supports most PDAs. If you need further help, reply back.

Hi,
You can also get Nokia's Developer Suite. This integrates with JBuilder from Borland and allows you to develop J2ME apps for Nokia phones. You can then test the app on Nokia devices and Sun emulators to see how it performs on different devices. Hope this helps.

Originally posted by Sridhar Raman: What different PDAs does J2ME work on at present? How can I start coding and testing in J2ME?

Some Motorola phones support MIDP. The Palm supports MIDP and soon PDAP. WinCE devices support PersonalJava. Because the source code for MIDP is downloadable, you can port it to other platforms. JTAPI, the Java TV APIs, etc. exist in various appropriate prototype devices.

posted 18 years ago
How soon is soon, Mark? I thought it wasn't coming out for 18 months.
chanoch, author of Professional Apache Tomcat (http://www.amazon.com/exec/obidos/ASIN/1861007736/)

J2ME MIDP devices actually available, possibly not to Joe Public though:
Motorola Accompli 008 (Europe & Asia only)
Motorola i85 (US only?)
Palm or compatible, with OS 3.5 or better
Possibly Linux PDAs like those at www.agendacomputing.com

Soon, supposedly: Siemens SL45i

For a more comprehensive list, visit www.javamobiles.com

Richard Taylor, author of Professional Java Mobile Programming (http://www.amazon.com/exec/obidos/ASIN/1861003897/ref=ase_electricporkchop)
I’ve used the nvcc compiler to generate file1.obj with a host Launch function that calls the global Kernel function. I’ve added this file1.obj to a Visual Studio project file2. It compiles okay but complains with error LNK2005: main already defined in file2.obj What I want is for my program file2.exe to call the host Launch function in the separately nvcc compiled file1.obj, which then calls the GPU code Kernel function. Can I compile CUDA code without an entry point so that the main in my file2.exe is used instead of the one in file1.obj which it currently seems to run? My file2 project adds the cudart.lib but otherwise is not specially set-up for CUDA. What is the recommended way of linking and then calling CUDA code from another Visual Studio application? Thanks for any help, C/C++ require exactly one function called main() to be present at link time. You’ll get an error if there is none, and an error if there is more than one. CUDA is basically a subset of C++. So you would want to get rid of the second instance of main(). Thanks for the help! Okay, I’ve renamed the main in the file2 project to myProc so no more conflict. In the main in file1 I’ve used LoadLibrary to call myProc function in file2.dll. Now in myProc I want to call Launch function in file1 which runs the CUDA kernel. So in file1 __declspec(dllexport) void Launch(… In file2, I have this line near the top extern “C” __declspec(dllexport) void Launch(… and then in myProc I call Launch (the host function in file1 that calls the CUDA kernel) but it crashes with Unhandled exception at 0x00007FEDF89A2B7 (nvcuda.dll) in File1.exe: 0xC0000005: Access violation reading location 0x000000006065000. I want to do this because File1.exe is dynamic code that is compiled by nvcc whilst my application is running. All my code that does not change is in the dll. So in summary I want a function in File2.dll to run the CUDA code in File1.exe, not able to get this working. How do I do this? 
Thanks for any help, That looks like it would be difficult/probably require some kind of system call. I don’t think you want to do it that way. Having a function in f2.dll call a function in f3.dll should be possible. I would suggest that none of this stuff really has anything to do with CUDA. I think you’re stumbling around with dll mechanics. You might want to prototype something that doesn’t use CUDA just to get the mechanics sorted. Then when you want to add CUDA, there are plenty of writeups all over the place about how to build a CUDA-enabled dll. That was it in a nutshell. I just compile it as a CUDA project dll. Then I still use LoadLibrary to load the CUDA enabled dll from my exe, then call the function that has been the host function decorated with extern “C” __declspec(dllexport) that calls the CUDA kernel. So I guess you’re just asking the question “How do I create a CUDA dll?” There are various online examples of that. It’s all okay now. I’ve created the CUDA dll and everything works as expected. Thanks.
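For anyone landing here later, the overall shape that ended up working is roughly the following sketch. Everything here is illustrative (function and file names are made up to match the thread); the key points are that the dll exports the host launcher with extern "C" so the name isn't mangled, and the caller resolves it at runtime with LoadLibrary/GetProcAddress rather than redeclaring it with a second __declspec(dllexport):

```cuda
// file1.cu -- built by nvcc as a dll (illustrative sketch, not the poster's code)
#include <cuda_runtime.h>

__global__ void Kernel(float* data, int n)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) data[i] *= 2.0f;
}

extern "C" __declspec(dllexport) void Launch(float* hostData, int n)
{
    float* devData = 0;
    cudaMalloc(&devData, n * sizeof(float));
    cudaMemcpy(devData, hostData, n * sizeof(float), cudaMemcpyHostToDevice);
    Kernel<<<(n + 255) / 256, 256>>>(devData, n);
    cudaMemcpy(hostData, devData, n * sizeof(float), cudaMemcpyDeviceToHost);
    cudaFree(devData);
}

// file2.cpp -- the plain Visual Studio application; it needs no CUDA setup
// beyond having file1's dll (and the CUDA runtime dll) on the path.
#include <windows.h>

typedef void (*LaunchFn)(float*, int);

int main()
{
    HMODULE h = LoadLibraryA("File1.dll");
    if (!h) return 1;
    LaunchFn launch = (LaunchFn)GetProcAddress(h, "Launch");
    if (!launch) { FreeLibrary(h); return 1; }

    float data[4] = { 1.0f, 2.0f, 3.0f, 4.0f };
    launch(data, 4);    // host launcher runs the kernel on the GPU

    FreeLibrary(h);
    return 0;
}
```

Because the entry point lives only in file2, the duplicate-main link error disappears, and the importing side never needs file1's headers at compile time.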
import datetime

# Third-party modules
from django.utils import timezone
from django.db.models import Sum

# Our own modules
from . import MetaClass
from models import Order
from utils.slack import send_slack_message
from utils.decorators import promise_do_once


class StatisticsJob(metaclass=MetaClass):
    next_time = timezone.localtime()

    @classmethod
    def start(cls):
        now = timezone.localtime()
        if now < cls.next_time:
            return
        # Next run: 7 AM the following day
        tomorrow = (now + datetime.timedelta(days=1)).replace(hour=7, minute=0, second=0, microsecond=0)
        cls.next_time = tomorrow

        # Yesterday's window: [start of yesterday, start of today)
        start_time = timezone.localtime().replace(hour=0, minute=0, second=0, microsecond=0)
        end_time = start_time - datetime.timedelta(days=1)
        cls.doing(start_time=start_time, end_time=end_time)

    @classmethod
    @promise_do_once(file_name='statistics', func_name='doing')
    def doing(cls, start_time, end_time):
        # Sum of yesterday's paid orders
        status = Order.Status.PAID.value
        today_sum = Order.objects.filter(
            status=status, updated_at__gt=end_time, updated_at__lt=start_time
        ).aggregate(sum=Sum('total_fee'))['sum']
        if not today_sum:
            today_sum = 0
        # Sum of all paid orders
        total_sum = Order.objects.filter(status=status).aggregate(sum=Sum('total_fee'))['sum']
        if not total_sum:
            total_sum = 0
        # Send the statistics message to Slack
        text = f"Yesterday's payments: {today_sum/100} yuan; all-time total: {total_sum/100} yuan"
        send_slack_message(text=text)
June 11th 2020

Last year I found out I had ADD. This was somewhat of a shock to me. I had always struggled with studying and doing repetitive tasks but had never really had any issues with school or university that I couldn't work around. Entering the workforce, I excelled at first, until I was in a role where there were no workarounds — I had to stay focussed. Long story short, at the age of 28 I was diagnosed. Ritalin to me felt like sobering up from the longest pub crawl of my life. Why were diagnostics so poor, and how many people like me are suffering in silence? Being diagnosed, it struck me how poor diagnostics were. And being on Ritalin, and working out the right amount of medication to take, it became obvious how much machine learning could do for people like me. Are mental health diagnoses from psychiatrists and psychologists generally accurate? Fundamentally, there are two things doctors do diagnostically:
- Real-life quizzes
- Prod you with stuff and investigate

Now imagine if your phone could look at your text, your voice, your app usage. It could identify these conditions far better than the current system does. Are medical treatments, like Ritalin or SSRIs, effectively measured for their efficacy in individuals? To some extent they are. Yet how we use technology could add so much value here. So, unleashing my new-found focusing powers, I built a tool to measure how Ritalin was affecting my text patterns and my mood. This was not particularly difficult to do.
- A simple Chrome extension to scrape my text from my messaging apps in real time.
- Applying sentiment analysis
- Building an NLP model based off of Reddit comments of people with ADHD
- More at my github https://github.com/Hewlbern/MoodTracker (this should get you started; most of my code I've kept private for now).

I could easily see how Ritalin was affecting my mood and writing in general. Now — next step, I thought to myself, let's release this to the public!
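The sentiment-analysis step doesn't need to be fancy to show a trend; even a toy lexicon scorer (the word lists below are illustrative, not what MoodTracker actually uses) gives a per-message signal you can plot against medication times:

```python
# Toy lexicon-based sentiment scorer: +1 per positive word, -1 per negative,
# normalized by message length. Word lists here are illustrative only.
POSITIVE = {"great", "focused", "calm", "happy", "done", "productive"}
NEGATIVE = {"tired", "distracted", "anxious", "bored", "stuck", "late"}

def sentiment(message: str) -> float:
    """Score in [-1, 1]; 0 for neutral or empty messages."""
    words = [w.strip(".,!?").lower() for w in message.split()]
    if not words:
        return 0.0
    score = sum((w in POSITIVE) - (w in NEGATIVE) for w in words)
    return score / len(words)

print(sentiment("Feeling focused and productive today!"))  # positive
print(sentiment("So tired and distracted"))                # negative
```

A real pipeline would swap this for a trained model, but the shape of the output (one number per message, timestamped) is the same.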
So I posted it on Reddit, actually getting a huge amount of interest. The response I got was very interesting. Now, this was from one post on Reddit… that got weirdly taken down by admins in a day. I had 5 people ask to work on it with me. Feeling validated, I started working out how this could really work. So — what does the future look like in this space? Well, it's painfully clear that technology can make outcomes for mental health SO much better. Yet being able to execute the technology in a way that is legal, and doesn't lead to other negative consequences? That's another question. Here are some potential issues that became more obvious to me as I was doing this. To implement a tool like this requires a huge invasion of a person's privacy. Solid, essentially a new way of using and sharing personal data, seems like the obvious way around this. Surely looking only at an individual when diagnosing someone with a mental illness is the wrong approach altogether. A graph of the patient, together with the people around the patient, is likely a far better diagnostic avenue. I can't see how one can be diagnosed with, say, depression, without taking into account the people around them. Diagnosing a patient has many cascading effects under our current health system. While it is just data, the way it could be used in medical practice has the potential to create real moral hazards, and incredibly negative outcomes for patients.
class: center, middle best

# From Master to Student

.b[[@abelar_s](https://twitter.com/abelar_s) - [@ParisRB](https://twitter.com/parisrb) - [@RailsGirlsParis](https://twitter.com/railsgirlsparis)]

[maitre-du-monde.fr](maitre-du-monde.fr)

.contrib[[Contribute to this talk on GitHub!](https://github.com/abelards/abelards.github.com/edit/master/talks/master2student.html)]

???
---
class: center wide

# Zen proverbs

When an ordinary man attains knowledge, he is a sage; when a sage attains understanding, he is an ordinary man.

In the beginner's mind there are many possibilities, in the expert's mind there are few.

---
class: left best

# The Full Circle

.b[Dreyfus Model of Skill Acquisition]

This is per skill, not per person!

.b[Don't be Fixed Mindset], look for lessons everywhere.

.b[T-shaped skills], breadth + depth (or tech + social).

---
class: left

# The Full Circle

Color codes I'll use during the talk:

* .l0[novice]
* .l1[advanced beginner]
* .l2[competent]
* .l3[proficient]
* .l4[expert]

Then back to novice... at other things :)

???
How Dreyfus' skill model applies in some code things.

It's quite close to the Four Stages of competence:
- .l0[unconscious incompetence]
- .l1[conscious incompetence]
- .l23[conscious competence]
- .l4[~ unconscious competence]

Also .l1[Shu].l23[Ha].l4[Ri]
https://en.wikipedia.org/wiki/Shuhari
https://www.oreilly.com/ideas/the-traits-of-a-proficient-programmer

---
class: left best

# Expectations

Also why I love newbies <3

* .l1[10x: life-changing]
* .l2[2x: life-improving]
* .l3[1.1x (+10%): incremental]
* .l4[10x: life-changing]

... but not via code :)

---
class: left

# Bullshit tolerance

It's so easy to become a grumpy hater :(

* .l1[anything: don't know]
* .l2[non-obvious: confused? Clarify]
* .l3[very little: attitude is key!]
* .l4[anything: better hear small signals]

Anything you hear has some degree of truth for someone.

Accept reality like a Zen Buddhist.

???
---
class: left best

# What I want, what I do

Want / Can do & work given

* .l0[Vision / none]
* .l1[Strategy / Ops]
* .l2[Tactics / Tactics]
* .l3[Ops / Strategy]
* .l4[Vision / everything]

.l2 to .l3 is hard! You like where you are!

.l3 to .l4 is hard! You must change vision!

???
---
class: left

# Project maturity

Once again about what makes sense

* .l0[tutorial or fear]
* .l1[fills in blanks]
* .l23[rewrites from scratch]
* .l23[painfully adds up to legacy]
* .l4[refactors knowledge]

... and back to address fears (of others) and (writing) tutorials.

---
class: left

# Names

Names withheld to protect the culprits

- .l0 `categories`
- .l1 `dashboard_categories_counter`
- .l2 `x`
- .l3 `counts`
- .l4 `categories_counts`

... and perhaps back to `dashboard_categories_counter` :)

???
---
class: left best

# Names

Advice you can give

- .l0[I just want to stop crashes]
- .l1[explicit names]
- .l2[short names]
- .l3[explicit & as short as possible]
- .l4[self-explanatory names]

not to people of some level, but go through all levels every single time.

---
class: left

# DRY

Don't Repeat Yourself: results

* .l0[I don't get it: Copy-and-Paste]
* .l1[I know I should, and I can't do it]
* .l2[Factors functions, reuses code]
* .l3[Factors structures, reuses classes]
* .l4[Back to Copy-and-Paste?]

Single Source of Truth matters more.

---
class: left best

# DRY

Don't Repeat Yourself: focus on

* .l0[Not My Problem]
* .l1[Code]
* .l2[Data]
* .l3[Architecture]
* .l4[Pragmatism]

"Duplication is far cheaper than the wrong abstraction" -- @sandimetz

---
class: left

# Clever Code

What is clever code anyway?

- .l0[these people hate me]
- .l1[so clever! the dev must be smart]
- .l23[I do clever code & I feel clever]
- .l23[my mess is clean, yours is ugly]
- .l4[this is so needlessly convoluted]

The Master does not think the dev IS stupid.
The Master thinks the dev HAS BEEN less than clever HERE.

---
class: left best

# Become a mentor!

You will learn even more!

* .l0[write checklists]
* .l1[code reviews]
* .l2[deliberate practice]
* .l3[architecture katas]
* .l4[train teams, leverage business]

What works will have to scale. Congrats!

---
class: left best

# Bonus: StackOverflow

How to use code from the Internet

- .l0[read code]
- .l1[make a hypothesis: how it works]
- .l2[why, with what does it work?]
- .l3[check & learn, rinse & repeat]
- .l4[read code: yours, OSS, SO]

And here's the full circle :)

---
class: left

# Crafts-wo-manship

Now be a proud developer!

- .l0[I can code]
- .l1[my code works]
- .l2[my code is elegant]
- .l3[my code is simple]
- .l4[my code is not the problem]

... but you can leverage code to solve problems, and that's awesome.

???
---
class: left best

# Priorities

Do everything for a reason.

- .l0[make it]
- .l1[make it work]
- .l23[make it fast]
- .l23[make it beautiful]
- .l4[make it readable]

It will either make it fast & clean, or help it on the right track.

---
class: left

# Career management

Show up, good attitude, good context.

- .l0[it will be hard, I will stick with it]
- .l1[team with many tasks & coaching]
- .l2[join a team with mentoring & peers]
- .l3[make sure you keep growing]
- .l4[begin anew]

Most companies can be changed from within.

Change management, theory of orgs, social dynamics... are very interesting skills to have.

---
class: center, middle, happy

# Questions?

.b[[@abelar_s](https://twitter.com/abelar_s) - [@ParisRB](https://twitter.com/parisrb) - [@RailsGirlsParis](https://twitter.com/railsgirlsparis)]

[maitre-du-monde.fr](maitre-du-monde.fr)

.contrib[[Contribute to this talk on GitHub!](https://github.com/abelards/abelards.github.com/edit/master/talks/master2student.html)]
Last post Jun 27, 2012 04:58 AM by tahazubairahmed

Jun 21, 2012 05:35 AM|tahazubairahmed|LINK
I want to browse a net.tcp WCF service (example: net.tcp://localhost/Services/Service.svc), and after this I want to call the service at the client end. I have successfully deployed the service in IIS (it works with basicHttpBinding but not with netTcpBinding). I have also set the following things, but I am still unable to browse the net.tcp service. Here is the service config file:

<service name="Service" behaviorConfiguration="MyBehav">
  <!--<endpoint name="WCFHTTPBinding" address="" contract="IService" binding="basicHttpBinding" bindingConfiguration="portSharingBinding" />-->
  <endpoint name="WCFNETTCPBinding" address="" contract="IService" binding="netTcpBinding" bindingConfiguration="portSharingBinding"/>
  <host>
    <baseAddresses>
      <add baseAddress="net.tcp://localhost:808/Services/Service.svc" />
    </baseAddresses>
  </host>
</service>
<bindings>
  <netTcpBinding>
    <binding name="portSharingBinding" portSharingEnabled="true"/>
  </netTcpBinding>
</bindings>

Jun 21, 2012 05:56 AM|Mudasir.Khan|LINK
In IIS, when you click on the application, does it show you the link to browse with net.tcp (meaning it is configured successfully)? What error are you getting?

Jun 21, 2012 06:22 AM|tahazubairahmed|LINK
It browses only when I use basicHttpBinding in the endpoint instead of netTcpBinding. If I change the binding from basicHttpBinding to netTcpBinding, it gives me this error:

Could not find a base address that matches scheme net.tcp for the endpoint with binding NetTcpBinding. Registered base address schemes are [http].

Even if I change the baseAddress to "net.tcp://localhost/" it still gives me the same error. Note: all the changes were made in the service config file.

Jun 26, 2012 05:15 PM|Syed Aoun Ali Naqvi|LINK
When you call the service reference, is there any error, or does it only show the error after deploying to IIS?

Jun 27, 2012 04:58 AM|tahazubairahmed|LINK
Thanks to all for your replies. I finally resolved my problem:
Enable the protocols in IIS: http,net.tcp must be set on MyWebName (the application itself) instead of the Default Web Site, under Advanced Settings.
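For completeness: "enabling the protocols" means the site needs a net.tcp binding and the application needs net.tcp in its enabled-protocols list. With appcmd that looks roughly like the following (the site and application names are placeholders for whatever your setup uses, and the Net.Tcp Listener Adapter Windows service must also be running):

```bat
rem Add a net.tcp binding to the site (808:* is the conventional listener)
%windir%\system32\inetsrv\appcmd.exe set site "Default Web Site" ^
    -+bindings.[protocol='net.tcp',bindingInformation='808:*']

rem Enable net.tcp on the application that hosts the service
%windir%\system32\inetsrv\appcmd.exe set app "Default Web Site/Services" ^
    /enabledProtocols:http,net.tcp
```

The second command is the scripted equivalent of the Advanced Settings change described above.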
Why define a function in a source file, then declare it in another before using its reference? I found the following pattern in the DOOM source and I'm not sure what to make of it.

The definition (enemy.c):

// A nice definition, but there is no corresponding header file, kind of weird
void A_Pain (mobj_t* actor)
{
    if (actor->info->painsound)
        S_StartSound (actor, actor->info->painsound);
}

The declaration and use (info.c):

// Now it's redeclared, but without a parameter?
void A_Pain();
// ...
state_t states[NUMSTATES] = {
    // ...
    {SPR_PLAY,6,4,{A_Pain},S_PLAY,0,0},	// S_PLAY_PAIN2
    // ...
};

Why wouldn't a header file be used, when the rest of the code base uses them? Is there an advantage to this method? Why declare the functions in a different source file with different signatures?

There were days when you couldn't specify parameters in a function declaration.

The most likely explanation is that this is a mistake that wasn't caught in review. As long as the code calling A_Pain passes the right argument type, there won't be any runtime issues. Depending on the compiler and the warning levels, no diagnostic may have been issued, or it may have been "just" a warning which was ignored.

@KamilCuk: Doom is old, but not that old.

This is not good practice, I think. But based on my understanding of the C standard, when you declare a function like void A_Pain(); that function can take any number of parameters, which is different from C++. So in C you need to use void A_Pain(void); to say it has no parameters, whereas in C++, void A_Pain(); means that it takes no parameters.

The rest of the program is written with headers, except this part. Is there any conceivable advantage to this approach?

One advantage I can think of is that if the signature of A_Pain changed, you would only need to change enemy.c, not info.c. That being said, you would still need to recompile both.
@MarvinIrwin you would expect consistency inside a codebase, but you would be wrong most of the time :) In C, the declaration T f() does not declare a function f returning T and taking zero parameters; it declares a function f returning T and accepting an unspecified number of parameters. You can put function declarations in a source file even if the function definition is in a translation unit that your source file is not aware of, since joining it all together is done by the linker, not the compiler. Why wouldn't a header file be used? It doesn't make a real difference, header files are just textual inclusion anyway. It makes sense to put declarations in header files, especially if they're used in several places, but it's not required. As long as every compilation unit agrees on what the name means, things will work fine. Why declare the functions in a different source file with different signatures? They don't have different signatures; if they did, there would be a problem. But the declaration in info.c has no parameter list. This is allowed as long as "the parameter list [in the definition] shall not have an ellipsis terminator and the type of each parameter shall be compatible with the type that results from the default argument promotions" (N1256 <IP_ADDRESS>, which is probably the wrong version of the spec for application to Doom, but it doesn't really matter). In other words, if a function is declared without a signature, the number and types of its parameters will be inferred from how it's called. As long as this inference is correct, you have a valid program that will work correctly. It's just old-fashioned, and because of the lack of explicitness, a bit harder to maintain.
Within a few weeks of starting up a new blog, I'm already running out of problems to write about. So, in the increasingly popular meta fashion, this post is about my problem of coming up with new content. Coming up with content is not easy for me. Most of my posts tend to be around 1000 to 2000 words long, although some stories do end up longer, and rants have been as long as 40k before. Even though I have hundreds of different topics I would certainly LIKE to post about, the problem arises when trying to come up with something that is 1) not done to death by every other blogger, and 2) interesting and deep enough to write 1000 words on 3) without needing to spend weeks going down a rabbit hole of conspiracies (like my recent drafts, most of which will never see the light of day). Some posts, like an upcoming one on Greenland of all things, I expect to be significantly longer, but these have the issue of retention: to get a proper expression of perspective on a fairly specific topic, you need a lot more words, but too many words leads to boredom, and that's something I try to avoid here (If you want to pay money to be bored, go to a private school). I'm not even sure how I'm going to drag this out to a thousand words because, frankly, most things just aren't interesting enough to write that much about. Even this topic has been done to death by YouTubers and Streamers with access to GPT-3's content generation algorithm, so this post breaks all three of the fundamental rules. But the question becomes "why are those three things the criteria for what makes it to publication?". Fear not, an explanation is below. Rule 1 is simple enough to understand: site rankings. A website like Wikipedia, BBC, or a blog with a larger userbase, will rank higher given the exact same content and text, simply because they are considered by search engines to be "better" sites (i.e. more reputable). 
By picking topics that are less frequently explored, or more controversial, it's almost guaranteed that a big media outlet won't spend the effort to cover it. On the other hand, I don't make money off this blog anyway, so it doesn't matter how much time or energy I expend doing research for a post. As a result, it's better for me to target more niche or obscure topics which I have an interest (fixation) in, because I get better rankings and let's be honest, the only reason I have online platforms is to get validation from strangers. Rule 2 is actually for a very similar reason to rule 1. If a topic and perspective can be expressed in less than 280 characters, the top 20 results will all be from Twitter, Facebook, or any of the other major online social media platforms, because they have the CTR (click-through rate) and userbase for people to look for them. Anything that can be written about in 50-500 words is going to be a multi-post tweet, or a comments discussion, or a post on a news aggregation site like ArsTechnica or Reddit, or any of the billion other sites on the internet. My target space is that 800-1200 word void in which I can express a perspective on an issue while also being able to "objectively" explain the context and background of the issue, as well as my biases on the subject. Rule 3 ties into this: longer than 1200 words and you get into academic territory. I don't claim to be an expert on any of the topics I write about, most of my research is a few Google searches, some YouTube videos, and maybe a look through Google Scholar or JStor if I deem it important. Anyways, I can't drag this out any further, so I'm gonna cut it off at 700 words, one hundred short of the minimum and three hundred short of the target. Don't forget to like and subscribe.
// Honeycomb, Copyright (C) 2013 Daniel Carter.  Distributed under the Boost Software License v1.0.
#pragma once

#include "Honey/Core/Meta.h"

namespace honey
{

/// Class to hold compile-time finite rational numbers, ie. the fraction num / den.
template<int64 Num, int64 Den = 1>
struct Ratio
{
private:
    friend struct Ratio;
    template<class rhs> struct lessImpl;
public:
    static_assert(Den != 0, "Denominator can't be 0");

    static const int64 num = Num * mt::sign<Den>::value / mt::gcd<Num,Den>::value;
    static const int64 den = mt::abs<Den>::value / mt::gcd<Num,Den>::value;

    /// operator+
    template<class rhs>
    struct add
    {
    private:
        static const int64 gcd1 = mt::gcd<den, rhs::den>::value;
        static const int64 n = num * (rhs::den / gcd1) + rhs::num * (den / gcd1);
        static const int64 gcd2 = mt::gcd<n, gcd1>::value;
    public:
        typedef Ratio<n / gcd2, (den / gcd2) * (rhs::den / gcd1)> type;
    };

    /// operator-
    template<class rhs>
    struct sub
    {
        typedef typename add<Ratio<-rhs::num, rhs::den>>::type type;
    };

    /// operator*
    template<class rhs>
    struct mul
    {
    private:
        static const int64 gcd1 = mt::gcd<num, rhs::den>::value;
        static const int64 gcd2 = mt::gcd<rhs::num, den>::value;
    public:
        typedef Ratio<(num / gcd1) * (rhs::num / gcd2), (den / gcd2) * (rhs::den / gcd1)> type;
    };

    /// operator/
    template<class rhs>
    struct div
    {
        static_assert(rhs::num != 0, "Divide by 0");
        typedef typename mul<Ratio<rhs::den, rhs::num>>::type type;
    };

    /// operator==
    template<class rhs> struct equal        : mt::Value<bool, num == rhs::num && den == rhs::den> {};
    /// operator!=
    template<class rhs> struct notEqual     : mt::Value<bool, !equal<rhs>::value> {};
    /// operator<
    template<class rhs> struct less         : mt::Value<bool, lessImpl<rhs>::value> {};
    /// operator<=
    template<class rhs> struct lessEqual    : mt::Value<bool, !rhs::template less<Ratio>::value> {};
    /// operator>
    template<class rhs> struct greater      : mt::Value<bool, rhs::template less<Ratio>::value> {};
    /// operator>=
    template<class rhs> struct greaterEqual : mt::Value<bool, !less<rhs>::value> {};

    /// Get common ratio between this type and rhs
    template<class rhs>
    struct common
    {
    private:
        static const int64 gcdNum = mt::gcd<num, rhs::num>::value;
        static const int64 gcdDen = mt::gcd<den, rhs::den>::value;
    public:
        typedef Ratio<gcdNum, (den / gcdDen) * rhs::den> type;
    };

private:
    template<class lhs, class rhs> struct lessCmpFrac;

    template<class rhs,
             int64 q1 = num / den,
             int64 q2 = rhs::num / rhs::den,
             bool eq = q1 == q2>
    struct lessCmpWhole;

    /// Compare the signs of the ratios.  Default case: both ratios are positive, do whole test.
    template<class rhs,
             bool = (num == 0 || rhs::num == 0 || (mt::sign<num>::value != mt::sign<rhs::num>::value)),
             bool = (mt::sign<num>::value == -1 && mt::sign<rhs::num>::value == -1)>
    struct lessCmpSign : lessCmpWhole<rhs> {};

    /// One ratio is negative, trivial comparison
    template<class rhs>
    struct lessCmpSign<rhs, true, false> : mt::Value<bool, (num < rhs::num)> {};

    /// Both ratios are negative, test positive wholes
    template<class rhs>
    struct lessCmpSign<rhs, false, true> : Ratio<-rhs::num, rhs::den>::template lessCmpWhole<Ratio<-num, den>> {};

    /// Private implementation
    template<class rhs> struct lessImpl : lessCmpSign<rhs> {};

    /// Compare the whole parts.  Default case: they are equal, compare the fractional parts
    template<class rhs, int64 q1, int64 q2, bool eq>
    struct lessCmpWhole : lessCmpFrac<Ratio<num % den, den>, Ratio<rhs::num % rhs::den, rhs::den>> {};

    /// Whole parts not equal, trivial comparison
    template<class rhs, int64 q1, int64 q2>
    struct lessCmpWhole<rhs, q1, q2, false> : mt::Value<bool, (q1 < q2)> {};

    /// Test fractional parts.  Make fractional whole by inverting, then do recursive whole test Den2/Num2 < Den1/Num1
    template<class lhs, class rhs>
    struct lessCmpFrac : Ratio<rhs::den, rhs::num>::template lessCmpWhole<Ratio<lhs::den, lhs::num>> {};

    /// Fractional recursion end, Num1 != 0, Num2 == 0
    template<class lhs, int64 Den2>
    struct lessCmpFrac<lhs, Ratio<0, Den2>> : mt::Value<bool, false> {};

    /// Fractional recursion end, Num1 == 0, Num2 != 0
    template<int64 Den1, class rhs>
    struct lessCmpFrac<Ratio<0, Den1>, rhs> : mt::Value<bool, true> {};

    /// Fractional recursion end, Num1 == 0, Num2 == 0
    template<int64 Den1, int64 Den2>
    struct lessCmpFrac<Ratio<0, Den1>, Ratio<0, Den2>> : mt::Value<bool, false> {};
};

/// Ratio types
namespace ratio
{
    typedef Ratio<1, 1000000000000000000>   Atto;
    typedef Ratio<1, 1000000000000000>      Femto;
    typedef Ratio<1, 1000000000000>         Pico;
    typedef Ratio<1, 1000000000>            Nano;
    typedef Ratio<1, 1000000>               Micro;
    typedef Ratio<1, 1000>                  Milli;
    typedef Ratio<1, 100>                   Centi;
    typedef Ratio<1, 10>                    Deci;
    typedef Ratio<1, 1>                     Unit;
    typedef Ratio<10, 1>                    Deca;
    typedef Ratio<100, 1>                   Hecto;
    typedef Ratio<1000, 1>                  Kilo;
    typedef Ratio<1000000, 1>               Mega;
    typedef Ratio<1000000000, 1>            Giga;
    typedef Ratio<1000000000000, 1>         Tera;
    typedef Ratio<1000000000000000, 1>      Peta;
    typedef Ratio<1000000000000000000, 1>   Exa;
}

}
package main

//import "C"

import (
	"runtime"
	"time"

	"github.com/veandco/go-sdl2/sdl"
	img "github.com/veandco/go-sdl2/sdl_image"
	mix "github.com/veandco/go-sdl2/sdl_mixer"
	ttf "github.com/veandco/go-sdl2/sdl_ttf"
)

const (
	StateRun = iota
	StateFlap
	StateDead
)

// Game
// base game class modeled after Microsoft.Xna.Game
type Game struct {
	Title    string
	Width    int
	Height   int
	Window   *sdl.Window
	Renderer *sdl.Renderer
	Delta    float64
	Mode     int
	running  bool
	err      error
}

// IGame
// provide implementation for Game subclass
type IGame interface {
	Initialize()
	LoadContent()
	Update(delta float64)
	Draw(delta float64)
}

// Running
// check if loop is running
func (this *Game) Running() bool {
	return this.running
}

// Start the main loop
func (this *Game) Start() {
	this.running = true
}

// Quit quits the main loop
func (this *Game) Quit() {
	this.running = false
}

// Initialize the game
func (this *Game) Initialize() {
	// runtime.LockOSThread()
	this.err = sdl.Init(sdl.INIT_VIDEO | sdl.INIT_AUDIO)
	if this.err != nil {
		sdl.LogError(sdl.LOG_CATEGORY_APPLICATION, "Init: %s\n", this.err)
		return
	}
	//mix.INIT_OGG
	this.err = mix.Init(this.Mode)
	if this.err != nil {
		return
	}
	this.err = mix.OpenAudio(mix.DEFAULT_FREQUENCY, mix.DEFAULT_FORMAT, mix.DEFAULT_CHANNELS, 3072)
	if this.err != nil {
		return
	}
	this.err = ttf.Init()
	if this.err != nil {
		return
	}
	this.Window, this.err = sdl.CreateWindow(this.Title, sdl.WINDOWPOS_UNDEFINED, sdl.WINDOWPOS_UNDEFINED,
		this.Width, this.Height, sdl.WINDOW_SHOWN)
	if this.err != nil {
		return
	}
	this.Renderer, this.err = sdl.CreateRenderer(this.Window, -1,
		sdl.RENDERER_ACCELERATED|sdl.RENDERER_PRESENTVSYNC)
	if this.err != nil {
		return
	}
}

// Destroy the game
func (this *Game) Destroy() {
	this.Renderer.Destroy()
	this.Window.Destroy()
	sdl.Quit()
}

// Run the game loop
// Injects the subclass implementation
func (this *Game) Run(subclass IGame) {
	var lastTime float64
	var curTime float64
	subclass.Initialize()
	subclass.LoadContent()
	lastTime = float64(time.Now().UnixNano()) / 1000000.0
	for this.Running() {
		switch sdl.PollEvent().(type) {
		case *sdl.QuitEvent:
			this.Quit()
		}
		curTime = float64(time.Now().UnixNano()) / 1000000.0
		this.Delta = curTime - lastTime
		lastTime = curTime
		subclass.Update(this.Delta)
		runtime.GC()
		subclass.Draw(this.Delta)
	}
}

// LoadTexture from path
func (this *Game) LoadTexture(path string) (texture *sdl.Texture) {
	var err error
	texture, err = img.LoadTexture(this.Renderer, path)
	if err != nil {
		sdl.LogError(sdl.LOG_CATEGORY_APPLICATION, "Load Texture: %s\n", err)
	}
	return
}

// LoadFont from path
func (this *Game) LoadFont(path string, size int) (font *ttf.Font) {
	var err error
	font, err = ttf.OpenFont(path, size)
	if err != nil {
		sdl.LogError(sdl.LOG_CATEGORY_APPLICATION, "Load Font: %s\n", err)
	}
	return
}

// LoadMusic from path
func (this *Game) LoadMusic(path string) (music *mix.Music) {
	var err error
	music, err = mix.LoadMUS(path)
	if err != nil {
		sdl.LogError(sdl.LOG_CATEGORY_APPLICATION, "Load Music: %s\n", err)
	}
	return
}

// LoadSound from path
func (this *Game) LoadSound(path string) (sound *mix.Chunk) {
	var err error
	sound, err = mix.LoadWAV(path)
	if err != nil {
		sdl.LogError(sdl.LOG_CATEGORY_APPLICATION, "Load Sound: %s\n", err)
	}
	return
}
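The Game/IGame split above is Go's usual substitute for subclassing: a concrete game embeds Game and passes itself to Run, which drives it through the interface. A minimal, SDL-free sketch of the same embed-and-inject pattern is shown below; every name in it (Loop, Client, FlappyGame) is hypothetical and chosen only to illustrate the structure, not part of the framework:

```go
package main

import "fmt"

// Loop is a stripped-down stand-in for Game: it owns the main loop
// and delegates per-frame work to an injected implementation.
type Loop struct {
	running bool
	frames  int
}

// Client mirrors IGame: the "subclass" the loop drives.
type Client interface {
	Initialize()
	Update(delta float64)
}

func (l *Loop) Start() { l.running = true }
func (l *Loop) Quit()  { l.running = false }

// Run drives the injected Client, just as Game.Run drives an IGame.
func (l *Loop) Run(c Client) {
	c.Initialize()
	for l.running {
		c.Update(1.0 / 60.0) // fixed fake delta for the sketch
		l.frames++
		if l.frames >= 3 {
			l.Quit() // stop after a few frames so the sketch terminates
		}
	}
}

// FlappyGame embeds Loop, the same way a real game would embed Game.
type FlappyGame struct {
	Loop
	updates int
}

func (g *FlappyGame) Initialize()          { g.Start() }
func (g *FlappyGame) Update(delta float64) { g.updates++ }

func main() {
	g := &FlappyGame{}
	g.Run(g) // the embedded Loop drives the outer type via the interface
	fmt.Println(g.updates) // prints 3
}
```

The key design point carried over from the framework: Run never knows the concrete game type, so the same loop code can drive any game that satisfies the interface.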
Rachel's Guide to Exams

Planning is everything (that is why I'm posting this so far ahead of time, to stress the importance of planning). Exams can be very overwhelming. What works for me is to take a step back, figure out everything that is going to be on each exam, and then break that load down. This means taking the material and breaking it into units. I do this because it makes the huge load of material seem less overwhelming. I also like to use exam calculators. Here is an example of one: https://rogerhub.com/final-grade-calculator/. It has you type in your current grade, the percent weight of your exam, and your goal ending grade. Then, it will tell you the lowest grade you can get on the exam and still end with your goal grade. I do this for all of my classes to get an outline of what I need to do to reach my goal grades and which classes to prioritize when studying.

I also really take advantage of the weekends before exams. I like to make a schedule of which classes I'm going to work on at what time. On top of my schedule, I also like to make a specific list of goals (what work for each class) I want to get done that day. I add in times for breaks too. I usually dislike schedules for work and have a hard time following them. However, the more specific I make them, the more effective they are for me. Overall, planning and working ahead is key to acing exams.

To study for exams, I alter my notes a little. Instead of adding new notes on the same material into my journals, I write in-depth notes for all my classes on copy paper. Over Thanksgiving break, I started to do a little exam prep and studying. It is such a rewarding feeling to finish one page of notes and lay it aside to start another. By the time exams come, I will have huge piles of notes covering every topic and unit. The thing about exams is that you already know the basics of the information, but you need to refresh yourself on the depth of information you used to be able to recall.
I get to this point by rewriting the information in my own words, which helps me make sense of it. I like to use the notes I originally took to study for the unit as a guide. However, when studying for exams I like to rewrite my notes as if I'm studying for a regular test, by going back to the specific sources I used, such as textbooks and teacher handouts.

I want to end this blog post with some final thoughts. When I take a test, my goal is to know the material so well and in such depth that I would be able to teach it to people who know nothing about it. In order to do this, you have to figure out what means will get you there: how you will study and what time you will put in. Exams are no different, even though the means to get to that goal may be more time-consuming. I also like to think that failure is a great learning opportunity. Now I know that writing out fun notes is the study strategy that works best for me. However, it took trying other strategies and not getting the results I wanted to get there. For me, the first nine weeks of school are about figuring out what work you need to do to succeed in each teacher's class. In order to truly succeed, you need to take what you did wrong and alter your thinking and work until it is right. If you're not getting the grades you want, studying the same way won't get you the grades you want. This is a good way to think, especially as exams are coming up. Take the time to look back on old test grades and ask why you did badly on one test and well on another. Reflecting on what works and what doesn't may be just what you need to ace your exams. Thanks for reading! Can't wait for you to ace your exams!
In the ever-evolving landscape of biotechnology, where innovation knows no bounds, the fusion of artificial intelligence (AI) and virtual reality (VR) promises to be a game-changer. As technology enthusiasts and industry insiders eagerly await the dawn of this new era, it's crucial to understand the pivotal role AI will play in shaping VR applications in the years to come. In this exploration, we delve into the essence of artificial intelligence, distinguish it from algorithms, and draw distinctions with text-based Large Language Models (LLMs).

The Essence of Artificial Intelligence

At its core, artificial intelligence refers to the development of computer systems capable of performing tasks that typically require human intelligence. These tasks encompass a wide spectrum of activities, ranging from speech recognition and visual perception to problem-solving and language comprehension. AI operates on the premise of machine learning, which allows systems to learn from data and improve their performance over time. Machine learning can be further subdivided by the methodologies used for training, such as reinforcement learning (a method based on rewarding desired behaviors and punishing undesired ones) or supervised learning (in which a machine predicts an output based on its learning from known datasets, such as a set of training examples).

AI's integration into VR applications is poised to revolutionize the biotech sector by enhancing our capacity to simulate, analyze, and manipulate biological systems (for reading on AI's impact on Augmented Reality, see last week's post here). This transformation will empower scientists, researchers, and medical professionals to explore intricate biological processes, model complex molecular interactions, and develop novel treatments for diseases. The synergy between AI and VR is set to unlock new frontiers in biotech, as they offer an immersive and interactive platform for visualizing and interacting with biological data.

AI Algorithms vs.
Text-Based LLMs

Before we dive deeper into the marriage of AI and VR, let's first distinguish between AI algorithms and text-based Large Language Models (LLMs). While both are components of artificial intelligence, they serve very distinct purposes.

AI Algorithms: These are computational routines designed to solve specific problems or tasks. For example, an algorithm may be created to identify patterns in genetic sequences, classify cells in a microscope image, or predict the potential side effects of a new drug. AI algorithms rely on data-driven processes and are tailored for domain-specific tasks, making them invaluable tools in biotechnology.

Text-Based LLMs: On the other hand, LLMs like GPT-4 are language models that excel in natural language understanding and generation. They analyze vast amounts of text data to learn the patterns, context, and semantics of human language. LLMs can be used for various applications, including chatbots, content generation, and language translation, but their primary strength lies in processing and generating text. While chatbots might seem clunky and obvious to us now, the latest generation of AI-driven NPCs (non-player characters) integrated into virtual experiences are proving increasingly difficult to detect. An Israeli startup ran an online version of the famous Turing Test (a test named after computer scientist Alan Turing that challenges participants to judge whether they're talking to a real human or not), scraping data from over 15 million (!) conversations; see the results – and try the test out for yourself – here.

In the context of AI-enhanced VR applications in biotech, AI algorithms take center stage. These algorithms are custom-built to tackle the unique challenges of biotechnology, such as protein folding prediction (check out the amazing work being done on project AlphaFold at DeepMind), drug discovery, and genome analysis.
They harness the power of AI to extract meaningful insights from biological data, offering precision and efficiency that generic LLMs cannot match.

AI-Enhanced VR in Biotech: A Glimpse into the Future

The convergence of AI algorithms and VR technology holds immense promise for the biotech sector. Here's how these two forces will come together to shape the future:

- Immersive Drug Discovery: AI-driven VR environments will enable researchers to explore and interact with molecular structures, facilitating drug discovery by visualizing potential interactions and predicting drug efficacy more accurately.
- Medical Training: Healthcare professionals will benefit from realistic VR simulations powered by AI algorithms. Surgeons, for instance, can practice complex procedures in a risk-free environment, honing their skills and reducing medical errors.
- Data Visualization: AI-enhanced VR will transform complex biological datasets into immersive visualizations. Researchers can navigate through intricate genetic networks and gain a deeper understanding of genetic variations linked to diseases.
- Collaborative Research: VR-powered collaboration platforms will enable scientists from around the world to work together in virtual laboratories, accelerating the pace of discovery and innovation.
- Patient Engagement: Virtual reality experiences enhanced by AI will aid in patient education and engagement. Patients can explore their own health data in a more accessible and comprehensible format.

In conclusion, the marriage of artificial intelligence and virtual reality represents a compelling frontier in biotech. As AI algorithms continue to evolve and integrate seamlessly with VR technology, we can anticipate breakthroughs that will reshape the way we understand, study, and manipulate the biological world.
While text-based LLMs like ChatGPT excel in language processing, it is AI algorithms that will drive the transformation of biotechnology by powering immersive, data-rich VR experiences that hold the potential to revolutionize medicine, research, and education in the years to come.
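The supervised-learning idea described earlier (fit a model to labeled examples, then predict labels for new inputs) can be made concrete with a deliberately tiny sketch. The nearest-centroid classifier below is the editor's illustration, not a method from the article, and the "GC content / sequence length" features and class names are invented toy data:

```go
package main

import (
	"fmt"
	"math"
)

// trainCentroids "learns" one centroid per label from labeled examples --
// the essence of supervised learning: fit a model to known data.
func trainCentroids(features [][]float64, labels []string) map[string][]float64 {
	sums := map[string][]float64{}
	counts := map[string]float64{}
	for i, f := range features {
		l := labels[i]
		if sums[l] == nil {
			sums[l] = make([]float64, len(f))
		}
		for j, v := range f {
			sums[l][j] += v
		}
		counts[l]++
	}
	for l, s := range sums {
		for j := range s {
			s[j] /= counts[l] // average the examples of each class
		}
	}
	return sums
}

// predict assigns the label of the nearest centroid (squared distance).
func predict(centroids map[string][]float64, f []float64) string {
	best, bestDist := "", math.Inf(1)
	for l, c := range centroids {
		d := 0.0
		for j := range f {
			d += (f[j] - c[j]) * (f[j] - c[j])
		}
		if d < bestDist {
			best, bestDist = l, d
		}
	}
	return best
}

func main() {
	// Hypothetical "GC content / length" features for two made-up classes.
	features := [][]float64{{0.2, 1.0}, {0.3, 1.1}, {0.8, 3.0}, {0.7, 2.9}}
	labels := []string{"promoter", "promoter", "coding", "coding"}
	model := trainCentroids(features, labels)
	fmt.Println(predict(model, []float64{0.25, 1.05})) // prints "promoter"
}
```

Real biotech pipelines use far richer models, but the train-then-predict shape is the same, which is what distinguishes these domain-specific algorithms from a general-purpose text model.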
Home Screen Install utility - There are many home screen themes on the net that you can freely download or purchase. Installing them on your phone normally requires a file explorer. Our utility searches out the installers on your phone, lists them, and, with a push of a soft key, installs them.

1) Download and install our free Home Screen Install utility. When you run this utility on your phone, it will search out and list all uninstalled home screens (".hme" files).

2a) Copy the ".hme" file across to the SmartPhone. Place the ".hme" file in \IPSM\My Documents on your Smartphone.

2b) Using Home Screen Install on your phone, select the .hme file you wish to install. Everything will be installed and your home screen should be active immediately.

Supported operating systems: Windows Mobile 2003 Smartphone, Windows Mobile 5.0 Smartphone, Windows Mobile 6 Standard, Windows Mobile 6.1 Standard, Windows Mobile 6.5 Standard

StyleWM
StyleWM is a program for creating home themes for SmartPhone. You can create your own home layout the way you want.

WorldTime
WorldTime 5.0 displays the current date and time, including daylight saving time, in 58,000 locations across the world. Designed to run on Windows Mobile 5.x and 6.x devices, supporting all resolutions. The finger-friendly UI enables easy navigation on touch devices, and button navigation on non-touch devices.

SmartScheme
SmartScheme is a color scheme maker and color scheme editor that you can use directly on your Smartphone SP 2003 devices. Choosing a specific color is simplified via a built-in palette, or advanced users can specify the RGB components of the colors they want to use.

Homescreen Designer 2007
Homescreen Designer 2007 for WM 5.0/WM 6.0 for Smartphone. What is Homescreen Designer? With Homescreen Designer, you can design your own powerful home screens for Smartphones without any XML knowledge. It's pure What You See Is What You Get!

Theme Changer
Theme Changer - A free utility to change themes with the system screen, common in .cab themes. Also allows browsing backgrounds and changing backgrounds automatically.

Other software by developer «Golden Crater Software»:

Doberman BMS
Doberman Security and Automation has, as the name suggests, two major parts of functionality.

Tiny eBook Reader
Tiny eBook Reader - Install and remove from your device directly from Desktop reader+. Tiny eBook Reader reads books in several formats, is lightning fast and highly configurable, and can read books of any size.
What to do if my paper is incorrectly cited in a journal?

Recently my work was cited in a research paper where the author cited it as [first name] et al., when it should have been [last name] et al. Is there any problem for indexing purposes? Should I contact the journal editor about this and ask for a correction?

How is the paper referred to in the bibliography? If it is wrong there, you should contact the editor, as most indexing services won't pick it up. You should also avoid publishing in that journal, as the referees don't seem to verify the references.

Yes, that's a problem, and you will absolutely want to fix it. The catch is that it's actually a two-stage problem to correct: First, you will need to contact the journal to fix the citation. Then, after the citation has been corrected in the article, you'll need to submit a correction request to the various citation trackers (such as ISI and Scopus). They will need to fix your citation in their database, if the article has already been entered. Note: It may or may not be possible to correct the journal—the editors may or may not be willing to issue a correction to fix a reference. However, it may be possible to correct the reference with the citation indices, even if it's not corrected in the journal. In that case the road will be tougher; you'll need to show that the paper that should be cited is indisputably yours. Can you do that?

Maybe this is field dependent, but I've never heard of anyone correcting an error like this in a citation in a math paper that has already been published. (My feeling is that you cannot change a published paper at all without issuing a formal correction or retraction, and while an error of this sort would be unfortunate, it wouldn't be sufficiently serious to justify publishing a correction, since the bibliography would still unambiguously identify the paper being cited.) But maybe it happens in mathematics and I'm just not aware of it.
It is possible to correct a citation in the citation indices; it may or may not be possible to edit the published paper itself. However, if there's a clear clerical error, it was incumbent on the journal to catch and correct it before publication. I can attest that it is possible to correct citations in the online indices, because I've done it myself.

I did something similar when a paper cited a preprint of mine. I contacted MathSciNet and told them where the paper had appeared and asked them to update their reference. I was pleasantly surprised by how quick and painless the process was.

That reminds me—a well-known Handbook incorrectly cites a paper of mine and misspells my name. I had not thought of requesting a correction in the online indices.

Hmm, if it's just the in-text citation that is wrong, but the entry in the reference section, and more importantly in the article metadata, is correct, then it's just a minor annoyance.

In addition to the earlier answer, you may also want to email the authors of the paper and inform them of their error, providing the correct bibliographic information, to prevent the problem recurring in the future.
Triad decompiler version 0.4 Alpha Test. Not intended to be used for copyright infringement or other illegal activities.

What is triad: TRiad Is A Decompiler. Triad is a tiny, free and open source, Capstone-based x86 decompiler that will take in ELF files as input and spit out pseudo-C.

Installation: Triad requires Capstone to be installed first: http://www.capstone-engine.org/ For 32-bit tests, gcc-multilib is also required. First, it will be necessary to build triad; "make triad" should suffice. After its components are built, the triad binary will be placed in the build directory. To copy the binary into /usr/bin, simply use "sudo make install".

Usage: triad <flags> <file name> <(optional) start address> <(optional) cutoff address>

Simply run the triad binary from the command line and specify an ELF to decompile as a parameter. By default, triad will try to find the main function of the given file and start decompiling from there. Sometimes ELFs have all symbols stripped, so triad will be unable to find main. In such a scenario, the user may simply specify a starting address as the second command line parameter. However, an incorrect starting address will likely result in incorrect decompilation or no decompilation at all. Occasionally it is ambiguous where a function actually ends. If users think they know better than triad where a particular function ends and have specified a start address, they can also specify a cutoff address. The default cutoff address is the end of the segment containing the entry point. Triad has the ability to follow function calls and automatically decompile callees.
This is especially helpful when dealing with stripped binaries or other binaries in which relevant code isn't clearly distinguishable from data.

Flags:
-f: Full decompilation. This is the default.
-p: Partial decompilation. Recovered control flow is always going to be bad, so Triad has an option to only partially decompile code. This means Triad will identify variables and parameters, try to recover the calling convention, and translate most instructions back into their C operator equivalents, but Triad will leave jumps and comparisons as is, with the philosophy that the user knows best how to follow them.
-d: Disassemble. Make no attempt to decompile code; simply print out a disassembly in AT&T syntax.
-s: Disable call following; just decompile main/whatever code was at the specified address.
-h: Print all constants in hexadecimal format.

Limitations (PLEASE READ BEFORE SUBMITTING A BUG REPORT):

Triad really only works on x86 and x86_64 ELF executables. Other architectures may be possible in the future, but there are currently no plans to add them. The triad decompiler is still very much an alpha. The project is nowhere near completion and as such is missing some critical features, contains numerous bugs, has several odd quirks, and has a propensity for segfaulting. Missing features include support for switch decompilation and full support for strings and statically allocated arrays (dynamically allocated arrays will actually probably work to one degree or another, but the syntax will be most unusual, e.g. *(char*)(eax + (12)) = 96 instead of array = 'a'). Struct analysis will be a long way away as well, and unions may never work properly. The only binary format currently supported is the Executable and Linkable Format (ELF), commonly used on UNIX-like systems such as Linux. Control flow decompilation should be mostly correct, but it may look funky. Continues and forward gotos inside of conditional statements might wind up as if-else statements.
This is actually semantically equivalent, just different from the original source. Optimization and computed jumps will probably cause a program to be decompiled completely incorrectly. Triad was designed and tested for programs compiled using gcc. It is important to understand that the generated source code will NEVER be exactly the original source (unless the program was compiled with debug symbols, of course). If triad segfaults on you, feel free to tell me. Include a stack trace and a description of the conditions that triggered the crash if at all possible. For obvious reasons, it is quite important that triad crash as little as possible.

"Hacking"/Modding notes: I will be honest, the code is a bit of a mess. It is a short mess, probably less than 2 KLOC, but the amount of pointer arithmetic and the number of globals used is not for the faint of heart. That said, feel free to "hack" in features! The license is just MIT, so do whatever. Feel free to contact me if you have any questions about how the code works or think you have a cool feature that should be merged into the codebase. I tried to document the source, but I'm sure certain lines will leave many programmers confused and/or horrified. My email is just firstname.lastname@example.org
For example, an employee may choose a new technology that hasn't been road tested enough in the wild, and later that technology falls apart under heavy production load. Another example is someone writing code for a particular function without knowing that code already exists in a shared library written by another team, reinventing the wheel and making maintenance and updates more challenging in the future. On larger teams, one of the common places these knowledge gaps exist is between teams or across disciplines: for example, when someone in operations creates a Band-Aid in one area of the system (like repetitively restarting a service to fix a memory leak) because the underlying issue is just too complex to diagnose and fix (the person doesn't have enough understanding of the running code to fix the leaky resources). Every day, people are making decisions with imperfect knowledge. The real question is, how can you close the knowledge gaps and leverage your team to make better decisions? Here are a few strategies that can help your team work better, and in turn help you create better software. While none of these strategies is a new idea, they are all great reminders of ways to improve your teams and processes.

Define how you will work together. Whether you are creating an API or consuming someone else's data, having a clearly defined contract is the first step toward a good working relationship. When you work with another service it is important to understand the guardrails and best practices for consuming that service. For example, you should establish the payload maximums and discuss the frequency and usage guidelines. If for some reason the existing API doesn't meet your needs, then instead of just working around it, talk about why it isn't working and collaboratively figure out the best way to solve the problem (whether that would be updating the API or leveraging a caching strategy). The key here is communication.

Decide how you will test the whole system.
One of the most important strategies is to think about how you will truly test the end-to-end functionality of a system. Having tests that investigate only your parts of the system (like the back-end APIs) but not the end-customer experience can result in uncaught errors or issues (such as my opening example of caching). The challenge then becomes, who will own these tests? And who will run these tests and be responsible for handling failures? You may not want tests for every scenario, but certainly the most important ones are worth having.

When bugs happen, work together to solve them. When problems arise, try to avoid solutions that only mask the underlying issue. Instead, work together to figure out what the real cause of the problem is, and then make a decision as a team on the best way of addressing it going forward. This way the entire team can learn more about how the systems work, and everyone involved will be informed of any potential Band-Aids.

Use versioning. When another team consumes something you created (an API, a library, a package), versioning is the smartest way of making updates and keeping everyone on the same page with those changes. There is nothing worse than relying on something and having it change underneath you. The author may think the changes are minor or innocuous, but sometimes those changes can have unintended consequences upstream. By starting with versions, it is easy to keep everyone in check and predictably manage change.

Create coding standards. Following standards can be really helpful when it comes to code maintenance. When you depend on someone else and have access to that source code, being able to look at it, and know what you are looking at, can give you an edge in understanding, debugging, and integration.
Similarly, in situations where styles are inherited and reused throughout the code, having tools like a style guide can help ensure that the user interfaces look consistent, even when different teams throughout the company develop them.

Do code reviews. One of the best ways of bridging knowledge gaps on a team is to encourage sharing among team members. When other members review and give feedback, they learn the code, too. This is a great way of spreading knowledge across the team.

Of course, the real key to great software architecture for a system developed by lots of different people is to have great communication. You want everyone to talk openly to everyone else, ask questions, and share ideas. This means creating a culture where people are open and have a sense of ownership, even for parts of the system they didn't write.

Kate Matsudaira (katemats.com) is the founder of her own company, Popforms. Previously she worked in engineering leadership roles at companies like Decide (acquired by eBay), Moz, Microsoft, and Amazon.

© 2016 ACM 0001-0782/16/09 $15.00