https://www.timeforkids.com/k1/topics/culture/
TIME for Kids | Culture | Topic | K-1

Speaking with Signs (Culture / World, October 8, 2025)
Some people speak with their mouth. Others speak with their hands. That is known as sign language. It is a way to communicate. People everywhere use sign language. Hand Talk Some people do not have the sense of hearing. They…

Spell It Out (World, October 8, 2025)
Sign language helps people communicate. The United States uses American Sign Language. Its alphabet is below. Use the letters to figure out this message.

Rice for Everyone (World, December 13, 2024)
People all over the world eat rice. Countries have their own rice dishes. Here are five dishes from different parts of the world. Which would you like to try? Risotto is from Italy. Broth is slowly added to rice as…

Setting the Table (World, December 13, 2024)
People use different tools to eat with. Here are a few of them. Which do you use most often? Chopsticks (above) work together to grab food. They were developed in China about 5,000 years ago. These are being used to…

What Is Culture? (World, December 6, 2024)
A culture is what a group of people have in common. They might speak the same language and eat the same food. They might have the same beliefs. Here are some of the things that make up a culture. Food…

Say Hello! (World, December 6, 2024)
Language is a part of culture. Here are some common languages. Learn a way to greet someone in each language.

Happy Birthday! (World, December 14, 2022)
Kids everywhere celebrate birthdays. But not everyone celebrates the same way. Here are five traditions. Piñatas are popular in Mexico. A piñata is filled with candy. Kids take turns hitting it with a stick. The piñata breaks…

Spring Festival (World, February 11, 2022)
Spring brings warmth and celebration. Here is how people celebrate spring around the world. Spain Las Fallas is a festival of fire. People build giant floats. They parade them through the streets. Then the floats are burned in bonfires. India…

Get Active! (Community, February 11, 2022)
Some people work to protect the environment. Others want to make the world more fair. What is important to you? Here are some ways to make a difference. Organize a Group In a group, people can share ideas. They work…

Express Yourself (Arts, February 11, 2022)
Art is a way to express yourself. Artists use different materials. Learn about some artists and the tools they use. Some artists use paint. Painters use brushes and paint to make pictures. Some paintings are of people or places or…

© 2026 TIME USA, LLC. All Rights Reserved.
https://llvm.org/doxygen/Option_8cpp.html#a04665169063c8ca1f2ea96c27fc7c2b2
LLVM: lib/Option/Option.cpp File Reference
LLVM 22.0.0git · lib / Option

#include "llvm/Option/Option.h"
#include "llvm/ADT/StringRef.h"
#include "llvm/ADT/Twine.h"
#include "llvm/Config/llvm-config.h"
#include "llvm/Option/Arg.h"
#include "llvm/Option/ArgList.h"
#include "llvm/Option/OptTable.h"
#include "llvm/Support/Compiler.h"
#include "llvm/Support/Debug.h"
#include "llvm/Support/ErrorHandling.h"
#include "llvm/Support/raw_ostream.h"
#include <cassert>

Go to the source code of this file.

Macros

#define P(N)

Macro Definition Documentation

◆ P

#define P(N)
Value: case N: O << #N; break
AnalysisManagerT, ExtraArgTs >::run() , llvm::WasmEHPreparePass::run() , llvm::DroppedVariableStatsIR::runAfterPass() , llvm::DroppedVariableStatsIR::runBeforePass() , llvm::orc::LocalCXXRuntimeOverridesBase::runDestructors() , llvm::LPPassManager::runOnFunction() , llvm::RGPassManager::runOnFunction() , llvm::MachineFunction::salvageCopySSAImpl() , llvm::StringSaver::save() , llvm::RegScavenger::scavengeRegisterBackwards() , llvm::SCEVUnionPredicate::SCEVUnionPredicate() , llvm::PMTopLevelManager::schedulePass() , llvm::SDPatternMatch::sd_context_match() , llvm::SDPatternMatch::sd_context_match() , llvm::SDPatternMatch::sd_match() , llvm::SDPatternMatch::sd_match() , llvm::SDPatternMatch::sd_match() , llvm::SDPatternMatch::sd_match() , llvm::search() , llvm::RISCVDAGToDAGISel::Select() , selectPartitionType() , llvm::RISCVDAGToDAGISel::selectVLSEG() , llvm::RISCVDAGToDAGISel::selectVLSEGFF() , llvm::RISCVDAGToDAGISel::selectVLXSEG() , llvm::RISCVDAGToDAGISel::selectVSSEG() , llvm::RISCVDAGToDAGISel::selectVSXSEG() , separateNestedLoop() , llvm::orc::shared::SPSSerializationTraits< SPSTuple< SPSTagT1, SPSTagT2 >, std::pair< T1, T2 > >::serialize() , llvm::FunctionLoweringInfo::set() , llvm::MachineFunctionProperties::set() , llvm::MipsABIFlagsSection::setAllFromPredicates() , llvm::MipsABIFlagsSection::setASESetFromPredicates() , llvm::setBranchProbability() , llvm::MipsABIFlagsSection::setCPR1SizeFromPredicates() , llvm::vfs::InMemoryFileSystem::setCurrentWorkingDirectory() , llvm::sampleprof::SampleProfileReader::setDiscriminatorMaskedBitFrom() , llvm::MipsABIFlagsSection::setFpAbiFromPredicates() , llvm::MipsABIFlagsSection::setGPRSizeFromPredicates() , llvm::MipsABIFlagsSection::setISAExtensionFromPredicates() , llvm::MipsABIFlagsSection::setISALevelAndRevisionFromPredicates() , llvm::PMTopLevelManager::setLastUser() , llvm::setLoopProbability() , llvm::VPIRMetadata::setMetadata() , llvm::VPBlockBase::setParent() , llvm::sframe::FDEInfo< E >::setPAuthKey() , 
llvm::CmpInst::setPredicate() , llvm::ScopedPrinter::setPrefix() , llvm::LineEditor::setPrompt() , llvm::msf::MSFBuilder::setStreamSize() , llvm::MCAsmParser::setTargetParser() , llvm::MIRParserImpl::setupRegisterInfo() , llvm::CallbackVH::setValPtr() , llvm::TrackingVH< ValueTy >::setValPtr() , llvm::MachO::shouldSkipSymLink() , shouldSplitOnPredicatedArgument() , simplifyCommonValuePhi() , simplifyGEPInst() , simplifyICmpWithMinMax() , simplifyOneLoop() , llvm::JumpThreadingPass::simplifyPartiallyRedundantLoad() , sinkMinMaxInBB() , llvm::orc::shared::SPSSerializationTraits< SPSTuple< SPSTagT1, SPSTagT2 >, std::pair< T1, T2 > >::size() , skipIfAtLineEnd() , llvm::MachineBasicBlock::SplitCriticalEdge() , llvm::MachineBasicBlock::SplitCriticalEdge() , llvm::MachineBasicBlock::SplitCriticalEdge() , llvm::SplitKnownCriticalEdge() , llvm::SrcOp::SrcOp() , llvm::cas::ondisk::OnDiskGraphDB::store() , llvm::BitTracker::subst() , llvm::SwingSchedulerDAG::SwingSchedulerDAG() , llvm::mustache::Template::Template() , llvm::OpenMPIRBuilder::tileLoops() , llvm::TimerGroup::TimerGroup() , llvm::SDPatternMatch::TLI_pred_match() , llvm::to_address() , llvm::to_address() , llvm::SymbolTableListTraits< ValueSubClass, Args >::toPtr() , llvm::ConvergingVLIWScheduler::traceCandidate() , llvm::GenericSchedulerBase::traceCandidate() , llvm::TrackingVH< ValueTy >::TrackingVH() , slpvectorizer::BoUpSLP::transformNodes() , llvm::orc::DynamicLibrarySearchGenerator::tryToGenerate() , llvm::xray::TypedEventRecord::TypedEventRecord() , typeIsLegalBoolVec() , typeIsLegalIntOrFPVec() , typeIsLegalPtrVec() , llvm::unique() , llvm::unwrap() , unwrap() , unwrap() , unwrap() , llvm::MipsTargetStreamer::updateABIInfo() , llvm::VFShape::updateParam() , llvm::ScheduleDAGMILive::updatePressureDiffs() , llvm::updateVCallVisibilityInIndex() , llvm::cas::ondisk::useSmallMappingSize() , llvm::object::DirectX::PSVRuntimeInfo::usesViewID() , llvm::yaml::MappingTraits< ArchYAML::Archive::Child >::validate() , 
llvm::X86::validateCPUSpecificCPUDispatch() , valueDominatesPHI() , llvm::GraphTraits< ValueInfo >::valueInfoFromEdge() , llvm::DenseMapBase< DenseMap, KeyT, ValueT, KeyInfoT, BucketT >::values() , llvm::DenseMapBase< DenseMap, KeyT, ValueT, KeyInfoT, BucketT >::values() , llvm::SDPatternMatch::ValueType_match() , llvm::PMDataManager::verifyPreservedAnalysis() , llvm::RegionBase< RegionTraits< MachineFunction > >::verifyRegion() , llvm::Error::visitErrors , llvm::InstCombinerImpl::visitFNeg() , llvm::InstCombinerImpl::visitIntToPtr() , llvm::InstCombinerImpl::visitPtrToInt() , llvm::VPlanPrinter::VPlanPrinter() , llvm::OutlinedHashTree::walkGraph() , llvm::WeakTrackingVH::WeakTrackingVH() , llvm::WeakVH::WeakVH() , llvm::pdb::WithColor::WithColor() , llvm::wrap() , wrap() , wrap() , wrap() , wrap() , wrapPtrIfASNotZero() , llvm::mcdxbc::Signature::write() , llvm::objcopy::dxbc::DXContainerWriter::write() , llvm::StringTableBuilder::write() , llvm::support::endian::write() , llvm::write() , write() , llvm::support::endian::write16() , llvm::support::endian::write16() , llvm::support::endian::write16be() , llvm::support::endian::write16le() , llvm::support::endian::write32() , llvm::support::endian::write32() , llvm::support::endian::write32be() , llvm::support::endian::write32le() , llvm::support::endian::write64() , llvm::support::endian::write64() , llvm::support::endian::write64be() , llvm::support::endian::write64le() , llvm::PGOCtxProfileWriter::writeContextual() , llvm::writeIndex() , writeTypeIdCompatibleVtableSummaryRecord() , X() , llvm::xxHash64() , llvm::yaml::yaml2archive() , llvm::objcarc::BundledRetainClaimRVs::~BundledRetainClaimRVs() , llvm::orc::InProcessMemoryMapper::~InProcessMemoryMapper() , llvm::PMDataManager::~PMDataManager() , llvm::PMTopLevelManager::~PMTopLevelManager() , and llvm::ThreadSafeTrieRawHashMap< DataT, sizeof(HashType)>::~ThreadSafeTrieRawHashMap() . Generated on for LLVM by  1.14.0
2026-01-13T09:30:39
# Scientific Python: Transitioning from MATLAB to Python

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

## Contents

- A Quick Recap: Data types, Lists, Functions, Objects
- Numpy: Why we need Numpy, The ndarray data type, shape and dtype, Indexing and slicing, Filling and manipulating arrays, A few useful functions, A small exercise, A bit harder: The Gabor, Boolean indexing, Vectorizing a simulation
- PIL: the Python Imaging Library: Loading and showing images, Resizing, rotating, cropping and converting, Advanced, Saving, Exercise
- Matplotlib: Quick plots, Saving to a file, Visualizing arrays, Multi-panel figures, Exercise: Function plots, Finer figure control, Exercise: Add regression lines
- Scipy: Statistics, Fast Fourier Transform

## A Quick Recap

### Data types

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

```python
my_int = 5
print my_int, type(my_int)

my_float = 5.0
print my_float, type(my_float)

my_boolean = False
print my_boolean, type(my_boolean)

my_string = 'hello'
print my_string, type(my_string)
```

### Lists

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

```python
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)
```

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
```python
print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)
```

### Functions

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When a function has no output argument, it returns None.

```python
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x * b) + c

print regress(5)         # Only the required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument
```

```python
# Function without a return argument
def divisible(a, b):
    if a % b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9, 3)
res = divisible(9, 2)
print res
```

```python
# Function with multiple return arguments
def add_diff(a, b):
    return a + b, a - b

# Assigned as a tuple
res = add_diff(5, 3)
print res

# Directly unpacked to two variables
a, d = add_diff(5, 3)
print a
print d
```

### Objects

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

```python
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list
```

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions.

The functions above are in-place methods, changing the original list directly and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

```python
return_arg = my_list.append('another one')
print return_arg
print my_list
```

```python
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string
```

Do you remember why list functions are in-place, while string functions are not?

## Numpy

### Why we need Numpy

While lists are great, they are not very suitable for scientific computing. Consider this example:

```python
subj_length = [180.0, 165.0, 190.0, 172.0, 156.0]
subj_weight = [75.0, 60.0, 83.0, 85.0, 62.0]
subj_bmi = []

# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2
```

Clearly, this is clumsy. MATLAB users would expect something like this to work:

```python
subj_bmi = subj_weight / (subj_length / 100)**2
mean_bmi = mean(subj_bmi)
```

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

### The ndarray data type

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays by calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

```python
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators
```

Numpy is a very large package that we can't possibly cover completely. But we will cover enough to get you started.
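To answer the question above about what + and * do on lists: a quick sketch of the contrast with ndarrays (written in Python 3 syntax, whereas this 2014 notebook's cells use Python 2 print statements):

```python
import numpy as np

# On lists, + concatenates and * repeats; neither does arithmetic
print([1, 2, 3] + [4, 5, 6])   # concatenation
print([1, 2, 3] * 2)           # repetition

# On numpy arrays, the same operators work elementwise
a = np.array([1, 2, 3])
b = np.array([4, 5, 6])
print(a + b)
print(a * 2)
```

This is exactly the gap np.array() fills for MATLAB-style computation.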
### shape and dtype

The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

```python
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1, 2, 3], [4, 5, 6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])
```

```python
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions
```

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension, representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-dimensional) arrays:

```python
arr3d = np.array([[[1, 2, 3], [4, 5, 6]],
                  [[7, 8, 9], [10, 11, 12]]])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim
```

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. In other words, the shape tuple lists dimensions from the outermost down to the innermost.

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

```python
# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1, 2, 3], [4, 5, 6]])
print type(arr)

# So, let's do a computation
print arr / 2

# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype
```

```python
# And how do we fix this?
arr = arr.astype('float')  # Note: this is not an in-place function!
print arr.dtype
print arr / 2
```

```python
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1, 2, 3], [4, 5, 6]], dtype='float')
print arr.dtype
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
print arr.dtype
```

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

### Indexing and slicing

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.

```python
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Indexing and slicing
print arr[0, 0]   # or: arr[0][0]
print arr[:-1, 0]
```

```python
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:, 0] * arr[:, 1]
print arr[0, :] * arr[1, :]
# Note that you could never slice across rows like this in a nested list!
```

```python
# This doesn't work
# print arr[1:, 0] * arr[:, 1]
# And here's why:
print arr[1:, 0].shape, arr[:, 1].shape
```

```python
# This however does work. You can always use scalars as the other operand.
print arr[:, 0] * arr[2, 2]
# Or, similarly:
print arr[:, 0] * 9.
```

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop.
```python
# EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix
# Do not use a for-loop, and also do not use the np.mean() function for now.
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')
```

This works, but it is still a bit clumsy. We will learn more efficient methods below.

### Filling and manipulating arrays

Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you would do in MATLAB.

```python
# 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3, 2)) * 5
print arr

# Sequence from 1 to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# This is similar to np.array(range(1, 16, 3)), but supports floats
arr = np.arange(1., 16., 3)
print arr

# Sequence from 1 to AND including 16, in 3 steps
# This always returns an array with dtype float
arr = np.linspace(1, 16, 3)
print arr
```

```python
# Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5, 2)
print arr

# Array of random integers from 0 to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0, 10, (5, 2))
print arr
```

Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple; axis=-1 always corresponds to the last dimension (the inner dimension; columns in case of 2D, layers in case of 3D).

```python
arr0 = np.array([[1, 2], [3, 4]])
print arr0

# 'repeat' replicates elements along a given axis
# Each element is replicated directly after itself
arr = np.repeat(arr0, 3, axis=-1)
print arr

# We may even specify the number of times each element should be repeated
# The length of the tuple should correspond to the dimension length
arr = np.repeat(arr0, (2, 4), axis=0)
print arr
```

```python
print arr0

# 'tile' replicates the array as a whole
# Use a tuple to specify the number of tilings along each dimension
arr = np.tile(arr0, (2, 4))
print arr
```

```python
# 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors,
# where each array contains the X or Y coordinates corresponding to a given pixel in an image
x = np.arange(10)
y = np.arange(5)
print x, y
arrx, arry = np.meshgrid(x, y)
print arrx
print arry
```

Concatenating arrays allows you to make several arrays into one.

```python
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# 'concatenate' requires an axis to perform its operation on
# The original arrays should be put in a tuple
arr = np.concatenate((arr0, arr1), axis=0)
print arr  # as new rows
arr = np.concatenate((arr0, arr1), axis=1)
print arr  # as new columns
```

```python
# Suppose we want to create a 3-D matrix from them,
# we have to create them as being three-dimensional
# (what happens if you don't?)
arr0 = np.array([[[1], [2]], [[3], [4]]])
arr1 = np.array([[[5], [6]], [[7], [8]]])
print arr0.shape, arr1.shape
arr = np.concatenate((arr0, arr1), axis=2)
print arr
```

```python
# hstack, vstack, and dstack are short-hand functions
# which will automatically create these 'missing' dimensions
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# vstack() concatenates rows
arr = np.vstack((arr0, arr1))
print arr

# hstack() concatenates columns
arr = np.hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr
```

```python
# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split?
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2
```

Finally, we can easily reshape and transpose arrays:

```python
arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr
```

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

```python
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than two 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    pass  # fill in!

xvec = np.arange(10)
yvec = np.arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:, :, 0]  # = first output of np.meshgrid()
print xy[:, :, 1]  # = second output of np.meshgrid()
```

### A few useful functions

We can now handle arrays in any way we like, but we still don't know any operations to perform on them other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

```python
arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort()
print arr  # in-place!!!
print res
res = np.random.shuffle(arr)
print arr  # in-place!!!
print res
```

```python
# Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr
mn = np.min(arr)
mx = np.max(arr)
print mn, mx
mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma
```

```python
# Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3, 5)
print arr2d
print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)
```

```python
# Trigonometric functions
# Note: Numpy works in radians, not degrees
arr = np.random.rand(5)
print arr
sn = np.sin(arr * 2 * np.pi)
cs = np.cos(arr * 2 * np.pi)
print sn
print cs
```

```python
# Exponents and logarithms
arr = np.random.rand(5)
print arr
xp = np.exp(arr)
print xp
print np.log(xp)
```

```python
# Rounding
arr = np.random.rand(5)
print arr
print arr * 5
print np.round(arr * 5)
print np.floor(arr * 5)
print np.ceil(arr * 5)
```

A complete list of all numpy functions can be found at the Numpy website. Or, a google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.

### A small exercise

Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely.
Use a concatenation function and a statistical function to obtain the same thing!

```python
# EXERCISE 5: Make a better version of Exercise 3 with what you've just learned
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# What we had:
print np.array([(arr[:, 0] + arr[:, 1] + arr[:, 2]) / 3,
                (arr[0, :] + arr[1, :] + arr[2, :]) / 3])

# Now the new version:
```

### A bit harder: The Gabor

A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by:

$grating = \sin(xf)$

where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2\pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by:

$gaussian = e^{-(x^2+y^2)/2}$

where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals:

$gabor = grating \times gaussian$

To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast). [Figure: the grating, Gaussian, and Gabor images are not reproduced here.]

Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result.

```python
# EXERCISE 6: Create a Gabor patch of 100 by 100 pixels
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Define the 1D coordinate values
# Tip: use 100 equally spaced values between -np.pi and np.pi

# Step 2: Create the 2D x and y coordinate arrays
# Tip: use np.meshgrid()

# Step 3: Create the grating
# Tip: use a frequency of 10

# Step 4: Create the Gaussian
# Tip: use np.exp() to compute a power of e

# Step 5: Create the Gabor

# Visualize your result
# (we will discuss how this works later)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(grating, cmap='gray')
plt.subplot(132)
plt.imshow(gaussian, cmap='gray')
plt.subplot(133)
plt.imshow(gabor, cmap='gray')
plt.show()
```

### Boolean indexing

The dtype of a Numpy array can also be boolean, that is, True or False. It is then particularly convenient that, given an array of the same shape, these boolean arrays can be used to index other arrays.

```python
# Check whether each element of a 2x2 array is greater than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr > 0.5
print res
print '--'

# Analogously, check it against each element of a second 2x2 array
arr2 = np.random.rand(2, 2)
print arr2
res = arr > arr2
print res
```

```python
# We can use these boolean arrays as indices into other arrays!
# Add 0.5 to any element smaller than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr < 0.5
print res
arr[res] = arr[res] + 0.5
print arr

# Or, shorter:
arr[arr < 0.5] = arr[arr < 0.5] + 0.5

# Or, even shorter:
arr[arr < 0.5] += 0.5
```

While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators: and, or, xor, not.

```python
arr = np.array([[1, 2, 3], [4, 5, 6]])

# The short-hand forms for elementwise boolean operators are: & | ~ ^
# Use parentheses around such expressions
res = (arr < 4) & (arr > 1)
print res
print '--'
res = (arr < 2) | (arr == 5)
print res
print '--'
res = (arr > 3) & ~(arr == 6)
print res
print '--'
res = (arr > 3) ^ (arr < 5)
print res
```

```python
# To convert boolean indices to normal integer indices, use the 'nonzero' function
print res
print np.nonzero(res)
print '--'

# Separate row and column indices
print np.nonzero(res)[0]
print np.nonzero(res)[1]
print '--'

# Or stack and transpose them to get index pairs
pairs = np.vstack(np.nonzero(res)).T
print pairs
```

### Vectorizing a simulation

Numpy is excellent at making programs that involve iterative operations more efficient. This requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: you throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner?

This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops:

```python
import numpy as np

# We will keep track of the sum of first occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    for throw in range(2000):
        # Throw a die
        die = np.random.randint(1, 7)

        # 111 case
        if d111 == 3:
            pass
        elif die == 1 and d111 == 0:
            d111 = 1
        elif die == 1 and d111 == 1:
            d111 = 2
        elif die == 1 and d111 == 2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0

        # 123 case
        if d123 == 3:
            pass
        elif die == 1:
            d123 = 1
        elif die == 2 and d123 == 1:
            d123 = 2
        elif die == 3 and d123 == 2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0

        # Don't continue if both have been found
        if d111 == 3 and d123 == 3:
            break

# Compute the averages
avg111 = sum111 / n111
avg123 = sum123 / n123
print avg111, avg123

# ...can you spot the crucial difference between both patterns?
```

However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is, however, still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
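Looping back to Exercise 7 above for a moment: one way the vectorization might look is sketched below. Treat it as one possible solution rather than the official one — the fixed random seed and the +2 index offset (so that, like the loop version, we record the position of the *third* die of the pattern) are additions of mine; the `find`/`first`/`avg` names follow the exercise skeleton.

```python
import numpy as np

np.random.seed(0)  # my addition, for reproducibility
throws = np.random.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)

# A pattern starts at column i when three shifted boolean arrays all hold there
find111 = one[:, :-2] & one[:, 1:-1] & one[:, 2:]
find123 = one[:, :-2] & two[:, 1:-1] & three[:, 2:]

# argmax along axis 1 returns the index of the first True in each row;
# +2 shifts from the pattern's first die to its third, as in the loop version
first111 = np.argmax(find111, axis=1) + 2
first123 = np.argmax(find123, axis=1) + 2

avg111 = np.mean(first111)
avg123 = np.mean(first123)
print(avg111)  # '111' takes longer on average: a started '11' is 'wasted' by any non-1
print(avg123)
```

Note that `print` is written as a function here, so the snippet also runs under Python 3.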
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
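One way this grayscale-difference exercise can be approached is sketched below; it is my construction, not the notebook's solution. A random RGB image stands in for 'python.jpg' so the snippet is self-contained — swap in Image.open('python.jpg') to reproduce the notebook's setup. The contrast stretch assumes at least one pixel where the two conversions differ.

```python
import numpy as np
from PIL import Image

# Random stand-in image (replace with: im = Image.open('python.jpg'))
np.random.seed(0)
rgb = np.random.randint(0, 256, (300, 400, 3)).astype('uint8')
im = Image.fromarray(rgb, mode='RGB')

pil_gray = np.array(im.convert('L'), dtype='float')   # PIL's weighted average
avg_gray = np.mean(np.array(im, dtype='float'), -1)   # plain Numpy average
diff = avg_gray - pil_gray

# Red where plain averaging is less luminant, green where it is more luminant
out = np.zeros(rgb.shape, dtype='float')
out[..., 0][diff < 0] = -diff[diff < 0]
out[..., 1][diff > 0] = diff[diff > 0]

# Extra 1: stretch the luminances to use the full 0-255 range
out *= 255. / out.max()
imd = Image.fromarray(out.astype('uint8'), mode='RGB')

# Extra 2: save at full, half and quarter resolution
for name, factor in [('large', 1), ('medium', 2), ('small', 4)]:
    w, h = imd.size
    imd.resize((w // factor, h // factor)).save('diff_%s.png' % name)
```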
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However, we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt .
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () A full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = ( 'tight' ), pad_inches = ( 1 , 1 ), facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to 0-255 first (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on. 
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1 # Do a t-test that these have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskal-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here .
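Two of the tests just mentioned are available under similar names in scipy.stats; here is a brief sketch with example data of my own (the seed and the samples are not from the notebook):

```python
import numpy as np
import scipy.stats as stats

np.random.seed(1)  # my addition, for reproducibility

# Shapiro-Wilk test for normality, here on uniformly distributed data
W, p_norm = stats.shapiro(np.random.rand(100))
print(W, p_norm)

# Kruskal-Wallis H-test (a non-parametric one-way ANOVA) on three samples,
# one of which has a shifted location
a = np.random.rand(30)
b = np.random.rand(30) + 0.5
c = np.random.rand(30)
H, p_kw = stats.kruskal(a, b, c)
print(H, p_kw)
```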
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft , but SciPy has its own set of functions as well in scipy.fftpack . Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. In [ ]: import numpy as np import scipy.fftpack as fft # The original data: a step function data = np . zeros ( 200 , dtype = 'float' ) data [ 25 : 100 ] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft . fft ( data ) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be conjugate-symmetric: the second half mirrors the first, so only the first half carries unique information
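For real-valued input, this symmetry of the FFT output can be checked directly. A small sketch (my addition, using numpy.fft, which behaves like scipy.fftpack for this purpose):

```python
import numpy as np

# The same step function as above
data = np.zeros(200, dtype='float')
data[25:100] = 1

res = np.fft.fft(data)        # complex coefficients, one per sample
freqs = np.fft.fftfreq(200)   # the frequency implied by each position

# Real input gives conjugate-symmetric output: the coefficient at -f is
# the complex conjugate of the coefficient at +f
assert np.allclose(res[1:100], np.conj(res[-1:-100:-1]))

# The DC component (frequency 0) is simply the sum of the samples
print(res[0].real)  # 75.0, the number of ones in the step
```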
Scientific Python: Transitioning from MATLAB to Python ¶ Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium). This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki . Author: Maarten Demeyer Year: 2014 Copyright: Public Domain as in CC0 Contents ¶ A Quick Recap Data types Lists Functions Objects Numpy Why we need Numpy The ndarray data type shape and dtype Indexing and slicing Filling and manipulating arrays A few useful functions A small exercise A bit harder: The Gabor Boolean indexing Vectorizing a simulation PIL: the Python Imaging Library Loading and showing images Resizing, rotating, cropping and converting Advanced Saving Exercise Matplotlib Quick plots Saving to a file Visualizing arrays Multi-panel figures Exercise: Function plots Finer figure control Exercise: Add regression lines Scipy Statistics Fast Fourier Transform A Quick Recap ¶ Data types ¶ Depending on what kind of values you want to store, Python variables can be of different data types. For instance: In [ ]: my_int = 5 print my_int , type ( my_int ) my_float = 5.0 print my_float , type ( my_float ) my_boolean = False print my_boolean , type ( my_boolean ) my_string = 'hello' print my_string , type ( my_string ) Lists ¶ One useful data type is the list, which stores an ordered , mutable sequence of any data type , even mixed. In [ ]: my_list = [ my_int , my_float , my_boolean , my_string ] print type ( my_list ) for element in my_list : print type ( element ) To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero . Slices do not include the last element .
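Two details worth adding to this recap (my addition, using the same list values as above): indices can also be negative, counting from the end, and slices can omit either endpoint or take a step.

```python
my_list = [5, 5.0, False, 'hello']

print(my_list[-1])    # last element: 'hello'
print(my_list[:2])    # first two elements: [5, 5.0]
print(my_list[::2])   # every second element: [5, False]
```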
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
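For reference, one way Exercise 2 can be completed (a sketch, one of several possibilities, using the same subject data):

```python
import numpy as np

subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# Elementwise computation: no loop needed, unlike with plain lists
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = np.mean(subj_bmi)

print(subj_bmi)
print(mean_bmi)
```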
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. This implies that dimension sizes are listed from the outermost dimension to the innermost dimension in the shape tuple. The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np .
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is like np.array(range(1, 16, 3)), but arange also supports float steps arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, as 3 evenly spaced values # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np .
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimension arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenation allows you to join several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them; # we then have to define them as three-dimensional arrays # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np .
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np .
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works with radians units, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a google search for 'numpy tangens', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter. 
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use the elementwise boolean operators: & (and), | (or), ^ (xor), ~ (not). Note that Python's own and , or , and not keywords do not work elementwise on arrays. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) .
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurrence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, we find images a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is, however, still called 'PIL'. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
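If you don't have a color image at hand, you could generate a placeholder file yourself. This is a minimal sketch (written with Python 3 `print()` calls; it requires Pillow and Numpy, and the filename 'python.jpg' simply matches the one used in the cells below):

```python
from PIL import Image
import numpy as np

# Build a 300x400 random RGB array; uint8 values 0-255 are what PIL expects
arr = np.random.randint(0, 256, (300, 400, 3)).astype('uint8')

# Convert the array to a PIL Image and save it under the tutorial's filename
im = Image.fromarray(arr, mode='RGB')
im.save('python.jpg')

print(im.size)  # (400, 300) -- note that PIL reports (width, height)
```

The result is just colored noise, but it lets you run every example in this section unchanged.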
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib automatically decides for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. # Note: pad_inches takes a single number, not a tuple plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to span the full colormap range (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( sin ( x_an ), 2 )),( x_an , sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object. 
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1 # Do a t-test that these have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskall-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here . 
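As a minimal sketch of two of the further tests just listed (assuming scipy is installed, as elsewhere in this notebook), here is a normality test and a Kruskal-Wallis test on random data:

```python
import numpy as np
import scipy.stats as stats

rng = np.random.RandomState(0)

# Normality test (D'Agostino-Pearson): normal data should give a
# high p-value, uniform data a very low one
normal_data = rng.normal(0, 1, size=500)
uniform_data = rng.rand(500)
k2_n, p_norm = stats.normaltest(normal_data)
k2_u, p_unif = stats.normaltest(uniform_data)

# Kruskal-Wallis: a non-parametric alternative to the one-way ANOVA above
c1 = rng.normal(0, 1, size=100)
c2 = rng.normal(1, 1, size=100)
H, p_kw = stats.kruskal(c1, c2)
```

The variable names here are of course arbitrary; as with the ANOVA example, the functions return the test statistic and the p-value as a pair.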
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform

FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well in scipy.fftpack. Both are very similar; you can use whichever package you like.

I will assume that you are familiar with the basic underlying theory: that any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers, as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in the case of real input data, the FFT results will be conjugate-symmetric,
# with the negative-frequency components equal to the complex conjugates of the positive ones
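For real input, the FFT output is conjugate-symmetric, and the inverse transform recovers the original samples. A short numpy sketch (using numpy.fft rather than scipy.fftpack; the two are interchangeable here) verifies both properties on the same step function:

```python
import numpy as np

# The same step function as above
data = np.zeros(200, dtype='float')
data[25:100] = 1

res = np.fft.fft(data)

# Component k and component n-k are complex conjugates of each other
assert np.allclose(res[1:], np.conj(res[1:][::-1]))

# The inverse transform recovers the original samples (up to rounding)
back = np.fft.ifft(res)
assert np.allclose(back.real, data)
```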
https://www.php.net/manual/tr/function.crc32.php
PHP: crc32 - Manual
crc32

(PHP 4 >= 4.0.1, PHP 5, PHP 7, PHP 8)

crc32 — Calculates the crc32 polynomial of a string

Description

crc32(string $string): int

Generates the cyclic redundancy checksum polynomial of the string. This is usually used to validate the integrity of data being transmitted.

Warning

Because PHP's integer type is signed, many crc32 checksums will result in negative integers on 32-bit platforms. On 64-bit installations, all crc32() results will be positive integers, though. Use the "%u" formatter of sprintf() or printf() to get the string representation of the unsigned crc32() checksum in decimal format.

For a hexadecimal representation of the checksum you can either use the "%x" formatter of sprintf() or printf(), or the dechex() conversion function; both of these take care of converting the crc32() result to an unsigned integer. Having 64-bit installations also return negative integers for higher result values was considered, but would break the hexadecimal conversion, as negatives would then get an additional 0xFFFFFFFF######## prefix. As the hexadecimal representation seems to be the most common use case, it was decided not to return negative integers with the 32-to-64-bit migration, even though that breaks roughly 50% of direct decimal comparisons.

In hindsight, having the function return an integer was perhaps not the best idea, and returning a hex string representation right away (as md5() does) might have been a better plan to begin with.

For a more portable solution you can also consider the generic hash(): hash("crc32b", $str) will return the same string as str_pad(dechex(crc32($str)), 8, '0', STR_PAD_LEFT).

Parameters

string
    The data.

Return Values

Returns the crc32 checksum of string as an integer.

Examples

Example #1 Displaying a crc32 checksum

This example shows how to print the checksum with the printf() function:

<?php
$checksum = crc32("The quick brown fox jumped over the lazy dog.");
printf("%u\n", $checksum);
?>

See Also

hash() - Generate a hash value (message digest)
md5() - Calculate the md5 hash of a string
sha1() - Calculate the sha1 hash of a string

User Contributed Notes (22 notes)

jian at theorchard dot com, 15 years ago:

This function returns an unsigned integer from a 64-bit Linux platform. It does return the signed integer from other 32-bit platforms, even a 64-bit Windows one. The reason is that the two constants PHP_INT_SIZE and PHP_INT_MAX have different values on the 64-bit Linux platform. I've created a work-around function to handle this situation:

<?php
function get_signed_int($in)
{
    $int_max = pow(2, 31) - 1;
    if ($in > $int_max) {
        $out = $in - $int_max * 2 - 2;
    } else {
        $out = $in;
    }
    return $out;
}
?>

Hope this helps.
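The signedness and formatting behavior described above is easy to cross-check from Python, whose zlib.crc32 uses the same CRC-32 polynomial. This sketch (not part of the PHP manual) always yields the unsigned 32-bit value:

```python
import zlib

# The standard CRC-32 check value: crc32("123456789") == 0xCBF43926
checksum = zlib.crc32(b"123456789") & 0xFFFFFFFF
assert checksum == 0xCBF43926

# Equivalents of PHP's printf("%u") and of dechex()/"%x" formatting
decimal_form = "%u" % checksum   # "3421780262"
hex_form = "%08x" % checksum     # "cbf43926"
```

The `& 0xFFFFFFFF` mask is redundant on Python 3 (crc32 already returns an unsigned value there) but documents the intent and matches the PHP workarounds discussed in the notes below.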
i at morfi dot ru, 12 years ago:

An implementation of crc64() in 64-bit PHP:

<?php
/**
 * @return array
 */
function crc64Table()
{
    $crc64tab = [];

    // ECMA polynomial
    $poly64rev = (0xC96C5795 << 32) | 0xD7870F42;

    // ISO polynomial
    // $poly64rev = (0xD8 << 56);

    for ($i = 0; $i < 256; $i++) {
        for ($part = $i, $bit = 0; $bit < 8; $bit++) {
            if ($part & 1) {
                $part = (($part >> 1) & ~(0x8 << 60)) ^ $poly64rev;
            } else {
                $part = ($part >> 1) & ~(0x8 << 60);
            }
        }
        $crc64tab[$i] = $part;
    }

    return $crc64tab;
}

/**
 * @param string $string
 * @param string $format
 * @return mixed
 *
 * Formats:
 * crc64('php');         // afe4e823e7cef190
 * crc64('php', '0x%x'); // 0xafe4e823e7cef190
 * crc64('php', '0x%X'); // 0xAFE4E823E7CEF190
 * crc64('php', '%d');   // -5772233581471534704 signed int
 * crc64('php', '%u');   // 12674510492238016912 unsigned int
 */
function crc64($string, $format = '%x')
{
    static $crc64tab;

    if ($crc64tab === null) {
        $crc64tab = crc64Table();
    }

    $crc = 0;
    for ($i = 0; $i < strlen($string); $i++) {
        $crc = $crc64tab[($crc ^ ord($string[$i])) & 0xff] ^ (($crc >> 8) & ~(0xff << 56));
    }

    return sprintf($format, $crc);
}

JS at JavsSys dot Org, 12 years ago:

The khash() function by sukitsupaluk has two problems: it does not use all 62 characters from the $map set, and when corrected it then produces different results on 64-bit compared to 32-bit PHP systems. Here is my modified version:

<?php
/**
 * Small sample converting crc32 to a character map
 * Based upon http://www.php.net/manual/en/function.crc32.php#105703
 * (Modified to now use all characters from $map)
 * (Modified to be 32-bit PHP safe)
 */
function khash($data)
{
    static $map = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ";

    $hash = bcadd(sprintf('%u', crc32($data)), 0x100000000);
    $str = "";
    do {
        $str = $map[bcmod($hash, 62)] . $str;
        $hash = bcdiv($hash, 62);
    } while ($hash >= 1);

    return $str;
}

//-----------------------------------------------------------------------------------
$test = array(null, true, false, 0, "0", 1, "1", "2", "3",
    "ab", "abc", "abcd", "abcde", "abcdefoo",
    "248840027",
    "1365848013",                       // time()
    "9223372035488927794",              // PHP_INT_MAX-time()
    "901131979",                        // mt_rand()
    "Sat, 13 Apr 2013 10:13:33 +0000"); // gmdate('r')

$out = array();
foreach ($test as $s) {
    $out[] = khash($s) . ": " . $s;
}

print "<h3>khash() -- maps a crc32 result into a (62-character) result</h3>";
print '<pre>';
var_dump($out);
print "\n\n\$GLOBALS['raw_crc32']:\n";
var_dump($GLOBALS['raw_crc32']);
print '</pre><hr>';
flush();

$pefile = __FILE__;
print "<h3>$pefile</h3>";
ob_end_flush();
flush();
highlight_file($pefile);
print "<hr>";
//-----------------------------------------------------------------------------------
/* CURRENT output
array(19) {
  [0]=>  string(8)  "4GFfc4: "
  [1]=>  string(9)  "76nO4L: 1"
  [2]=>  string(8)  "4GFfc4: "
  [3]=>  string(9)  "9aGcIp: 0"
  [4]=>  string(9)  "9aGcIp: 0"
  [5]=>  string(9)  "76nO4L: 1"
  [6]=>  string(9)  "76nO4L: 1"
  [7]=>  string(9)  "5b8iNn: 2"
  [8]=>  string(9)  "6HmfFN: 3"
  [9]=>  string(10) "7ADPD7: ab"
  [10]=> string(11) "5F0aUq: abc"
  [11]=> string(12) "92kWw9: abcd"
  [12]=> string(13) "78hcpf: abcde"
  [13]=> string(16) "9eBVPB: abcdefoo"
  [14]=> string(17) "5TjOuZ: 248840027"
  [15]=> string(18) "5eNliI: 1365848013"
  [16]=> string(27) "4Q00e5: 9223372035488927794"
  [17]=> string(17) "6DUX8V: 901131979"
  [18]=> string(39) "5i2aOW: Sat, 13 Apr 2013 10:13:33 +0000"
}
*/
//-----------------------------------------------------------------------------------
?>

Bulk at bulksplace dot com, 20 years ago:

A faster way I've found to return CRC values of larger files, instead of using the file()/implode() method used below, is to use file_get_contents() (PHP 4 >= 4.3.0), which uses memory mapping techniques if supported by your OS
to enhance performance. Here's my example function:

<?php
// $file is the path to the file you want to check.
function file_crc($file)
{
    $file_string = file_get_contents($file);
    $crc = crc32($file_string);
    return sprintf("%u", $crc);
}

$file_to_crc = '/home/path/to/file.jpg';
echo file_crc($file_to_crc); // Outputs the CRC value for the given file.
?>

I've found in testing this method is MUCH faster for larger binary files.

slimshady451, 18 years ago:

I see a lot of functions for crc32_file, but for PHP version >= 5.1.2, don't forget you can use this:

<?php
function crc32_file($filename)
{
    return hash_file('CRC32', $filename, FALSE);
}
?>

Using crc32(file_get_contents($filename)) will use too much memory on a big file, so don't use it.

same, 21 years ago:

Bit-by-bit crc32 computation:

<?php
function bitbybit_crc32($str, $first_call = false)
{
    // Reflection in 32 bits of the crc32 polynomial 0x04C11DB7
    $poly_reflected = 0xEDB88320;

    // Keep track of the register value after each call
    static $reg = 0xFFFFFFFF;

    // Initialize the register on the first call
    if ($first_call) $reg = 0xFFFFFFFF;

    $n = strlen($str);
    $zeros = $n < 4 ? $n : 4;

    // XOR the first $zeros = min(4, strlen($str)) bytes into the register
    for ($i = 0; $i < $zeros; $i++)
        $reg ^= ord($str{$i}) << $i * 8;

    // Now for the rest of the string
    for ($i = 4; $i < $n; $i++) {
        $next_char = ord($str{$i});
        for ($j = 0; $j < 8; $j++)
            $reg = (($reg >> 1 & 0x7FFFFFFF) | ($next_char >> $j & 1) << 0x1F) ^ ($reg & 1) * $poly_reflected;
    }

    // Put in enough zeros at the end
    for ($i = 0; $i < $zeros * 8; $i++)
        $reg = ($reg >> 1 & 0x7FFFFFFF) ^ ($reg & 1) * $poly_reflected;

    // XOR the register with 0xFFFFFFFF
    return ~$reg;
}

$str = "123456789"; // whatever
$blocksize = 4;     // whatever
for ($i = 0; $i < strlen($str); $i += $blocksize)
    $crc = bitbybit_crc32(substr($str, $i, $blocksize), !
$i ); ?> up down 5 dave at jufer dot info ¶ 18 years ago This function returns the same int value on a 64 bit mc. like the crc32() function on a 32 bit mc. <?php function crcKw ( $num ){ $crc = crc32 ( $num ); if( $crc & 0x80000000 ){ $crc ^= 0xffffffff ; $crc += 1 ; $crc = - $crc ; } return $crc ; } ?> up down 3 Clifford dot ct at gmail dot com ¶ 13 years ago The crc32() function can return a signed integer in certain environments. Assuming that it will always return an unsigned integer is not portable. Depending on your desired behavior, you should probably use sprintf() on the result or the generic hash() instead. Also note that integer arithmetic operators do not have the precision to work correctly with the integer output. up down 3 alban dot lopez+php [ at ] gmail dot com ¶ 14 years ago I made this code to verify Transmition with Vantage Pro2 ( weather station ) based on CRC16-CCITT standard. <?php // CRC16-CCITT validator $crc_table = array( 0x0 , 0x1021 , 0x2042 , 0x3063 , 0x4084 , 0x50a5 , 0x60c6 , 0x70e7 , 0x8108 , 0x9129 , 0xa14a , 0xb16b , 0xc18c , 0xd1ad , 0xe1ce , 0xf1ef , 0x1231 , 0x210 , 0x3273 , 0x2252 , 0x52b5 , 0x4294 , 0x72f7 , 0x62d6 , 0x9339 , 0x8318 , 0xb37b , 0xa35a , 0xd3bd , 0xc39c , 0xf3ff , 0xe3de , 0x2462 , 0x3443 , 0x420 , 0x1401 , 0x64e6 , 0x74c7 , 0x44a4 , 0x5485 , 0xa56a , 0xb54b , 0x8528 , 0x9509 , 0xe5ee , 0xf5cf , 0xc5ac , 0xd58d , 0x3653 , 0x2672 , 0x1611 , 0x630 , 0x76d7 , 0x66f6 , 0x5695 , 0x46b4 , 0xb75b , 0xa77a , 0x9719 , 0x8738 , 0xf7df , 0xe7fe , 0xd79d , 0xc7bc , 0x48c4 , 0x58e5 , 0x6886 , 0x78a7 , 0x840 , 0x1861 , 0x2802 , 0x3823 , 0xc9cc , 0xd9ed , 0xe98e , 0xf9af , 0x8948 , 0x9969 , 0xa90a , 0xb92b , 0x5af5 , 0x4ad4 , 0x7ab7 , 0x6a96 , 0x1a71 , 0xa50 , 0x3a33 , 0x2a12 , 0xdbfd , 0xcbdc , 0xfbbf , 0xeb9e , 0x9b79 , 0x8b58 , 0xbb3b , 0xab1a , 0x6ca6 , 0x7c87 , 0x4ce4 , 0x5cc5 , 0x2c22 , 0x3c03 , 0xc60 , 0x1c41 , 0xedae , 0xfd8f , 0xcdec , 0xddcd , 0xad2a , 0xbd0b , 0x8d68 , 0x9d49 , 0x7e97 , 0x6eb6 , 0x5ed5 , 0x4ef4 , 
0x3e13 , 0x2e32 , 0x1e51 , 0xe70 , 0xff9f , 0xefbe , 0xdfdd , 0xcffc , 0xbf1b , 0xaf3a , 0x9f59 , 0x8f78 , 0x9188 , 0x81a9 , 0xb1ca , 0xa1eb , 0xd10c , 0xc12d , 0xf14e , 0xe16f , 0x1080 , 0xa1 , 0x30c2 , 0x20e3 , 0x5004 , 0x4025 , 0x7046 , 0x6067 , 0x83b9 , 0x9398 , 0xa3fb , 0xb3da , 0xc33d , 0xd31c , 0xe37f , 0xf35e , 0x2b1 , 0x1290 , 0x22f3 , 0x32d2 , 0x4235 , 0x5214 , 0x6277 , 0x7256 , 0xb5ea , 0xa5cb , 0x95a8 , 0x8589 , 0xf56e , 0xe54f , 0xd52c , 0xc50d , 0x34e2 , 0x24c3 , 0x14a0 , 0x481 , 0x7466 , 0x6447 , 0x5424 , 0x4405 , 0xa7db , 0xb7fa , 0x8799 , 0x97b8 , 0xe75f , 0xf77e , 0xc71d , 0xd73c , 0x26d3 , 0x36f2 , 0x691 , 0x16b0 , 0x6657 , 0x7676 , 0x4615 , 0x5634 , 0xd94c , 0xc96d , 0xf90e , 0xe92f , 0x99c8 , 0x89e9 , 0xb98a , 0xa9ab , 0x5844 , 0x4865 , 0x7806 , 0x6827 , 0x18c0 , 0x8e1 , 0x3882 , 0x28a3 , 0xcb7d , 0xdb5c , 0xeb3f , 0xfb1e , 0x8bf9 , 0x9bd8 , 0xabbb , 0xbb9a , 0x4a75 , 0x5a54 , 0x6a37 , 0x7a16 , 0xaf1 , 0x1ad0 , 0x2ab3 , 0x3a92 , 0xfd2e , 0xed0f , 0xdd6c , 0xcd4d , 0xbdaa , 0xad8b , 0x9de8 , 0x8dc9 , 0x7c26 , 0x6c07 , 0x5c64 , 0x4c45 , 0x3ca2 , 0x2c83 , 0x1ce0 , 0xcc1 , 0xef1f , 0xff3e , 0xcf5d , 0xdf7c , 0xaf9b , 0xbfba , 0x8fd9 , 0x9ff8 , 0x6e17 , 0x7e36 , 0x4e55 , 0x5e74 , 0x2e93 , 0x3eb2 , 0xed1 , 0x1ef0 ); $test = chr ( 0xC6 ). chr ( 0xCE ). chr ( 0xA2 ). 
chr ( 0x03 ); // CRC16-CCITT = 0xE2B4 genCRC ( $test ); function genCRC (& $ptr ) { $crc = 0x0000 ; $crc_table = $GLOBALS [ 'crc_table' ]; for ( $i = 0 ; $i < strlen ( $ptr ); $i ++) $crc = $crc_table [(( $crc >> 8 ) ^ ord ( $ptr [ $i ]))] ^ (( $crc << 8 ) & 0x00FFFF ); return $crc ; } ?> up down 4 roberto at spadim dot com dot br ¶ 19 years ago MODBUS RTU, CRC16, input-> modbus rtu string output -> 2bytes string, in correct modbus order <?php function crc16 ( $string , $length = 0 ){ $auchCRCHi =array( 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 
0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x01 , 0xC0 , 0x80 , 0x41 , 0x00 , 0xC1 , 0x81 , 0x40 ); $auchCRCLo =array( 0x00 , 0xC0 , 0xC1 , 0x01 , 0xC3 , 0x03 , 0x02 , 0xC2 , 0xC6 , 0x06 , 0x07 , 0xC7 , 0x05 , 0xC5 , 0xC4 , 0x04 , 0xCC , 0x0C , 0x0D , 0xCD , 0x0F , 0xCF , 0xCE , 0x0E , 0x0A , 0xCA , 0xCB , 0x0B , 0xC9 , 0x09 , 0x08 , 0xC8 , 0xD8 , 0x18 , 0x19 , 0xD9 , 0x1B , 0xDB , 0xDA , 0x1A , 0x1E , 0xDE , 0xDF , 0x1F , 0xDD , 0x1D , 0x1C , 0xDC , 0x14 , 0xD4 , 0xD5 , 0x15 , 0xD7 , 0x17 , 0x16 , 0xD6 , 0xD2 , 0x12 , 0x13 , 0xD3 , 0x11 , 0xD1 , 0xD0 , 0x10 , 0xF0 , 0x30 , 0x31 , 0xF1 , 0x33 , 0xF3 , 0xF2 , 0x32 , 0x36 , 0xF6 , 0xF7 , 0x37 , 0xF5 , 0x35 , 0x34 , 0xF4 , 0x3C , 0xFC , 0xFD , 0x3D , 0xFF , 0x3F , 0x3E , 0xFE , 0xFA , 0x3A , 0x3B , 0xFB , 0x39 , 0xF9 , 0xF8 , 0x38 , 0x28 , 0xE8 , 0xE9 , 0x29 , 0xEB , 0x2B , 0x2A , 0xEA , 0xEE , 0x2E , 0x2F , 0xEF , 0x2D , 0xED , 0xEC , 0x2C , 0xE4 , 0x24 , 0x25 , 0xE5 , 0x27 , 0xE7 , 0xE6 , 0x26 , 0x22 , 0xE2 , 0xE3 , 0x23 , 0xE1 , 0x21 , 0x20 , 0xE0 , 0xA0 , 0x60 , 0x61 , 0xA1 , 0x63 , 0xA3 , 0xA2 , 0x62 , 0x66 , 0xA6 , 0xA7 , 0x67 , 0xA5 , 0x65 , 0x64 , 0xA4 , 0x6C , 0xAC , 0xAD , 0x6D , 0xAF , 0x6F , 0x6E , 0xAE , 0xAA , 0x6A , 0x6B , 0xAB , 0x69 , 0xA9 , 0xA8 , 0x68 , 0x78 , 0xB8 , 0xB9 , 0x79 , 0xBB , 0x7B , 0x7A , 0xBA , 0xBE , 0x7E , 0x7F , 0xBF , 0x7D , 0xBD , 0xBC , 0x7C , 0xB4 , 0x74 , 0x75 , 0xB5 , 0x77 , 0xB7 , 0xB6 , 0x76 , 0x72 , 0xB2 , 0xB3 , 0x73 , 0xB1 , 0x71 , 0x70 , 0xB0 , 0x50 , 0x90 , 0x91 , 0x51 , 0x93 , 0x53 , 0x52 , 0x92 , 0x96 , 0x56 , 0x57 , 0x97 , 0x55 , 0x95 , 0x94 , 0x54 , 0x9C , 0x5C , 0x5D , 0x9D , 0x5F , 0x9F , 0x9E , 0x5E , 0x5A , 0x9A , 0x9B , 0x5B , 0x99 , 0x59 , 0x58 , 0x98 , 0x88 , 0x48 , 0x49 , 0x89 , 0x4B , 0x8B , 0x8A , 0x4A , 0x4E , 0x8E , 0x8F , 0x4F , 0x8D , 0x4D , 0x4C , 0x8C 
, 0x44 , 0x84 , 0x85 , 0x45 , 0x87 , 0x47 , 0x46 , 0x86 , 0x82 , 0x42 , 0x43 , 0x83 , 0x41 , 0x81 , 0x80 , 0x40 ); $length =( $length <= 0 ? strlen ( $string ): $length ); $uchCRCHi = 0xFF ; $uchCRCLo = 0xFF ; $uIndex = 0 ; for ( $i = 0 ; $i < $length ; $i ++){ $uIndex = $uchCRCLo ^ ord ( substr ( $string , $i , 1 )); $uchCRCLo = $uchCRCHi ^ $auchCRCHi [ $uIndex ]; $uchCRCHi = $auchCRCLo [ $uIndex ] ; } return( chr ( $uchCRCLo ). chr ( $uchCRCHi )); } ?> up down 2 arachnid at notdot dot net ¶ 21 years ago Note that the CRC32 algorithm should NOT be used for cryptographic purposes, or in situations where a hostile/untrusted user is involved, as it is far too easy to generate a hash collision for CRC32 (two different binary strings that have the same CRC32 hash). Instead consider SHA-1 or MD5. up down 2 sukitsupaluk at hotmail dot com ¶ 14 years ago small sample convert crc32 to character map <?php function khash ( $data ) { static $map = "0123456789abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ" ; $hash = crc32 ( $data )+ 0x100000000 ; $str = "" ; do { $str = $map [ 31 + ( $hash % 31 )] . $str ; $hash /= 31 ; } while( $hash >= 1 ); return $str ; } $test = array( null , TRUE , FALSE , 0 , "0" , 1 , "1" , "2" , "3" , "ab" , "abc" , "abcd" , "abcde" , "abcdefoo" ); $out = array(); foreach( $test as $s ) { $out []= khash ( $s ). ": " . 
$s;
}
var_dump($out);
/* output:
array
  0  => string 'zVvOYTv: ' (length=9)
  1  => string 'xKDKKL8: 1' (length=10)
  2  => string 'zVvOYTv: ' (length=9)
  3  => string 'zOKCQxh: 0' (length=10)
  4  => string 'zOKCQxh: 0' (length=10)
  5  => string 'xKDKKL8: 1' (length=10)
  6  => string 'xKDKKL8: 1' (length=10)
  7  => string 'AFSzIAO: 2' (length=10)
  8  => string 'BXGSvQJ: 3' (length=10)
  9  => string 'xZWOQSu: ab' (length=11)
  10 => string 'AVAwHOR: abc' (length=12)
  11 => string 'zKASNE1: abcd' (length=13)
  12 => string 'xLCTOV7: abcde' (length=14)
  13 => string 'zQLzKMt: abcdefoo' (length=17)
*/
?>

chernyshevsky at hotmail dot com, 15 years ago:

The crc32_combine() function provided by petteri at qred dot fi has a bug that causes an infinite loop: a shift operation on a 32-bit signed int might never reach zero. Replacing the function gf2_matrix_times() with the following seems to fix it:

<?php
function gf2_matrix_times($mat, $vec)
{
    $sum = 0;
    $i = 0;
    while ($vec) {
        if ($vec & 1) {
            $sum ^= $mat[$i];
        }
        $vec = ($vec >> 1) & 0x7FFFFFFF;
        $i++;
    }
    return $sum;
}
?>

Otherwise, it's probably the best solution if you can't use hash_file(). Using a 1 MB read buffer, the function only takes twice as long to process a 300 MB file as hash_file() did in my test.

berna (at) gensis (dot) com (dot) br, 16 years ago:

For those who want a more familiar return value for the function:

<?php
function strcrc32($text)
{
    $crc = crc32($text);
    if ($crc & 0x80000000) {
        $crc ^= 0xffffffff;
        $crc += 1;
        $crc = -$crc;
    }
    return $crc;
}
?>

And to show the result as a hex string:

<?php
function int32_to_hex($value)
{
    $value &= 0xffffffff;
    return str_pad(strtoupper(dechex($value)), 8, "0", STR_PAD_LEFT);
}
?>

arris at zsolttech dot com, 14 years ago:

A crc64 not found anywhere else, based on http://bioinfadmin.cs.ucl.ac.uk/downloads/crc64/crc64.c.
(use gmp module) <?php /* OLDCRC */ define ( 'POLY64REV' , "d800000000000000" ); define ( 'INITIALCRC' , "0000000000000000" ); define ( 'TABLELEN' , 256 ); /* NEWCRC */ // define('POLY64REV', "95AC9329AC4BC9B5"); // define('INITIALCRC', "FFFFFFFFFFFFFFFF"); if( function_exists ( 'gmp_init' )){ class CRC64 { private static $CRCTable = array(); public static function encode ( $seq ){ $crc = gmp_init ( INITIALCRC , 16 ); $init = FALSE ; $poly64rev = gmp_init ( POLY64REV , 16 ); if (! $init ) { $init = TRUE ; for ( $i = 0 ; $i < TABLELEN ; $i ++) { $part = gmp_init ( $i , 10 ); for ( $j = 0 ; $j < 8 ; $j ++) { if ( gmp_strval ( gmp_and ( $part , "0x1" )) != "0" ){ // if (gmp_testbit($part, 1)){ /* PHP 5 >= 5.3.0, untested */ $part = gmp_xor ( gmp_div_q ( $part , "2" ), $poly64rev ); } else { $part = gmp_div_q ( $part , "2" ); } } self :: $CRCTable [ $i ] = $part ; } } for( $k = 0 ; $k < strlen ( $seq ); $k ++){ $tmp_gmp_val = gmp_init ( ord ( $seq [ $k ]), 10 ); $tableindex = gmp_xor ( gmp_and ( $crc , "0xff" ), $tmp_gmp_val ); $crc = gmp_div_q ( $crc , "256" ); $crc = gmp_xor ( $crc , self :: $CRCTable [ gmp_strval ( $tableindex , 10 )]); } $res = gmp_strval ( $crc , 16 ); return $res ; } } } else { die( "Please install php-gmp package!!!" ); } ?> up down 1 quix at free dot fr ¶ 22 years ago I needed the crc32 of a file that was pretty large, so I didn't want to read it into memory. So I made this: <?php $GLOBALS [ '__crc32_table' ]=array(); // Lookup table array __crc32_init_table (); function __crc32_init_table () { // Builds lookup table array // This is the official polynomial used by // CRC-32 in PKZip, WinZip and Ethernet. $polynomial = 0x04c11db7 ; // 256 values representing ASCII character codes. 
for( $i = 0 ; $i <= 0xFF ;++ $i ) { $GLOBALS [ '__crc32_table' ][ $i ]=( __crc32_reflect ( $i , 8 ) << 24 ); for( $j = 0 ; $j < 8 ;++ $j ) { $GLOBALS [ '__crc32_table' ][ $i ]=(( $GLOBALS [ '__crc32_table' ][ $i ] << 1 ) ^ (( $GLOBALS [ '__crc32_table' ][ $i ] & ( 1 << 31 ))? $polynomial : 0 )); } $GLOBALS [ '__crc32_table' ][ $i ] = __crc32_reflect ( $GLOBALS [ '__crc32_table' ][ $i ], 32 ); } } function __crc32_reflect ( $ref , $ch ) { // Reflects CRC bits in the lookup table $value = 0 ; // Swap bit 0 for bit 7, bit 1 for bit 6, etc. for( $i = 1 ; $i <( $ch + 1 );++ $i ) { if( $ref & 1 ) $value |= ( 1 << ( $ch - $i )); $ref = (( $ref >> 1 ) & 0x7fffffff ); } return $value ; } function __crc32_string ( $text ) { // Creates a CRC from a text string // Once the lookup table has been filled in by the two functions above, // this function creates all CRCs using only the lookup table. // You need unsigned variables because negative values // introduce high bits where zero bits are required. // PHP doesn't have unsigned integers: // I've solved this problem by doing a '&' after a '>>'. // Start out with all bits set high. $crc = 0xffffffff ; $len = strlen ( $text ); // Perform the algorithm on each character in the string, // using the lookup table values. for( $i = 0 ; $i < $len ;++ $i ) { $crc =(( $crc >> 8 ) & 0x00ffffff ) ^ $GLOBALS [ '__crc32_table' ][( $crc & 0xFF ) ^ ord ( $text { $i })]; } // Exclusive OR the result with the beginning value. return $crc ^ 0xffffffff ; } function __crc32_file ( $name ) { // Creates a CRC from a file // Info: look at __crc32_string // Start out with all bits set high. $crc = 0xffffffff ; if(( $fp = fopen ( $name , 'rb' ))=== false ) return false ; // Perform the algorithm on each character in file for(;;) { $i =@ fread ( $fp , 1 ); if( strlen ( $i )== 0 ) break; $crc =(( $crc >> 8 ) & 0x00ffffff ) ^ $GLOBALS [ '__crc32_table' ][( $crc & 0xFF ) ^ ord ( $i )]; } @ fclose ( $fp ); // Exclusive OR the result with the beginning value. 
return $crc ^ 0xffffffff ; } ?> up down 1 spectrumizer at cycos dot net ¶ 23 years ago Here is a tested and working CRC16 algorithm: <?php
function crc16($string) {
    $crc = 0xFFFF;
    for ($x = 0; $x < strlen($string); $x++) {
        $crc = $crc ^ ord($string[$x]);
        for ($y = 0; $y < 8; $y++) {
            if (($crc & 0x0001) == 0x0001) {
                $crc = (($crc >> 1) ^ 0xA001);
            } else {
                $crc = $crc >> 1;
            }
        }
    }
    return $crc;
}
?> Regards, Mario
up down 1 Ren ¶ 18 years ago Dealing with 32-bit unsigned values overflowing 32-bit PHP signed values can be done by adding 0x100000000 to any unexpected negative result, rather than using sprintf.
$i = crc32('1');
printf("%u\n", $i);
if (0 > $i) {
    // Implicitly casts $i as float, and corrects the sign.
    $i += 0x100000000;
}
var_dump($i);
Outputs:
2212294583
float(2212294583)
up down 0 dotg at mail dot ru ¶ 9 years ago crc32() results on 32-bit and 64-bit PHP are not equal for some values when abs() is used to force a positive result.
Not equal: <?= abs(crc32(1)); ?> gives 2212294583 on 64-bit, 2082672713 on 32-bit.
Equal: <?= abs(crc32(3)); ?> gives 1842515611 on 64-bit and 1842515611 on 32-bit.
up down 0 toggio at writeme dot com ¶ 9 years ago A faster implementation of Modbus CRC16:
function crc16($data) {
    $crc = 0xFFFF;
    for ($i = 0; $i < strlen($data); $i++) {
        $crc ^= ord($data[$i]);
        for ($j = 8; $j != 0; $j--) {
            if (($crc & 0x0001) != 0) {
                $crc >>= 1;
                $crc ^= 0xA001;
            } else {
                $crc >>= 1;
            }
        }
    }
    return $crc;
}
up down -1 mail at tristansmis dot nl ¶ 18 years ago I used the abs value of this function on a 32-bit system. When porting the code to a 64-bit system I found that the value is different. The following code has the same outcome on both systems.
<?php $crc = abs ( crc32 ( $string )); if( $crc & 0x80000000 ){ $crc ^= 0xffffffff ; $crc += 1 ; } /* Old solution * $crc = abs(crc32($string)) */ ?> up down -2 gabri dot ns at gmail dot com ¶ 15 years ago if you are looking for a fast function to hash a file, take a look at http://www.php.net/manual/en/function.hash-file.php this is crc32 file checker based on a CRC32 guide it have performance at ~ 625 KB/s on my 2.2GHz Turion far slower than hash_file('crc32b','filename.ext') <?php function crc32_file ( $filename ) { $f = @ fopen ( $filename , 'rb' ); if (! $f ) return false ; static $CRC32Table , $Reflect8Table ; if (!isset( $CRC32Table )) { $Polynomial = 0x04c11db7 ; $topBit = 1 << 31 ; for( $i = 0 ; $i < 256 ; $i ++) { $remainder = $i << 24 ; for ( $j = 0 ; $j < 8 ; $j ++) { if ( $remainder & $topBit ) $remainder = ( $remainder << 1 ) ^ $Polynomial ; else $remainder = $remainder << 1 ; } $CRC32Table [ $i ] = $remainder ; if (isset( $Reflect8Table [ $i ])) continue; $str = str_pad ( decbin ( $i ), 8 , '0' , STR_PAD_LEFT ); $num = bindec ( strrev ( $str )); $Reflect8Table [ $i ] = $num ; $Reflect8Table [ $num ] = $i ; } } $remainder = 0xffffffff ; while ( $data = fread ( $f , 1024 )) { $len = strlen ( $data ); for ( $i = 0 ; $i < $len ; $i ++) { $byte = $Reflect8Table [ ord ( $data [ $i ])]; $index = (( $remainder >> 24 ) & 0xff ) ^ $byte ; $crc = $CRC32Table [ $index ]; $remainder = ( $remainder << 8 ) ^ $crc ; } } $str = decbin ( $remainder ); $str = str_pad ( $str , 32 , '0' , STR_PAD_LEFT ); $remainder = bindec ( strrev ( $str )); return $remainder ^ 0xffffffff ; } ?> <?php $a = microtime (); echo dechex ( crc32_file ( 'filename.ext' )). "\n" ; $b = microtime (); echo array_sum ( explode ( ' ' , $b )) - array_sum ( explode ( ' ' , $a )). 
"\n" ; ?> Output: ec7369fe 2.384134054184 (or similiar) + add a note Dizge İşlevleri addcslashes addslashes bin2hex chop chr chunk_​split convert_​uudecode convert_​uuencode count_​chars crc32 crypt echo explode fprintf get_​html_​translation_​table hebrev hex2bin html_​entity_​decode htmlentities htmlspecialchars htmlspecialchars_​decode implode join lcfirst levenshtein localeconv ltrim md5 md5_​file metaphone nl_​langinfo nl2br number_​format ord parse_​str print printf quoted_​printable_​decode quoted_​printable_​encode quotemeta rtrim setlocale sha1 sha1_​file similar_​text soundex sprintf sscanf str_​contains str_​decrement str_​ends_​with str_​getcsv str_​increment str_​ireplace str_​pad str_​repeat str_​replace str_​rot13 str_​shuffle str_​split str_​starts_​with str_​word_​count strcasecmp strchr strcmp strcoll strcspn strip_​tags stripcslashes stripos stripslashes stristr strlen strnatcasecmp strnatcmp strncasecmp strncmp strpbrk strpos strrchr strrev strripos strrpos strspn strstr strtok strtolower strtoupper strtr substr substr_​compare substr_​count substr_​replace trim ucfirst ucwords vfprintf vprintf vsprintf wordwrap Deprecated convert_​cyr_​string hebrevc money_​format utf8_​decode utf8_​encode Copyright © 2001-2026 The PHP Documentation Group My PHP.net Contact Other PHP.net sites Privacy policy ↑ and ↓ to navigate • Enter to select • Esc to close • / to open Press Enter without selection to search using Google
2026-01-13T09:30:39
https://github.com/docker/build-push-action
GitHub - docker/build-push-action: GitHub Action to build and push Docker images with Buildx
docker / build-push-action Public
GitHub Action to build and push Docker images with Buildx
github.com/marketplace/actions/build-and-push-docker-images
License: Apache-2.0
5.1k stars · 697 forks
Code · Issues 41 · Pull requests 12 · Discussions · Actions · Projects 0 · Security
docker/build-push-action · master · 1,065 commits

Files: .github · .yarn/plugins/@yarnpkg · __mocks__/@actions · __tests__ · dist · src · test · .dockerignore · .editorconfig · .gitattributes · .gitignore · .prettierignore · .prettierrc.json · .yarnrc.yml · LICENSE · README.md · TROUBLESHOOTING.md · action.yml · codecov.yml · dev.Dockerfile · docker-bake.hcl · eslint.config.js · jest.config.js · package.json · tsconfig.json · yarn.lock

README

About

GitHub Action to build and push Docker images with Buildx, with full support of the features provided by the Moby BuildKit builder toolkit. This includes multi-platform builds, secrets, remote cache, etc., and different builder deployment/namespacing options.

Contents: Usage (Git context, Path context) · Examples · Summaries · Customizing (inputs, outputs, environment variables) · Troubleshooting · Contributing

Usage

In the examples below we are also using three other actions:
- setup-buildx action creates and boots a builder, by default with the docker-container driver. This is not required, but it is recommended so you can build multi-platform images, export cache, etc.
- setup-qemu action can be useful if you want to add emulation support with QEMU to be able to build against more platforms.
- login action takes care of logging in against a Docker registry.

Git context

By default, this action uses the Git context, so you don't need to use the actions/checkout action to check out the repository; this is done directly by BuildKit. The Git reference is based on the event that triggered your workflow and results in the following context: https://github.com/<owner>/<repo>.git#<ref>

name: ci
on:
  push:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          push: true
          tags: user/app:latest

Be careful: any file mutation in the steps that precede the build step will be ignored, including processing of the .dockerignore file, since the context is based on the Git reference. However, you can use the Path context via the context input alongside the actions/checkout action to remove this restriction.

The default Git context can also be provided using the Handlebars template expression {{defaultContext}}. Here we use it to provide a subdirectory of the default Git context:

- name: Build and push
  uses: docker/build-push-action@v6
  with:
    context: "{{defaultContext}}:mysubdir"
    push: true
    tags: user/app:latest

Building from the current repository automatically uses the GitHub Token, so it does not need to be passed.
If you want to authenticate against another private repository, you have to use a secret named GIT_AUTH_TOKEN to be able to authenticate against it with Buildx:

- name: Build and push
  uses: docker/build-push-action@v6
  with:
    push: true
    tags: user/app:latest
    secrets: |
      GIT_AUTH_TOKEN=${{ secrets.MYTOKEN }}

Path context

name: ci
on:
  push:
jobs:
  docker:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v5
      - name: Login to Docker Hub
        uses: docker/login-action@v3
        with:
          username: ${{ vars.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}
      - name: Set up QEMU
        uses: docker/setup-qemu-action@v3
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v3
      - name: Build and push
        uses: docker/build-push-action@v6
        with:
          context: .
          push: true
          tags: user/app:latest

Examples

- Multi-platform image
- Secrets
- Push to multi-registries
- Manage tags and labels
- Cache management
- Export to Docker
- Test before push
- Validating build configuration
- Local registry
- Share built image between jobs
- Named contexts
- Copy image between registries
- Update Docker Hub repo description
- SBOM and provenance attestations
- Annotations
- Reproducible builds

Summaries

This action generates a job summary that provides a detailed overview of the build execution. The summary shows an overview of all the steps executed during the build, including the build inputs and eventual errors. It also includes a link for downloading the build record with additional details about the build, including build stats, logs, outputs, and more. The build record can be imported to Docker Desktop for inspecting the build in greater detail.
Warning: If you're using the actions/download-artifact action in your workflow, you need to ignore the build record artifacts if the name and pattern inputs are not specified (the default is to download all artifacts of the workflow), otherwise the action will fail:

- uses: actions/download-artifact@v4
  with:
    pattern: "!*.dockerbuild"

More info: actions/toolkit#1874

Summaries are enabled by default, but can be disabled with the DOCKER_BUILD_SUMMARY environment variable. For more information about summaries, refer to the documentation.

Customizing

inputs

The following inputs can be used as step.with keys.

List type is a newline-delimited string:

cache-from: |
  user/app:cache
  type=local,src=path/to/dir

CSV type is a comma-delimited string:

tags: name/app:latest,name/app:1.0.0

Name · Type · Description
- add-hosts (List/CSV): List of custom host-to-IP mappings (e.g., docker:10.180.0.1)
- allow (List/CSV): List of extra privileged entitlements (e.g., network.host,security.insecure)
- annotations (List): List of annotations to set on the image
- attests (List): List of attestation parameters (e.g., type=sbom,generator=image)
- builder (String): Builder instance (see setup-buildx action)
- build-args (List): List of build-time variables
- build-contexts (List): List of additional build contexts (e.g., name=path)
- cache-from (List): List of external cache sources (e.g., type=local,src=path/to/dir)
- cache-to (List): List of cache export destinations (e.g., type=local,dest=path/to/dir)
- call (String): Set method for evaluating build (e.g., check)
- cgroup-parent (String): Optional parent cgroup for the container used in the build
- context (String): Build's context is the set of files located in the specified PATH or URL (default Git context)
- file (String): Path to the Dockerfile
  (default {context}/Dockerfile)
- labels (List): List of metadata for an image
- load (Bool): Load is a shorthand for --output=type=docker (default false)
- network (String): Set the networking mode for the RUN instructions during build
- no-cache (Bool): Do not use cache when building the image (default false)
- no-cache-filters (List/CSV): Do not cache specified stages
- outputs (List): List of output destinations (format: type=local,dest=path)
- platforms (List/CSV): List of target platforms for build
- provenance (Bool/String): Generate provenance attestation for the build (shorthand for --attest=type=provenance)
- pull (Bool): Always attempt to pull all referenced images (default false)
- push (Bool): Push is a shorthand for --output=type=registry (default false)
- sbom (Bool/String): Generate SBOM attestation for the build (shorthand for --attest=type=sbom)
- secrets (List): List of secrets to expose to the build (e.g., key=string, GIT_AUTH_TOKEN=mytoken)
- secret-envs (List/CSV): List of secret env vars to expose to the build (e.g., key=envname, MY_SECRET=MY_ENV_VAR)
- secret-files (List): List of secret files to expose to the build (e.g., key=filename, MY_SECRET=./secret.txt)
- shm-size (String): Size of /dev/shm (e.g., 2g)
- ssh (List): List of SSH agent sockets or keys to expose to the build
- tags (List/CSV): List of tags
- target (String): Sets the target stage to build
- ulimit (List): Ulimit options (e.g., nofile=1024:1024)
- github-token (String): GitHub Token used to authenticate against a repository for Git context (default ${{ github.token }})

outputs

The following outputs are available:
- imageid (String): Image ID
- digest (String): Image digest
- metadata (JSON): Build result metadata

environment variables

- DOCKER_BUILD_CHECKS_ANNOTATIONS (Bool, default true): If false, GitHub annotations are not generated for build checks
- DOCKER_BUILD_SUMMARY (Bool, default true): If false, build summary generation is disabled
- DOCKER_BUILD_RECORD_UPLOAD (Bool, default true): If false, build record upload as a GitHub artifact is disabled
- DOCKER_BUILD_RECORD_RETENTION_DAYS (Number): Duration in days after which the build record artifact will expire. Defaults to repository/org retention settings if unset or 0
- DOCKER_BUILD_EXPORT_LEGACY (Bool, default false): If true, exports the build using the legacy export-build tool instead of the buildx history export command

Troubleshooting

See TROUBLESHOOTING.md

Contributing

Want to contribute? Awesome! You can find information about contributing to this project in CONTRIBUTING.md

Topics: docker, dockerhub, github-actions, buildx, github-actions-docker
Releases: 58 (latest v6.18.0, May 27, 2025) · Used by 771,171 repositories · 44 contributors · 37 watchers
Languages: TypeScript 87.1%, Dockerfile 7.1%, JavaScript 3.8%, HCL 1.7%, Go 0.3%
2026-01-13T09:30:39
https://www.timeforkids.com/k1/topics/movies-and-television/
TIME for Kids | Movies and Television | Topic | K-1

Movies and Television

Entertainment: Forecasting Family (August 22, 2025). Lily Hunter is brave and curious. She is 8 years old. And she is the main character on Weather Hunters. It is a new series from PBS Kids®. Lily and her family explore weather. Viewers will learn about rain…

Entertainment: Good Night! (May 29, 2020). The Not-Too-Late Show with Elmo is a new talk show. It is hosted by the popular Sesame Street character. In each episode, Elmo gets ready for bed. He does this while chatting with special guests.
The show premiered this week.…

Arts: It's Showtime (March 20, 2020). Think about your favorite movie or TV show. Do you know how it was made? Many people work together to bring movies and TV shows to the screen. Read on to find out about their jobs. Lights, camera, action! Writing…

Arts: The Art of Making Sound (March 20, 2020). Movies are filled with sound. You hear actors talk. You hear music. You also hear sound effects. These are especially important in animated films. People who make sound effects are called Foley artists. They use their imagination and lots of…

© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://llvm.org/doxygen/dir_32453792af2ba70c54e3ccae3a790d1b.html
LLVM 22.0.0git: include/llvm/ADT Directory Reference [Directory dependency graph for ADT: SVG graph omitted] Files   AddressRanges.h   ADL.h   AllocatorList.h   Any.h   This file provides Any, a non-template class modeled in the spirit of std::any.   APFixedPoint.h   Defines the fixed point number interface.   APFloat.h   This file declares a class to represent arbitrary precision floating point values and provide a variety of arithmetic operations on them.   APInt.h   This file implements a class to represent arbitrary precision integral constant values and operations on them.   APSInt.h   This file implements the APSInt class, which is a simple class that represents an arbitrary sized integer that knows its signedness.   ArrayRef.h   bit.h   This file implements the C++20 <bit> header.   Bitfields.h   This file implements methods to test, set and extract typed bits from packed unsigned integers.   BitmaskEnum.h   Bitset.h   BitVector.h   This file implements the BitVector class.   BreadthFirstIterator.h   This file builds on the ADT/GraphTraits.h file to build a generic breadth first graph iterator.   CachedHashString.h   This file defines CachedHashString and CachedHashStringRef.   CoalescingBitVector.h   A bitvector that uses an IntervalMap to coalesce adjacent elements into intervals.   CombinationGenerator.h   Combination generator.   ConcurrentHashtable.h   DAGDeltaAlgorithm.h   DeltaAlgorithm.h   DeltaTree.h   DenseMap.h   This file defines the DenseMap class.   DenseMapInfo.h   This file defines DenseMapInfo traits for DenseMap.   DenseMapInfoVariant.h   This file defines DenseMapInfo traits for DenseMap<std::variant<Ts...>>.   DenseSet.h   This file defines the DenseSet and SmallDenseSet classes.   DepthFirstIterator.h   This file builds on the ADT/GraphTraits.h file to build generic depth first graph iterator.   
DirectedGraph.h   This file defines the interface and a base class implementation for a directed graph.   DynamicAPInt.h   edit_distance.h   This file defines a Levenshtein distance function that works for any two sequences, with each element of each sequence being analogous to a character in a string.   EnumeratedArray.h   This file defines an array type that can be indexed using scoped enum values.   EpochTracker.h   This file defines the DebugEpochBase and DebugEpochBase::HandleBase classes.   EquivalenceClasses.h   Generic implementation of equivalence classes through the use Tarjan's efficient union-find algorithm.   fallible_iterator.h   FloatingPointMode.h   Utilities for dealing with flags related to floating point properties and mode controls.   FoldingSet.h   This file defines a hash set that can be used to remove duplication of nodes in a graph.   FunctionExtras.h   This file provides a collection of function (or more generally, callable) type erasure utilities supplementing those provided by the standard library in <function> .   GenericConvergenceVerifier.h   A verifier for the static rules of convergence control tokens that works with both LLVM IR and MIR.   GenericCycleImpl.h   This template implementation resides in a separate file so that it does not get injected into every .cpp file that includes the generic header.   GenericCycleInfo.h   Find all cycles in a control-flow graph, including irreducible loops.   GenericSSAContext.h   This file defines the little GenericSSAContext<X> template class that can be used to implement IR analyses as templates.   GenericUniformityImpl.h   Implementation of uniformity analysis.   GenericUniformityInfo.h   GraphTraits.h   This file defines the little GraphTraits<X> template class that should be specialized by classes that want to be iteratable by generic graph iterators.   Hashing.h   ilist.h   This file defines classes to implement an intrusive doubly linked list class (i.e.   
ilist_base.h
ilist_iterator.h
ilist_node.h: This file defines the ilist_node class template, which is a convenient base class for creating classes that can be used with ilists.
ilist_node_base.h
ilist_node_options.h
ImmutableList.h: This file defines the ImmutableList class.
ImmutableMap.h: This file defines the ImmutableMap class.
ImmutableSet.h: This file defines the ImutAVLTree and ImmutableSet classes.
IndexedMap.h: This file implements an indexed map.
IntEqClasses.h: Equivalence classes for small integers.
IntervalMap.h: This file implements a coalescing interval map for small objects.
IntervalTree.h
IntrusiveRefCntPtr.h: This file defines the RefCountedBase, ThreadSafeRefCountedBase, and IntrusiveRefCntPtr classes.
iterator.h
iterator_range.h: This provides a very simple, boring adaptor for a begin and end iterator into a range type.
LazyAtomicPointer.h
MapVector.h: This file implements a map that provides insertion order iteration.
PackedVector.h: This file implements the PackedVector class.
PagedVector.h
PointerEmbeddedInt.h
PointerIntPair.h: This file defines the PointerIntPair class.
PointerSumType.h
PointerUnion.h: This file defines the PointerUnion class, which is a discriminated union of pointer types.
PostOrderIterator.h: This file builds on the ADT/GraphTraits.h file to build a generic graph post order iterator.
PriorityQueue.h: This file defines the PriorityQueue class.
PriorityWorklist.h: This file provides a priority worklist.
RadixTree.h
RewriteBuffer.h
RewriteRope.h
SCCIterator.h: This builds on the llvm/ADT/GraphTraits.h file to find the strongly connected components (SCCs) of a graph in O(N+E) time using Tarjan's DFS algorithm.
ScopedHashTable.h
ScopeExit.h: This file defines the make_scope_exit function, which executes user-defined cleanup logic at scope exit.
Sequence.h: Provides some synthesis utilities to produce sequences of values.
SetOperations.h: This file defines generic set operations that may be used on sets of different types, and different element types.
SetVector.h: This file implements a set that has insertion order iteration characteristics.
simple_ilist.h
SlowDynamicAPInt.h
SmallBitVector.h: This file implements the SmallBitVector class.
SmallPtrSet.h: This file defines the SmallPtrSet class.
SmallSet.h: This file defines the SmallSet class.
SmallString.h: This file defines the SmallString class.
SmallVector.h: This file defines the SmallVector class.
SmallVectorExtras.h: This file defines less commonly used SmallVector utilities.
SparseBitVector.h: This file defines the SparseBitVector class.
SparseMultiSet.h: This file defines the SparseMultiSet class, which adds multiset behavior to the SparseSet.
SparseSet.h: This file defines the SparseSet class derived from the version described in Briggs and Torczon, "An efficient representation for sparse sets", ACM Letters on Programming Languages and Systems, Volume 2, Issue 1-4, March-Dec.
StableHashing.h
Statistic.h: This file defines the 'Statistic' class, which is designed to be an easy way to expose various metrics from passes.
STLExtras.h: This file contains some templates that are useful if you are working with the STL at all.
STLForwardCompat.h: This file contains library features backported from future STL versions.
STLFunctionalExtras.h
StringExtras.h: This file contains some functions that are useful when dealing with strings.
StringMap.h: This file defines the StringMap class.
StringMapEntry.h: This file defines the StringMapEntry class; it is intended to be a low-dependency implementation detail of StringMap that is more suitable for inclusion in public headers than StringMap.h itself is.
StringRef.h
StringSet.h: StringSet, a set-like wrapper for the StringMap.
StringSwitch.h: This file implements the StringSwitch template, which mimics a switch() statement whose cases are string literals.
StringTable.h
TinyPtrVector.h
TrieHashIndexGenerator.h
TrieRawHashMap.h
Twine.h
TypeSwitch.h: This file implements the TypeSwitch template, which mimics a switch() statement whose cases are type names.
Uniformity.h
UniqueVector.h

Generated for LLVM by Doxygen 1.14.0
2026-01-13T09:30:39
https://wiki.php.net/rfc/hash_pbkdf2?do=#request_for_commentsadding_hash_pbkdf2_function
PHP: rfc:hash_pbkdf2

Request for Comments: Adding hash_pbkdf2 Function

Version: 1.0
Date: 2012-06-13
Author: Anthony Ferrara <ircmaxell@php.net>
Status: Implemented
First Published at: http://wiki.php.net/rfc/hash_pbkdf2

This RFC proposes adding a hash_pbkdf2 function to the hash package.

Introduction

The purpose of this RFC is to add the PBKDF2 algorithm to the available hashing functions as a C implementation.

Why do we need PBKDF2?

PBKDF2 is defined in RFC 2898 as a method for implementing password-based cryptographic needs. These needs can include password storage, password derivation into a key (for encryption), or secure signatures. Additionally, it is NIST Recommended for password storage. Adding a core implementation of the PBKDF2 algorithm will enable PHP projects to utilize a fast implementation of the algorithm, putting them on a more level playing field against attackers. Since the C implementation is more efficient than a PHP-land implementation, more rounds can be computed for the same computational cost. This enables higher iteration counts to be used, providing more security with less impact on the overall performance of the application.

Projects and Software That Currently Use PBKDF2

WPA and WPA2, for key derivation from a password
OpenDocument encryption (OpenOffice.org)
WinZip AES encryption
1Password
LastPass
Apple iOS
Blackberry Backup Encryption
The Django Python framework

Recommended Parameters For PBKDF2

$algo

The way hash_pbkdf2 is written, any currently supported hash_algos() algorithm can be used as the base for the algorithm. This means it is up to the developer to choose the appropriate algorithm when using the function. Here are a few of the popular algorithms and some recommendations around them. Note that any supported cryptographic hash algorithm can be used successfully with PBKDF2 (CRC32 is *not* cryptographic, so it should not be used).
SHA512 - This is currently one of the strongest algorithms available in PHP. It makes a good primitive for hash_pbkdf2.
SHA256 - This is also plenty strong enough for use as the basis for PBKDF2.

A note on other popular algorithms: SHA1 and MD5 are both actually strong enough for effective use in PBKDF2. The reason is that the known attack vectors against these algorithms require knowledge of the input string being hashed; an iterated algorithm such as PBKDF2 is therefore immune to the known attack vectors. That means they are OK to use for this task. With that said, the recommended approach is to use SHA512 or SHA256 instead, as the base algorithms are stronger. But it is not necessarily *bad* to use SHA1 or MD5.

$salt

The salt parameter should be a random string containing at least 64 bits of entropy. When generated by a function like mcrypt_create_iv, that means at least 8 bytes long. But for salts that consist only of a-zA-Z0-9 characters (or are base64-encoded), the minimum length should be at least 11 characters. The salt should be generated randomly for each password that is hashed, and stored alongside the generated key.

$iterations

The iterations parameter provides the ability to tune the algorithm for different servers and needs. For most web uses, a minimum value of 1000 is recommended. However, as hardware varies greatly, testing should be done to find an iteration count that yields a function runtime of between 0.1 and 0.5 seconds (depending again on the application). On higher-end servers, this can be as much as 20,000 to 50,000 iterations (also depending on the hash algorithm used). It is better to use the highest iteration count possible, as it will only increase the resistance to brute-forcing.

$length

The length parameter indicates the length of the returned key. The default value for length is the length of the hash algorithm's output. However, this can be increased or decreased as necessary.
For example, if you're using PBKDF2 to generate a password-based key for use in an encryption routine such as Rijndael-256, which expects a 256-bit key, you would pass the length parameter as 256/8 (to get the byte length) and set $raw_output to true.

$raw_output

This parameter behaves just like the other hash_* functions. If set to true, the function returns a binary string (chr 0-255). If set to false, the function hex-encodes the result before returning it.

Example

Let's say you wanted to encrypt a file using a password. The password shouldn't be applied directly to the encryption function, but should be derived first.

encryption.php:

<?php
$password = "foo";
$data = "testing this out";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$key = hash_pbkdf2("sha512", $password, $salt, 5000, 16, true);
// $key will be full-byte 0-255 data

$iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC), MCRYPT_DEV_URANDOM);

$ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $data, MCRYPT_MODE_CBC, $iv);
?>

Or for storing passwords (BCrypt is recommended, but there are use cases for PBKDF2, such as when NIST compliance is mandated):

password.php:

<?php
$password = "foo";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$hash = hash_pbkdf2("sha512", $password, $salt, 5000, 32);
// $hash will be a hex-encoded string
?>

Proposal and Patch

The proposal is to add a hash_pbkdf2() function to the hash extension in core. The proposed function has the signature:

string hash_pbkdf2(string algo, string password, string salt, int iterations [, int length = 0, bool raw_output = false])

The patch is available as a pull request against trunk. This RFC intends to add this functionality to master (5.5) only.

Vote

Vote begins on 2012/07/02 and ends on 2012/07/09. This vote is to include the new function in master only (5.5).

Voting yes: dragoonis, hradtke, ircmaxell, kriscraig, lynch, nikic, rasmus, shm, stas

Final result: 9 in favor, 0 against. This poll has been closed.

More about PBKDF2

RFC 2898
Wikipedia
NIST Recommendation (PDF)
A Reference Implementation in PHP

Changelog

0.1 - Initial Version
0.2 - Proposed
0.3 - Added Parameter Information
0.4 - Reworded to target master only, removing the 5.4 section
1.0 - Moving to Accepted state

rfc/hash_pbkdf2.txt · Last modified: 2025/04/03 13:08 by 127.0.0.1
Copyright © 2001-2026 The PHP Group
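For readers who want to sanity-check the RFC's semantics outside PHP, the same RFC 2898 derivation is available in Python's standard library as hashlib.pbkdf2_hmac. This is a sketch, not part of the RFC: os.urandom stands in for mcrypt_create_iv, and the parameter mapping in the comments is an illustration rather than a claim about PHP's exact $length semantics.

```python
import hashlib
import os

password = b"foo"
salt = os.urandom(16)   # stands in for mcrypt_create_iv(16, MCRYPT_DEV_URANDOM)
iterations = 5000

# Derive a 32-byte key, then hex-encode it for storage, mirroring the
# hex output hash_pbkdf2 produces when $raw_output is false.
raw = hashlib.pbkdf2_hmac("sha512", password, salt, iterations, dklen=32)
stored = raw.hex()
print(len(stored))  # 64 hex characters
```

As with the PHP function, the salt must be stored alongside the result, since the same salt and iteration count are needed to re-derive and compare the key later.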
2026-01-13T09:30:39
https://www.timeforkids.com/k1/#main-content
TIME for Kids | Articles | K-1

Animals: Time to Eat! (December 22, 2025). Animals have favorite foods. Rabbits eat plants. They eat grass and leaves. Foxes eat other animals. Some animals eat all kinds of things. Different animals have different diets. Learn about some of them here. Meat Eaters Lions are carnivores.…

Animals: Who Eats What? (December 22, 2025). Some animals eat only meat. Others eat only plants. Some eat both. How can we tell who eats what? We can use this chart. It is called a Venn diagram. The animals on the left are carnivores. Those on the…

Animals: Animals Talk Too (December 22, 2025). How do you share your thoughts? You might use words. You might use your hands. Animals do not speak the way we do. But they have lots of ways to communicate. Sounding Off This monkey has a loud voice.…

Animals: How Animals Vote (December 22, 2025). People can vote. Did you know that groups of animals can vote too? They vote on where to look for food. Or they vote on where to live. Meerkats Meerkats (above) search for food. They can vote to search faster.…

Animals: Animal Defenses (December 19, 2025). Animals protect themselves against predators. Predators are other animals that want to eat them. How do they protect themselves? They use their defenses. Find out more. What’s That Smell? Skunks have a stinky secret. They spray a bad odor.…

Animals: Clever Colors (December 19, 2025). Often, an animal’s coloring helps it survive. Some colors say, “Don’t mess with me.” Others help animals trap food. Here are some examples. Take a look. Warning Sign This frog (above) is bright orange. Its color sends a warning…

Animals: Living in the Wild (December 12, 2025). Some animals live in green forests. Others live in deserts. These places are called habitats. A habitat has everything an animal needs. It has shelter. It has food. And it is the right temperature for its animals. Grassland Habitat Lions…

Community: Community Care (November 5, 2025). It is important to stay clean. It keeps your body healthy. Personal care items help. Toilet paper is a personal care item. So are soap and toothpaste. Audrey Brown is 13. She helps people get these items. Collecting Things…

Community: Take Care (November 5, 2025). We all have health issues sometimes. Someone might get the flu. Someone might skin their knee. You can show that you care. A small act can make a big difference. It can cheer someone up when they feel down. It…

Community: Protect the Planet (October 31, 2025). Start a compost bin. Pick up litter. These are ways to help the environment. Everyone has a part to play. Every action matters. How will you keep the Earth clean and beautiful? Read a few ideas below. What others can…

Community: Pals with Paws (October 24, 2025). Hayden Roland is 12. He loves animals. He has six pets. But he takes care of many more. Hayden started a group called Wagging Tails. The group is almost two years old. It helps animals without homes. Animal Care Some…

Community: Help Animals (October 24, 2025). Animals are all around us. They need our care and attention. Some people volunteer at animal shelters. Others work to protect wildlife habitats. How can you help the creatures around you? Read some ideas below. Then imagine a few of…

Community: Play Your Part (October 17, 2025). You are part of many communities. Your neighborhood is one. Sports teams and clubs are communities too. How can you make your communities better? There are lots of ways. Read about a few. Keep places clean. Do you see…

© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://www.timeforkids.com/k1/topics/the-view/
TIME for Kids | The View | Topic | K-1

The View / Business: Money Matters (November 15, 2017). Money pays for the things we need. Do you have money? Then you have choices about what you can do with it. You can save money. You can spend it. You can give it away. Earning Money Even kids can…
2026-01-13T09:30:39
https://nbviewer.jupyter.org/github/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb#Multi-panel-figures
Jupyter Notebook Viewer: python_for_visres / Part3

Scientific Python: Transitioning from MATLAB to Python

Part of the introductory series to using Python for Vision Research, brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: NumPy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents

A Quick Recap: data types; lists; functions; objects
Numpy: why we need Numpy; the ndarray data type; shape and dtype; indexing and slicing; filling and manipulating arrays; a few useful functions; a small exercise; a bit harder: the Gabor; Boolean indexing; vectorizing a simulation
PIL, the Python Imaging Library: loading and showing images; resizing, rotating, cropping and converting; advanced; saving; exercise
Matplotlib: quick plots; saving to a file; visualizing arrays; multi-panel figures; exercise: function plots; finer figure control; exercise: add regression lines
Scipy: statistics; fast Fourier transform

A Quick Recap

Data types

Depending on what kind of values you want to store, Python variables can be of different data types. For instance (note that this notebook uses Python 2, where print is a statement):

In [ ]:
my_int = 5
print my_int, type(my_int)
my_float = 5.0
print my_float, type(my_float)
my_boolean = False
print my_boolean, type(my_boolean)
my_string = 'hello'
print my_string, type(my_string)

Lists

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.

In [ ]:
print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)

Functions

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When a function has no output argument, it returns None.

In [ ]:
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x * b) + c

print regress(5)         # Only required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument

In [ ]:
# Function without return argument
def divisible(a, b):
    if a % b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9, 3)
res = divisible(9, 2)
print res

In [ ]:
# Function with multiple return arguments
def add_diff(a, b):
    return a + b, a - b

# Assigned as a tuple
res = add_diff(5, 3)
print res

# Directly unpacked to two variables
a, d = add_diff(5, 3)
print a
print d

Objects

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

In [ ]:
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly: you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

In [ ]:
return_arg = my_list.append('another one')
print return_arg
print my_list

In [ ]:
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string

Do you remember why list functions are in-place, while string functions are not?

Numpy

Why we need Numpy

While lists are great, they are not very suitable for scientific computing. Consider this example:

In [ ]:
subj_length = [180.0, 165.0, 190.0, 172.0, 156.0]
subj_weight = [75.0, 60.0, 83.0, 85.0, 62.0]
subj_bmi = []
# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2

Clearly, this is clumsy. MATLAB users would expect something like this to work:

In [ ]:
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = mean(subj_bmi)

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

The ndarray data type

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

In [ ]:
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators

Numpy is a very large package that we can't possibly cover completely. But we will cover enough to get you started.
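For reference, the elementwise computation the recap has been building toward looks like this in full. This sketch is not part of the original notebook, and it is written for Python 3 (print as a function), unlike the notebook's Python 2 cells:

```python
import numpy as np

subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# / and ** operate elementwise on ndarrays, so no loop is needed
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = np.mean(subj_bmi)

print(subj_bmi)
print(mean_bmi)
```

The whole five-subject computation is a single expression; this is exactly the MATLAB-style vectorized code that plain lists could not support.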
shape and dtype

The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

In [ ]:
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1, 2, 3], [4, 5, 6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])

In [ ]:
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension, representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension, and represents the rows here. We could also make 3-D (or even higher-level) arrays:

In [ ]:
arr3d = np.array([[[1, 2, 3], [4, 5, 6]],
                  [[7, 8, 9], [10, 11, 12]]])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row, column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple.

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

In [ ]:
# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1, 2, 3], [4, 5, 6]])
print type(arr)

# So, let's do a computation
print arr / 2
# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype

In [ ]:
# And how do we fix this?
arr = arr.astype('float')  # Note: this is not an in-place function!
print arr.dtype
print arr / 2

In [ ]:
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1, 2, 3], [4, 5, 6]], dtype='float')
print arr.dtype
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
print arr.dtype

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

Indexing and slicing

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention: Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.

In [ ]:
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Indexing and slicing
print arr[0, 0]   # or: arr[0][0]
print arr[:-1, 0]

In [ ]:
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:, 0] * arr[:, 1]
print arr[0, :] * arr[1, :]
# Note that you could never slice across rows like this in a nested list!

In [ ]:
# This doesn't work
# print arr[1:,0] * arr[:,1]
# And here's why:
print arr[1:, 0].shape, arr[:, 1].shape

In [ ]:
# This however does work. You can always use scalars as the other operand.
print arr[:, 0] * arr[2, 2]
# Or, similarly:
print arr[:, 0] * 9.

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop.
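One possible answer to the exercise just posed, using slicing alone, is sketched below. This block is not part of the original notebook, and it is written as Python 3; the tutorial's later sections show cleaner axis-based reductions.

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Column-wise means: add the three row slices, divide by the number of rows
col_means = (arr[0, :] + arr[1, :] + arr[2, :]) / 3
# Row-wise means: add the three column slices, divide by the number of columns
row_means = (arr[:, 0] + arr[:, 1] + arr[:, 2]) / 3

# Stack the two 1-D results into the requested 2x3 array
result = np.array([col_means, row_means])
print(result.shape)  # (2, 3)
```

Each sum of slices is itself an elementwise ndarray operation, so no explicit loop is needed; the same idea generalizes, but hand-written sums clearly do not scale, which is why np.mean with an axis argument exists.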
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is similar to np.array(range(1,16,3)), but with a float dtype arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, as 3 equally spaced values # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np .
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np .
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works with radians units, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a google search for 'numpy tangens', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter. 
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use the dedicated elementwise operators for and, or, xor, and not. Note that the plain Python keywords and , or , and not do not work elementwise on Numpy arrays. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) .
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. 
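One building block that helps with this kind of vectorization is sketched below, on made-up rolls rather than the full 5000x2000 array: shifted boolean slices mark where a pattern starts, and np.argmax along an axis returns the index of the first True in each row, because True counts as the maximal value.

```python
import numpy as np

# Two hypothetical short roll sequences, just for illustration
rolls = np.array([[2, 1, 1, 1, 5],
                  [1, 1, 1, 4, 6]])

# A '111' pattern starts wherever three consecutive positions are all 1
starts = (rolls[:, :-2] == 1) & (rolls[:, 1:-1] == 1) & (rolls[:, 2:] == 1)
print(starts)

# argmax returns the index of the first True in each row
first = np.argmax(starts, axis=1)
print(first)
```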
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, we naturally work with images as stimuli. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is, however, still called 'PIL'. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
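As a starting point: the Pillow documentation states that convert('L') uses the ITU-R 601-2 luma transform, L = R*299/1000 + G*587/1000 + B*114/1000. Here is a minimal sketch, on a made-up 2x2 RGB array and without PIL, of how that weighted average differs from plain averaging:

```python
import numpy as np

# Hypothetical 2x2 RGB image (values 0-255), just for illustration
rgb = np.array([[[255, 0, 0], [0, 255, 0]],
                [[0, 0, 255], [255, 255, 255]]], dtype='float')

# Plain averaging of the three layers, as done in Numpy above
plain = np.mean(rgb, -1)

# The ITU-R 601-2 weighting used by PIL's convert('L')
weighted = (rgb[..., 0] * 299 + rgb[..., 1] * 587 + rgb[..., 2] * 114) / 1000

diff = weighted - plain
print(diff)  # positive for the green pixel, negative for red and blue
```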
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are first rescaled to span the full colormap range (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=\sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=\sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# For instance, fetch the X axis
# XAxis objects have their own methods
xax = ax1.get_xaxis()
print type(xax)

# These methods allow you to fetch the even smaller building blocks
# For instance, tick-lines are Line2D objects attached to the XAxis
xaxt = xax.get_majorticklines()
print len(xaxt)

# Of which you can fetch AND change the properties
# Here we change just one tickline into a cross
print xaxt[6].get_color()
xaxt[6].set_color('g')
xaxt[6].set_marker('x')
xaxt[6].set_markersize(10)

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# Another example: fetch the lines in the plot
# Change the color, change the marker, and mark only every 100 points for one specific line
ln = ax0.get_lines()
print ln
ln[0].set_color('g')
ln[0].set_marker('o')
ln[0].set_markerfacecolor('b')
ln[0].set_markevery(100)

# Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function
# And then attach it to existing Axes
# NOTE: we need to import something before we can create the ellipse like this. What should we import?
ell = matplotlib.patches.Ellipse((np.pi, 0), 1., 1., color='r')
ax0.add_artist(ell)
ell.set_hatch('//')
ell.set_edgecolor('black')
ell.set_facecolor((0.9, 0.9, 0.9))

Exercise: Add regression lines ¶
Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes.
Useful functions:
np.polyfit(x,y,1) performs a linear regression, returning slope and intercept
plt.gca() retrieves the current Axes object
matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors

In [ ]:
# EXERCISE 10: Add regression lines
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.lines as lines

# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

# Do the plotting
plt.figure(figsize=(5, 5))
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))

# Tweak the plot
plt.axis([0, 255, 0, 255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Fill in your code...

# Show the result
plt.show()

Scipy ¶
Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here. We will pick two useful modules from SciPy: stats and fftpack. I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work.

Statistics ¶
In [ ]:
import numpy as np
import scipy.stats as stats

# Generate random numbers between 0 and 1
data = np.random.rand(30)

# Do a t-test with H0 that the mean is 0.4
t, p = stats.ttest_1samp(data, 0.4)
print p

# Generate another sample of random numbers, with mean 0.4
data2 = np.random.rand(30) - 0.1

# Do a t-test that these have the same mean
t, p = stats.ttest_ind(data, data2)
print p

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions
# Given a constant n, and an increasing true effect size
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2*eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions, with increasing sd's
# Then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)

# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i]*np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i]*np.array([0, 1, 0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as the Kruskal-Wallis test, the Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here.
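As a quick, hedged taste of two of those extra tests (a sketch with made-up data, not part of the original notebook; `stats.kruskal` is the non-parametric counterpart of the one-way ANOVA used above, and `stats.shapiro` tests a sample for normality):

```python
import numpy as np
import scipy.stats as stats

np.random.seed(0)
a = np.random.rand(30)        # uniform sample on [0, 1)
b = np.random.rand(30) + 0.5  # a clearly shifted sample

# Kruskal-Wallis H-test: non-parametric alternative to stats.f_oneway
H, p_kw = stats.kruskal(a, b)
print(p_kw)  # tiny p-value: the two samples differ

# Shapiro-Wilk test for normality of a sample
W, p_norm = stats.shapiro(a)
print(p_norm)
```

As with the t-test examples above, each test returns a (statistic, p-value) pair, so the calling pattern is always the same.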
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform ¶
FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well, in scipy.fftpack. The two are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory: any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be
https://wiki.php.net/rfc/hash_pbkdf2?do=#why_do_we_need_pbkdf2
Request for Comments: Adding hash_pbkdf2 Function
Version: 1.0
Date: 2012-06-13
Author: Anthony Ferrara ircmaxell@php.net
Status: Implemented
First Published at: http://wiki.php.net/rfc/hash_pbkdf2

This RFC proposes adding a hash_pbkdf2 function to the hash package.

Introduction
The purpose of this RFC is to add the PBKDF2 algorithm to the available hashing functions as a C implementation.

Why do we need PBKDF2?
PBKDF2 is defined in RFC 2898 as a method for implementing password-based cryptographic needs. These needs can include password storage, password derivation into a key (for encryption), or secure signatures. Additionally, it's NIST-recommended for password storage. Adding a core implementation of the PBKDF2 algorithm will enable PHP projects to utilize a fast implementation of the algorithm, putting them on more level ground with attackers. Since the C implementation is more efficient, more rounds can be computed for the same computational cost compared to a PHP-land implementation. This enables higher iteration counts to be used, providing more security with less impact on the overall performance of the application.

Projects and Software That Currently Use PBKDF2
WPA and WPA2, for key derivation from the password
OpenDocument encryption (OpenOffice.org)
WinZip AES encryption
1Password
LastPass
Apple iOS
Blackberry Backup Encryption
Django Python Framework

Recommended Parameters For PBKDF2
$algo
The way hash_pbkdf2 is written, any currently supported hash_algos() algorithm can be used as the base for the algorithm. This means that it's up to the developer to choose the appropriate algorithm to use when using the function. Here are a few of the popular algorithms and some recommendations around them. It should be noted that any cryptographic hash algorithm that's supported can be used successfully with PBKDF2 (CRC32 is *not* cryptographic, therefore it should not be used).
SHA512 - This is currently one of the strongest algorithms available in PHP. It makes a good primitive for *hash_pbkdf2*.
SHA256 - This is also plenty strong enough for use as the basis for PBKDF2.
A note on other popular algorithms:
SHA1 and MD5 - Both are actually strong enough for effective use in PBKDF2. The reason is that the known attack vectors against these algorithms require knowledge of the input string being hashed. Therefore, an iterated algorithm such as PBKDF2 will be immune to the known attack vectors, which means it's OK to use them for this task. With that said, the recommended approach is to use SHA512 or SHA256 instead, as the base algorithms are stronger. But it's not necessarily *bad* to use SHA1 or MD5.

$salt
The salt parameter should be a random string containing at least 64 bits of entropy. That means, when generated from a function like *mcrypt_create_iv*, at least 8 bytes long. But for salts that consist of only *a-zA-Z0-9* (or are base64-encoded), the minimum length should be at least 11 characters. It should be generated randomly for each password that's hashed, and stored alongside the generated key.

$iterations
The iterations parameter provides the ability to *tune* the algorithm for different servers and needs. For most web uses, a minimum value of *1000* is recommended. However, as hardware varies greatly, testing should be done to find an iteration count that yields a function runtime of between 0.1 and 0.5 seconds (depending again on the application). On higher-end servers, this can be as much as 20,000 to 50,000 iterations (also depending on the hash algo used). It's better to use the highest iteration count possible, as it will only increase the resistance to brute forcing.

$length
The length parameter indicates the length of the returned key. The default value for length is the length of the hash algo's output. However, this can be increased or decreased as necessary.
For example, if you're using PBKDF2 to generate a password-based key for use in an encryption routine such as RIJNDAEL-256, which expects a 256-bit key, you would want to pass the length parameter as 256/8 (to get the byte length), and set *$raw_output* to *true*.

$raw_output
This parameter behaves just like the other *hash_* functions. If set to *true*, the function will return a binary string (chr 0-255). If set to *false*, the function will hex-encode the result prior to returning it.

Example
Let's say you wanted to encrypt a file using a password. The password shouldn't be applied directly to the encryption function, but should be derived first.

encryption.php
<?php
$password = "foo";
$data = "testing this out";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$key = hash_pbkdf2("sha512", $password, $salt, 5000, 16, true);
// $key will be full-byte 0-255 data

$iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC), MCRYPT_DEV_URANDOM);

$ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $data, MCRYPT_MODE_CBC, $iv);
?>

Or for storing passwords (BCrypt is recommended, but there are use-cases for PBKDF2, such as when NIST compliance is mandated):

password.php
<?php
$password = "foo";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$hash = hash_pbkdf2("sha512", $password, $salt, 5000, 32);
// $hash will be a hex encoded string
?>

Proposal and Patch
The proposal is to add a hash_pbkdf2() function to the hash extension in core. The proposed function has the signature:

string hash_pbkdf2(string algo, string password, string salt, int iterations [, int length = 0, bool raw_output = false])

The patch is available as a pull request to trunk. This RFC intends to add this functionality to master (5.5) only.

Vote
Vote begins on 2012/07/02 and ends on 2012/07/09. This vote is to include the new function in master only (5.5).

rfc/hash_pbkdf2 - Real name / Yes? / No?
Yes: dragoonis, hradtke, ircmaxell, kriscraig, lynch, nikic, rasmus, shm, stas
Final result: 9 in favor, 0 against. This poll has been closed.

More about PBKDF2
RFC2898
WikiPedia
NIST Recommendation - PDF
A Reference Implementation In PHP

Changelog
0.1 - Initial Version
0.2 - Proposed
0.3 - Added Parameter Information
0.4 - Reworded to target master only, removing 5.4 section
1.0 - Moving to Accepted state

rfc/hash_pbkdf2.txt · Last modified: 2025/04/03 13:08 by 127.0.0.1
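For readers following along in Python rather than PHP, the same primitive described by this RFC is available in Python's standard library as hashlib.pbkdf2_hmac. The sketch below mirrors the RFC's password.php example (the salt size, iteration count, and key length are just the illustrative values used above, not recommendations of this RFC):

```python
import hashlib
import os

password = b'foo'
salt = os.urandom(16)  # random per-password salt, well over 64 bits of entropy

# Derive a 32-byte key with PBKDF2-HMAC-SHA512 and 5000 iterations,
# equivalent in spirit to hash_pbkdf2("sha512", $password, $salt, 5000, 32)
key = hashlib.pbkdf2_hmac('sha512', password, salt, 5000, dklen=32)

# Hex-encode, like raw_output = false in the PHP function
print(key.hex())
```

As in the PHP version, the salt must be stored alongside the derived key, since the same (password, salt, iterations) triple is needed to re-derive and verify it later.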
https://nbviewer.jupyter.org/github/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb#Contents
Back to the main index

Scientific Python: Transitioning from MATLAB to Python ¶
Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).
This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.
Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents ¶
A Quick Recap: Data types; Lists; Functions; Objects
Numpy: Why we need Numpy; The ndarray data type; shape and dtype; Indexing and slicing; Filling and manipulating arrays; A few useful functions; A small exercise; A bit harder: The Gabor; Boolean indexing; Vectorizing a simulation
PIL, the Python Imaging Library: Loading and showing images; Resizing, rotating, cropping and converting; Advanced; Saving; Exercise
Matplotlib: Quick plots; Saving to a file; Visualizing arrays; Multi-panel figures; Exercise: Function plots; Finer figure control; Exercise: Add regression lines
Scipy: Statistics; Fast Fourier Transform

A Quick Recap ¶
Data types ¶
Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]:
my_int = 5
print my_int, type(my_int)
my_float = 5.0
print my_float, type(my_float)
my_boolean = False
print my_boolean, type(my_boolean)
my_string = 'hello'
print my_string, type(my_string)

Lists ¶
One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed.

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
In [ ]:
print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)

Functions ¶
Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When a function has no output argument, it returns None.

In [ ]:
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x*b) + c

print regress(5)         # Only required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument

In [ ]:
# Function without a return argument
def divisible(a, b):
    if a % b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9, 3)
res = divisible(9, 2)
print res

In [ ]:
# Function with multiple return arguments
def add_diff(a, b):
    return a+b, a-b

# Assigned as a tuple
res = add_diff(5, 3)
print res

# Directly unpacked to two variables
a, d = add_diff(5, 3)
print a
print d

Objects ¶
Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

In [ ]:
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

In [ ]:
return_arg = my_list.append('another one')
print return_arg
print my_list

In [ ]:
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string

Do you remember why list functions are in-place, while string functions are not?

Numpy ¶
Why we need Numpy ¶
While lists are great, they are not very suitable for scientific computing. Consider this example:

In [ ]:
subj_length = [180.0, 165.0, 190.0, 172.0, 156.0]
subj_weight = [75.0, 60.0, 83.0, 85.0, 62.0]
subj_bmi = []
# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2

Clearly, this is clumsy. MATLAB users would expect something like this to work:

In [ ]:
subj_bmi = subj_weight / (subj_length/100)**2
mean_bmi = mean(subj_bmi)

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

The ndarray data type ¶
Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

In [ ]:
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators

Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started.
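To get a feel for the elementwise style Numpy enables, here is a tiny self-contained sketch (the temperature data are made up for illustration, nothing from the notebook itself):

```python
import numpy as np

celsius = np.array([0.0, 20.0, 37.0, 100.0])

# One expression operates on every element at once - no loop needed
# 0 C -> 32 F, 100 C -> 212 F
fahrenheit = celsius * 9.0 / 5.0 + 32.0
print(fahrenheit)

# Aggregation functions work on whole arrays too
print(np.mean(fahrenheit))
```

Compare this with the list-based version: you would need an explicit for-loop for the conversion, and a second pass (or a running sum) for the mean.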
shape and dtype ¶
The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

In [ ]:
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1, 2, 3], [4, 5, 6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])

In [ ]:
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension, representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension, and represents the rows here. We could also make 3-D (or even higher-dimensional) arrays:

In [ ]:
arr3d = np.array([ [[1,2,3],[4,5,6]] , [[7,8,9],[10,11,12]] ])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row, column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple.

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

In [ ]:
# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1, 2, 3], [4, 5, 6]])
print type(arr)

# So, let's do a computation
print arr / 2

# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype

In [ ]:
# And how do we fix this?
arr = arr.astype('float')  # Note: this is not an in-place function!
print arr.dtype
print arr / 2

In [ ]:
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1, 2, 3], [4, 5, 6]], dtype='float')
print arr.dtype
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
print arr.dtype

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

Indexing and slicing ¶
The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.

In [ ]:
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Indexing and slicing
print arr[0, 0]  # or: arr[0][0]
print arr[:-1, 0]

In [ ]:
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:, 0] * arr[:, 1]
print arr[0, :] * arr[1, :]
# Note that you could never slice across rows like this in a nested list!

In [ ]:
# This doesn't work
# print arr[1:,0] * arr[:,1]
# And here's why:
print arr[1:, 0].shape, arr[:, 1].shape

In [ ]:
# This however does work. You can always use scalars as the other operand.
print arr[:, 0] * arr[2, 2]
# Or, similarly:
print arr[:, 0] * 9.

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop.
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is the array equivalent of range(1, 16, 3), but with dtype float arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, with 3 equally spaced values # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np .
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimension arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenation allows you to join several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # If we want to create a 3-D matrix from them, # we first have to make them three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np .
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np .
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works in radians, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a Google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely.
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use the elementwise boolean operators for and, or, not, and xor. Note that the Python keywords and, or, not themselves do not work elementwise on arrays; use the operator forms instead. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) .
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurrence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - check the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import, however, is still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
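If you don't have a color image at hand, you can also generate one yourself from a Numpy array. A minimal sketch (assuming PIL/Pillow is installed; the filename 'test.jpg' is made up for this example, so adjust it to whatever filename the code below expects):

```python
import numpy as np
from PIL import Image

# Build a random RGB array; PIL expects uint8 values in the 0-255 range
arr = (np.random.rand(60, 80, 3) * 255).astype('uint8')

# Convert the array to a PIL image and save it to disk
im = Image.fromarray(arr, mode='RGB')
im.save('test.jpg')

# Reopening it shows that PIL reports size as (width, height)
reopened = Image.open('test.jpg')
print(reopened.size)   # (80, 60)
print(reopened.mode)   # RGB
```

Note the order reversal: the Numpy shape is (rows, columns, layers), while the PIL size is (width, height).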
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
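As a hint for the color-coding part, here is a sketch on a made-up 2x2 array of differences (not the real image data): a color image is just a (height, width, 3) array, and boolean masks let you write magnitudes into one color layer or the other.

```python
import numpy as np

# Made-up luminance differences (plain average minus PIL grayscale)
diff = np.array([[ 30., -10.],
                 [-50.,  20.]])

out = np.zeros((2, 2, 3))        # an all-black RGB image
neg = diff < 0                   # pixels where the average is less luminant

out[:, :, 0][neg] = -diff[neg]   # red layer: magnitude of negative differences
out[:, :, 1][~neg] = diff[~neg]  # green layer: the positive differences

print(out[:, :, 0])              # [[ 0. 10.] [50.  0.]]
```

The slice `out[:, :, 0]` is a view into `out`, so boolean assignment into it writes through to the full array.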
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. # Note that pad_inches takes a single number plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to span the full colormap range (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is keeping track behind the scenes of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=\sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=\sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# For instance, fetch the X axis
# XAxis objects have their own methods
xax = ax1.get_xaxis()
print type(xax)

# These methods allow you to fetch the even smaller building blocks
# For instance, tick lines are Line2D objects attached to the XAxis
xaxt = xax.get_majorticklines()
print len(xaxt)

# Of which you can fetch AND change the properties
# Here we change just one tick line into a cross
print xaxt[6].get_color()
xaxt[6].set_color('g')
xaxt[6].set_marker('x')
xaxt[6].set_markersize(10)

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# Another example: fetch the lines in the plot
# Change the color, change the marker, and mark only every 100th point for one specific line
ln = ax0.get_lines()
print ln
ln[0].set_color('g')
ln[0].set_marker('o')
ln[0].set_markerfacecolor('b')
ln[0].set_markevery(100)

# Finally, let's create a graphic element from scratch, one that is not available as a top-level pyplot function
# And then attach it to existing Axes
# NOTE: we need to import something before we can create the ellipse like this. What should we import?
ell = matplotlib.patches.Ellipse((np.pi, 0), 1., 1., color='r')
ax0.add_artist(ell)
ell.set_hatch('//')
ell.set_edgecolor('black')
ell.set_facecolor((0.9, 0.9, 0.9))

Exercise: Add regression lines ¶

Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes.
Useful functions:

- np.polyfit(x, y, 1) performs a linear regression, returning the slope and the intercept
- plt.gca() retrieves the current Axes object
- matplotlib.lines.Line2D(x, y) can create a new Line2D object from x and y coordinate vectors

In [ ]:
# EXERCISE 10: Add regression lines
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.lines as lines

# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

# Do the plotting
plt.figure(figsize=(5, 5))
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))

# Tweak the plot
plt.axis([0, 255, 0, 255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Fill in your code...

# Show the result
plt.show()

Scipy ¶

Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here.

We will pick two useful modules from SciPy: stats and fftpack. I will not give a lot of explanation here; I'll leave it up to you to navigate through the documentation, and find out how these functions work.

Statistics ¶

In [ ]:
import numpy as np
import scipy.stats as stats

# Generate random numbers between 0 and 1
data = np.random.rand(30)

# Do a t-test with H0 that the mean is 0.4
t, p = stats.ttest_1samp(data, 0.4)
print p

# Generate another sample of random numbers, with mean 0.4
data2 = np.random.rand(30) - 0.1

# Do a t-test that these have the same mean
t, p = stats.ttest_ind(data, data2)
print p

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions,
# given a constant n and an increasing true effect size
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2*eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions with increasing sd's,
# then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)

# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i]*np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i]*np.array([0, 1, 0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as the Kruskal-Wallis test, the Wilcoxon test, the Chi-square test, a test for normality, and so forth. A full listing of functions is found here.
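As a quick illustration of those extra tests, the sketch below runs a Shapiro-Wilk normality test, a Kruskal-Wallis test, and a Chi-square test on made-up data. The sample sizes and values are arbitrary, and unlike the Python 2 cells in this notebook it uses Python 3 print calls:

```python
import numpy as np
import scipy.stats as stats

rng = np.random.default_rng(0)  # modern NumPy generator; np.random.rand works too

# Shapiro-Wilk test for normality
sample = rng.normal(0, 1, size=50)
w, p_norm = stats.shapiro(sample)
print('Shapiro-Wilk p =', p_norm)

# Kruskal-Wallis: a non-parametric alternative to the one-way ANOVA above
c1 = rng.normal(0.0, 1, size=50)
c2 = rng.normal(0.3, 1, size=50)
h, p_kw = stats.kruskal(c1, c2)
print('Kruskal-Wallis p =', p_kw)

# Chi-square test of observed counts against (by default) equal expected counts
chi2, p_chi = stats.chisquare([18, 22, 30, 30])
print('Chi-square p =', p_chi)
```

Each of these functions returns a test statistic and a p-value, just like ttest_1samp() and f_oneway() above.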
For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform ¶

FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well, in scipy.fftpack. Both are very similar; you can use whichever package you like.

I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers, as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be symmetric
# (each negative-frequency component is the complex conjugate of its positive counterpart)
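The frequency ordering and the conjugate symmetry for real input can be checked directly. The sketch below (with Python 3 print calls) reuses the step function, together with scipy.fftpack's fftfreq() and ifft():

```python
import numpy as np
import scipy.fftpack as fft

# The same step function as above
data = np.zeros(200, dtype='float')
data[25:100] = 1
res = fft.fft(data)

# fftfreq() returns the frequency of each component, in cycles per sample:
# 0, 1/200, 2/200, ... up to the Nyquist frequency, then the negative frequencies
freqs = fft.fftfreq(len(data))
print(freqs[:3])  # first entries: 0.0, 0.005, 0.01

# For real input, component k and component n-k are complex conjugates
print(np.allclose(res[1], np.conj(res[-1])))  # True

# The inverse transform recovers the original data (up to rounding error)
back = fft.ifft(res)
print(np.allclose(back.real, data))  # True
```

Because of this symmetry, only the first half of the components carries independent information for real-valued input.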
https://llvm.org/doxygen/classNode.html#a2f2a558b72c7765d1807306c5686a5ce
LLVM: Node Class Reference LLVM  22.0.0git Public Types | Public Member Functions | Protected Attributes | Friends | List of all members Node Class Reference abstract #include " llvm/Demangle/ItaniumDemangle.h " Inherited by FloatLiteralImpl< float > , FloatLiteralImpl< double > , FloatLiteralImpl< long double > , AbiTagAttr , ArraySubscriptExpr , ArrayType , BinaryExpr , BinaryFPType , BitIntType , BoolExpr , BracedExpr , BracedRangeExpr , CallExpr , CastExpr , ClosureTypeName , ConditionalExpr , ConstrainedTypeTemplateParamDecl , ConversionExpr , ConversionOperatorType , CtorDtorName , CtorVtableSpecialName , DeleteExpr , DotSuffix , DtorName , DynamicExceptionSpec , ElaboratedTypeSpefType , EnableIfAttr , EnclosingExpr , EnumLiteral , ExpandedSpecialSubstitution , ExplicitObjectParameter , ExprRequirement , FloatLiteralImpl< Float > , FoldExpr , ForwardTemplateReference , FunctionEncoding , FunctionParam , FunctionType , GlobalQualifiedName , InitListExpr , IntegerLiteral , LambdaExpr , LiteralOperator , LocalName , MemberExpr , MemberLikeFriendName , ModuleEntity , ModuleName , NameType , NameWithTemplateArgs , NestedName , NestedRequirement , NewExpr , NodeArrayNode , NoexceptSpec , NonTypeTemplateParamDecl , ObjCProtoName , ParameterPack , ParameterPackExpansion , PixelVectorType , PointerToMemberConversionExpr , PointerToMemberType , PointerType , PostfixExpr , PostfixQualifiedType , PrefixExpr , QualType , QualifiedName , ReferenceType , RequiresExpr , SizeofParamPackExpr , SpecialName , StringLiteral , StructuredBindingName , SubobjectExpr , SyntheticTemplateParamName , TemplateArgs , TemplateArgumentPack , TemplateParamPackDecl , TemplateParamQualifiedArg , TemplateTemplateParamDecl , ThrowExpr , TransformedType , TypeRequirement , TypeTemplateParamDecl , UnnamedTypeName , VectorType , and VendorExtQualType . Public Types enum   Kind : uint8_t enum class   Cache : uint8_t { Yes , No , Unknown }   Three-way bool to track a cached value. More... 
enum class   Prec : uint8_t {    Primary , Postfix , Unary , Cast ,    PtrMem , Multiplicative , Additive , Shift ,    Spaceship , Relational , Equality , And ,    Xor , Ior , AndIf , OrIf ,    Conditional , Assign , Comma , Default }   Operator precedence for expression nodes. More... Public Member Functions   Node ( Kind K_, Prec Precedence_= Prec::Primary , Cache RHSComponentCache_= Cache::No , Cache ArrayCache_= Cache::No , Cache FunctionCache_= Cache::No )   Node ( Kind K_, Cache RHSComponentCache_, Cache ArrayCache_= Cache::No , Cache FunctionCache_= Cache::No ) template<typename Fn> void  visit (Fn F ) const   Visit the most-derived object corresponding to this object. bool   hasRHSComponent ( OutputBuffer &OB) const bool   hasArray ( OutputBuffer &OB) const bool   hasFunction ( OutputBuffer &OB) const Kind   getKind () const Prec   getPrecedence () const Cache   getRHSComponentCache () const Cache   getArrayCache () const Cache   getFunctionCache () const virtual bool   hasRHSComponentSlow ( OutputBuffer &) const virtual bool   hasArraySlow ( OutputBuffer &) const virtual bool   hasFunctionSlow ( OutputBuffer &) const virtual const Node *  getSyntaxNode ( OutputBuffer &) const void  printAsOperand ( OutputBuffer &OB, Prec P = Prec::Default , bool StrictlyWorse=false) const void  print ( OutputBuffer &OB) const virtual bool   printInitListAsType ( OutputBuffer &, const NodeArray &) const virtual std::string_view  getBaseName () const virtual  ~Node ()=default DEMANGLE_DUMP_METHOD void  dump () const Protected Attributes Cache   RHSComponentCache : 2   Tracks if this node has a component on its right side, in which case we need to call printRight. Cache   ArrayCache : 2   Track if this node is a (possibly qualified) array type. Cache   FunctionCache : 2   Track if this node is a (possibly qualified) function type. Friends class  OutputBuffer Detailed Description Definition at line 166 of file ItaniumDemangle.h . 
Member Enumeration Documentation ◆  Cache enum class Node::Cache : uint8_t strong Three-way bool to track a cached value. Unknown is possible if this node has an unexpanded parameter pack below it that may affect this cache. Enumerator Yes  No  Unknown  Definition at line 175 of file ItaniumDemangle.h . ◆  Kind enum Node::Kind : uint8_t Definition at line 168 of file ItaniumDemangle.h . ◆  Prec enum class Node::Prec : uint8_t strong Operator precedence for expression nodes. Used to determine required parens in expression emission. Enumerator Primary  Postfix  Unary  Cast  PtrMem  Multiplicative  Additive  Shift  Spaceship  Relational  Equality  And  Xor  Ior  AndIf  OrIf  Conditional  Assign  Comma  Default  Definition at line 179 of file ItaniumDemangle.h . Constructor & Destructor Documentation ◆  Node() [1/2] Node::Node ( Kind K_ , Prec Precedence_ = Prec::Primary , Cache RHSComponentCache_ = Cache::No , Cache ArrayCache_ = Cache::No , Cache FunctionCache_ = Cache::No  ) inline Definition at line 221 of file ItaniumDemangle.h . References ArrayCache , FunctionCache , No , Primary , and RHSComponentCache . 
Referenced by AbiTagAttr::AbiTagAttr() , ArraySubscriptExpr::ArraySubscriptExpr() , ArrayType::ArrayType() , BinaryExpr::BinaryExpr() , BinaryFPType::BinaryFPType() , BitIntType::BitIntType() , BoolExpr::BoolExpr() , BracedExpr::BracedExpr() , BracedRangeExpr::BracedRangeExpr() , CallExpr::CallExpr() , CastExpr::CastExpr() , ClosureTypeName::ClosureTypeName() , ConditionalExpr::ConditionalExpr() , ConstrainedTypeTemplateParamDecl::ConstrainedTypeTemplateParamDecl() , ConversionExpr::ConversionExpr() , ConversionOperatorType::ConversionOperatorType() , CtorDtorName::CtorDtorName() , CtorVtableSpecialName::CtorVtableSpecialName() , DeleteExpr::DeleteExpr() , DotSuffix::DotSuffix() , DtorName::DtorName() , DynamicExceptionSpec::DynamicExceptionSpec() , ElaboratedTypeSpefType::ElaboratedTypeSpefType() , EnableIfAttr::EnableIfAttr() , EnclosingExpr::EnclosingExpr() , EnumLiteral::EnumLiteral() , ExpandedSpecialSubstitution::ExpandedSpecialSubstitution() , ExplicitObjectParameter::ExplicitObjectParameter() , ExprRequirement::ExprRequirement() , FloatLiteralImpl< float >::FloatLiteralImpl() , FoldExpr::FoldExpr() , ForwardTemplateReference::ForwardTemplateReference() , FunctionEncoding::FunctionEncoding() , FunctionParam::FunctionParam() , FunctionType::FunctionType() , TemplateParamQualifiedArg::getArg() , FunctionEncoding::getAttrs() , VectorType::getBaseType() , ParameterPackExpansion::getChild() , QualType::getChild() , VectorType::getDimension() , FunctionEncoding::getName() , PointerType::getPointee() , FunctionEncoding::getRequires() , FunctionEncoding::getReturnType() , ForwardTemplateReference::getSyntaxNode() , getSyntaxNode() , ParameterPack::getSyntaxNode() , VendorExtQualType::getTA() , VendorExtQualType::getTy() , GlobalQualifiedName::GlobalQualifiedName() , InitListExpr::InitListExpr() , IntegerLiteral::IntegerLiteral() , LambdaExpr::LambdaExpr() , LiteralOperator::LiteralOperator() , LocalName::LocalName() , MemberExpr::MemberExpr() , 
MemberLikeFriendName::MemberLikeFriendName() , ModuleEntity::ModuleEntity() , ModuleName::ModuleName() , NameType::NameType() , NameWithTemplateArgs::NameWithTemplateArgs() , NestedName::NestedName() , NestedRequirement::NestedRequirement() , NewExpr::NewExpr() , Node() , NodeArrayNode::NodeArrayNode() , NoexceptSpec::NoexceptSpec() , NonTypeTemplateParamDecl::NonTypeTemplateParamDecl() , ObjCProtoName::ObjCProtoName() , ParameterPack::ParameterPack() , ParameterPackExpansion::ParameterPackExpansion() , PixelVectorType::PixelVectorType() , PointerToMemberConversionExpr::PointerToMemberConversionExpr() , PointerToMemberType::PointerToMemberType() , PointerType::PointerType() , PostfixExpr::PostfixExpr() , PostfixQualifiedType::PostfixQualifiedType() , PrefixExpr::PrefixExpr() , RequiresExpr::printLeft() , QualifiedName::QualifiedName() , QualType::QualType() , ReferenceType::ReferenceType() , RequiresExpr::RequiresExpr() , SizeofParamPackExpr::SizeofParamPackExpr() , SpecialName::SpecialName() , StringLiteral::StringLiteral() , StructuredBindingName::StructuredBindingName() , SubobjectExpr::SubobjectExpr() , SyntheticTemplateParamName::SyntheticTemplateParamName() , TemplateArgs::TemplateArgs() , TemplateArgumentPack::TemplateArgumentPack() , TemplateParamPackDecl::TemplateParamPackDecl() , TemplateParamQualifiedArg::TemplateParamQualifiedArg() , TemplateTemplateParamDecl::TemplateTemplateParamDecl() , ThrowExpr::ThrowExpr() , TransformedType::TransformedType() , TypeRequirement::TypeRequirement() , TypeTemplateParamDecl::TypeTemplateParamDecl() , UnnamedTypeName::UnnamedTypeName() , VectorType::VectorType() , and VendorExtQualType::VendorExtQualType() . ◆  Node() [2/2] Node::Node ( Kind K_ , Cache RHSComponentCache_ , Cache ArrayCache_ = Cache::No , Cache FunctionCache_ = Cache::No  ) inline Definition at line 226 of file ItaniumDemangle.h . References No , Node() , and Primary . 
◆  ~Node() virtual Node::~Node ( ) virtual default Member Function Documentation ◆  dump() DEMANGLE_DUMP_METHOD void Node::dump ( ) const References DEMANGLE_DUMP_METHOD . Referenced by llvm::DAGTypeLegalizer::run() , and llvm::RISCVDAGToDAGISel::Select() . ◆  getArrayCache() Cache Node::getArrayCache ( ) const inline Definition at line 262 of file ItaniumDemangle.h . References ArrayCache . Referenced by AbiTagAttr::AbiTagAttr() , and QualType::QualType() . ◆  getBaseName() virtual std::string_view Node::getBaseName ( ) const inline virtual Reimplemented in AbiTagAttr , ExpandedSpecialSubstitution , GlobalQualifiedName , MemberLikeFriendName , ModuleEntity , NameType , NameWithTemplateArgs , NestedName , QualifiedName , and SpecialSubstitution . Definition at line 299 of file ItaniumDemangle.h . ◆  getFunctionCache() Cache Node::getFunctionCache ( ) const inline Definition at line 263 of file ItaniumDemangle.h . References FunctionCache . Referenced by AbiTagAttr::AbiTagAttr() , and QualType::QualType() . ◆  getKind() Kind Node::getKind ( ) const inline Definition at line 258 of file ItaniumDemangle.h . Referenced by AbstractManglingParser< Derived, Alloc >::parseCtorDtorName() , AbstractManglingParser< Derived, Alloc >::parseNestedName() , AbstractManglingParser< Derived, Alloc >::parseTemplateArgs() , AbstractManglingParser< Derived, Alloc >::parseTemplateParam() , AbstractManglingParser< Derived, Alloc >::parseUnscopedName() , NodeArray::printAsString() , and llvm::msgpack::Document::writeToBlob() . ◆  getPrecedence() Prec Node::getPrecedence ( ) const inline Definition at line 260 of file ItaniumDemangle.h . 
Referenced by ArraySubscriptExpr::match() , BinaryExpr::match() , CallExpr::match() , CastExpr::match() , ConditionalExpr::match() , ConversionExpr::match() , DeleteExpr::match() , EnclosingExpr::match() , MemberExpr::match() , NewExpr::match() , PointerToMemberConversionExpr::match() , PostfixExpr::match() , PrefixExpr::match() , printAsOperand() , ArraySubscriptExpr::printLeft() , BinaryExpr::printLeft() , ConditionalExpr::printLeft() , MemberExpr::printLeft() , PostfixExpr::printLeft() , and PrefixExpr::printLeft() . ◆  getRHSComponentCache() Cache Node::getRHSComponentCache ( ) const inline Definition at line 261 of file ItaniumDemangle.h . References RHSComponentCache . Referenced by AbiTagAttr::AbiTagAttr() , PointerToMemberType::PointerToMemberType() , PointerType::PointerType() , QualType::QualType() , and ReferenceType::ReferenceType() . ◆  getSyntaxNode() virtual const Node * Node::getSyntaxNode ( OutputBuffer & ) const inline virtual Reimplemented in ForwardTemplateReference , and ParameterPack . Definition at line 271 of file ItaniumDemangle.h . References Node() , and OutputBuffer . ◆  hasArray() bool Node::hasArray ( OutputBuffer & OB ) const inline Definition at line 246 of file ItaniumDemangle.h . References ArrayCache , hasArraySlow() , OutputBuffer , Unknown , and Yes . ◆  hasArraySlow() virtual bool Node::hasArraySlow ( OutputBuffer & ) const inline virtual Reimplemented in ArrayType , ForwardTemplateReference , ParameterPack , and QualType . Definition at line 266 of file ItaniumDemangle.h . References OutputBuffer . Referenced by hasArray() . ◆  hasFunction() bool Node::hasFunction ( OutputBuffer & OB ) const inline Definition at line 252 of file ItaniumDemangle.h . References FunctionCache , hasFunctionSlow() , OutputBuffer , Unknown , and Yes . 
◆  hasFunctionSlow() virtual bool Node::hasFunctionSlow ( OutputBuffer & ) const inline virtual Reimplemented in ForwardTemplateReference , FunctionEncoding , FunctionType , ParameterPack , and QualType . Definition at line 267 of file ItaniumDemangle.h . References OutputBuffer . Referenced by hasFunction() . ◆  hasRHSComponent() bool Node::hasRHSComponent ( OutputBuffer & OB ) const inline Definition at line 240 of file ItaniumDemangle.h . References hasRHSComponentSlow() , OutputBuffer , RHSComponentCache , Unknown , and Yes . ◆  hasRHSComponentSlow() virtual bool Node::hasRHSComponentSlow ( OutputBuffer & ) const inline virtual Reimplemented in ArrayType , ForwardTemplateReference , FunctionEncoding , FunctionType , ParameterPack , PointerToMemberType , PointerType , QualType , and ReferenceType . Definition at line 265 of file ItaniumDemangle.h . References OutputBuffer . Referenced by hasRHSComponent() . ◆  print() void Node::print ( OutputBuffer & OB ) const inline Definition at line 286 of file ItaniumDemangle.h . References No , OutputBuffer , and RHSComponentCache . Referenced by llvm::ItaniumPartialDemangler::getFunctionDeclContextName() , llvm::DOTGraphTraits< AADepGraph * >::getNodeLabel() , llvm::DOTGraphTraits< const MachineFunction * >::getNodeLabel() , llvm::itaniumDemangle() , printAsOperand() , FoldExpr::printLeft() , and printNode() . ◆  printAsOperand() void Node::printAsOperand ( OutputBuffer & OB , Prec P = Prec::Default , bool StrictlyWorse = false  ) const inline Definition at line 275 of file ItaniumDemangle.h . References Default , getPrecedence() , OutputBuffer , P , and print() . Referenced by llvm::DOTGraphTraits< DOTFuncInfo * >::getBBName() , getSimpleNodeName() , llvm::operator<<() , llvm::VPIRMetadata::print() , and llvm::SimpleNodeLabelString() . ◆  printInitListAsType() virtual bool Node::printInitListAsType ( OutputBuffer & , const NodeArray &  ) const inline virtual Reimplemented in ArrayType . 
Definition at line 295 of file ItaniumDemangle.h . References OutputBuffer . ◆  visit() template<typename Fn> void Node::visit ( Fn F ) const Visit the most-derived object corresponding to this object. Visit the node. Calls F(P) , where P is the node cast to the appropriate derived class. Definition at line 2639 of file ItaniumDemangle.h . References DEMANGLE_ASSERT , and F . Friends And Related Symbol Documentation ◆  OutputBuffer friend class OutputBuffer friend Definition at line 309 of file ItaniumDemangle.h . References OutputBuffer . Referenced by ForwardTemplateReference::getSyntaxNode() , getSyntaxNode() , ParameterPack::getSyntaxNode() , hasArray() , ArrayType::hasArraySlow() , ForwardTemplateReference::hasArraySlow() , hasArraySlow() , ParameterPack::hasArraySlow() , QualType::hasArraySlow() , hasFunction() , ForwardTemplateReference::hasFunctionSlow() , FunctionEncoding::hasFunctionSlow() , FunctionType::hasFunctionSlow() , hasFunctionSlow() , ParameterPack::hasFunctionSlow() , QualType::hasFunctionSlow() , hasRHSComponent() , ArrayType::hasRHSComponentSlow() , ForwardTemplateReference::hasRHSComponentSlow() , FunctionEncoding::hasRHSComponentSlow() , FunctionType::hasRHSComponentSlow() , hasRHSComponentSlow() , ParameterPack::hasRHSComponentSlow() , PointerToMemberType::hasRHSComponentSlow() , PointerType::hasRHSComponentSlow() , QualType::hasRHSComponentSlow() , ReferenceType::hasRHSComponentSlow() , OutputBuffer , print() , printAsOperand() , ClosureTypeName::printDeclarator() , ArrayType::printInitListAsType() , printInitListAsType() , AbiTagAttr::printLeft() , ArraySubscriptExpr::printLeft() , ArrayType::printLeft() , BinaryExpr::printLeft() , BinaryFPType::printLeft() , BitIntType::printLeft() , BoolExpr::printLeft() , BracedExpr::printLeft() , BracedRangeExpr::printLeft() , CallExpr::printLeft() , CastExpr::printLeft() , ClosureTypeName::printLeft() , ConditionalExpr::printLeft() , ConstrainedTypeTemplateParamDecl::printLeft() , 
ConversionExpr::printLeft() , ConversionOperatorType::printLeft() , CtorDtorName::printLeft() , CtorVtableSpecialName::printLeft() , DeleteExpr::printLeft() , DotSuffix::printLeft() , DtorName::printLeft() , DynamicExceptionSpec::printLeft() , ElaboratedTypeSpefType::printLeft() , EnableIfAttr::printLeft() , EnclosingExpr::printLeft() , EnumLiteral::printLeft() , ExplicitObjectParameter::printLeft() , ExprRequirement::printLeft() , FloatLiteralImpl< float >::printLeft() , FoldExpr::printLeft() , ForwardTemplateReference::printLeft() , FunctionEncoding::printLeft() , FunctionParam::printLeft() , FunctionType::printLeft() , GlobalQualifiedName::printLeft() , InitListExpr::printLeft() , IntegerLiteral::printLeft() , LambdaExpr::printLeft() , LiteralOperator::printLeft() , LocalName::printLeft() , MemberExpr::printLeft() , MemberLikeFriendName::printLeft() , ModuleEntity::printLeft() , ModuleName::printLeft() , NameType::printLeft() , NameWithTemplateArgs::printLeft() , NestedName::printLeft() , NestedRequirement::printLeft() , NewExpr::printLeft() , NodeArrayNode::printLeft() , NoexceptSpec::printLeft() , NonTypeTemplateParamDecl::printLeft() , ObjCProtoName::printLeft() , ParameterPack::printLeft() , ParameterPackExpansion::printLeft() , PixelVectorType::printLeft() , PointerToMemberConversionExpr::printLeft() , PointerToMemberType::printLeft() , PointerType::printLeft() , PostfixExpr::printLeft() , PostfixQualifiedType::printLeft() , PrefixExpr::printLeft() , QualifiedName::printLeft() , QualType::printLeft() , ReferenceType::printLeft() , RequiresExpr::printLeft() , SizeofParamPackExpr::printLeft() , SpecialName::printLeft() , SpecialSubstitution::printLeft() , StringLiteral::printLeft() , StructuredBindingName::printLeft() , SubobjectExpr::printLeft() , SyntheticTemplateParamName::printLeft() , TemplateArgs::printLeft() , TemplateArgumentPack::printLeft() , TemplateParamPackDecl::printLeft() , TemplateParamQualifiedArg::printLeft() , 
TemplateTemplateParamDecl::printLeft() , ThrowExpr::printLeft() , TransformedType::printLeft() , TypeRequirement::printLeft() , TypeTemplateParamDecl::printLeft() , UnnamedTypeName::printLeft() , VectorType::printLeft() , VendorExtQualType::printLeft() , QualType::printQuals() , ArrayType::printRight() , ConstrainedTypeTemplateParamDecl::printRight() , ForwardTemplateReference::printRight() , FunctionEncoding::printRight() , FunctionType::printRight() , NonTypeTemplateParamDecl::printRight() , ParameterPack::printRight() , PointerToMemberType::printRight() , PointerType::printRight() , QualType::printRight() , ReferenceType::printRight() , TemplateParamPackDecl::printRight() , TemplateTemplateParamDecl::printRight() , and TypeTemplateParamDecl::printRight() . Member Data Documentation ◆  ArrayCache Cache Node::ArrayCache protected Track if this node is a (possibly qualified) array type. This can affect how we format the output string. Definition at line 214 of file ItaniumDemangle.h . Referenced by getArrayCache() , hasArray() , Node() , and ParameterPack::ParameterPack() . ◆  FunctionCache Cache Node::FunctionCache protected Track if this node is a (possibly qualified) function type. This can affect how we format the output string. Definition at line 218 of file ItaniumDemangle.h . Referenced by getFunctionCache() , hasFunction() , Node() , and ParameterPack::ParameterPack() . ◆  RHSComponentCache Cache Node::RHSComponentCache protected Tracks if this node has a component on its right side, in which case we need to call printRight. Definition at line 210 of file ItaniumDemangle.h . Referenced by getRHSComponentCache() , hasRHSComponent() , Node() , ParameterPack::ParameterPack() , and print() . The documentation for this class was generated from the following file: include/llvm/Demangle/ ItaniumDemangle.h Generated on for LLVM by  1.14.0
https://nbviewer.jupyter.org/github/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb#PIL:-the-Python-Imaging-Library
Scientific Python: Transitioning from MATLAB to Python ¶

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents ¶

- A Quick Recap
  - Data types
  - Lists
  - Functions
  - Objects
- Numpy
  - Why we need Numpy
  - The ndarray data type
  - shape and dtype
  - Indexing and slicing
  - Filling and manipulating arrays
  - A few useful functions
  - A small exercise
  - A bit harder: The Gabor
  - Boolean indexing
  - Vectorizing a simulation
- PIL: the Python Imaging Library
  - Loading and showing images
  - Resizing, rotating, cropping and converting
  - Advanced
  - Saving
  - Exercise
- Matplotlib
  - Quick plots
  - Saving to a file
  - Visualizing arrays
  - Multi-panel figures
  - Exercise: Function plots
  - Finer figure control
  - Exercise: Add regression lines
- Scipy
  - Statistics
  - Fast Fourier Transform

A Quick Recap ¶

Data types ¶

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]:
my_int = 5
print my_int, type(my_int)

my_float = 5.0
print my_float, type(my_float)

my_boolean = False
print my_boolean, type(my_boolean)

my_string = 'hello'
print my_string, type(my_string)

Lists ¶

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed.

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
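To make the payoff concrete, here is a minimal sketch of the vectorized BMI computation described above (one possible solution to Exercise 2, so skip it if you first want to try yourself; it uses Python 3's print() call, unlike the notebook's Python 2 cells):

```python
import numpy as np

# Elementwise arithmetic on arrays: no loop needed
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

subj_bmi = subj_weight / (subj_length / 100) ** 2  # one BMI per subject
mean_bmi = np.mean(subj_bmi)                       # average across subjects

print(subj_bmi)
print(mean_bmi)
```

Note how the MATLAB-style expression now works as-is, because /, ** and np.mean() operate elementwise on ndarrays.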
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. In other words, the shape tuple lists dimension sizes from the outermost dimension (rows) to the innermost one (layers). The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np .
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays mustn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is equivalent to np.array(range(1.,16.,3)) arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, in 3 steps # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np . 
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np .
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily, numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works in radians, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a Google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely.
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators : and, or, xor, not . In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) . 
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. 
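One building block worth knowing before you start (an aside added here, not part of the original exercise text): called on a boolean array, np.argmax() returns the index of the first True value, because True compares greater than False and ties go to the earliest position. A sketch, using Python 3's print():

```python
import numpy as np

# On booleans, argmax returns the index of the FIRST True
rolls = np.array([4, 2, 1, 5, 1, 1])
is_one = (rolls == 1)
print(np.argmax(is_one))        # index of the first 1 rolled -> 2

# With axis=1, this finds the first True in every row of a 2D array,
# i.e. one first-occurrence index per simulated sequence
grid = np.array([[False, True, False],
                 [False, False, True]])
print(np.argmax(grid, axis=1))  # -> [1 2]
```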
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax; consult the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is however still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. # (note: bbox_inches takes the string 'tight', and pad_inches a single number) plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to 0-255 first (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects.

In [ ]:
# 'Figure' objects are returned by the plt.figure() command
fig = plt.figure(figsize=(7, 5))
print type(fig)

# Axes objects are the /actual/ plots within the figure
# Create them using the add_axes() method of the figure object
# The input coordinates are relative (left, bottom, width, height)
ax0 = fig.add_axes([0.1, 0.1, 0.4, 0.7], xlabel='The X Axis')
ax1 = fig.add_axes([0.2, 0.2, 0.5, 0.2], axisbg='gray')
ax2 = fig.add_axes([0.4, 0.5, 0.4, 0.4], projection='polar')
print type(ax0), type(ax1), type(ax2)

# This allows you to execute functions like savefig() directly on the figure object
# This resolves Matplotlib's confusion about what the current figure is when using plt.savefig()
fig.savefig('fig.png')

# It also allows you to add text to the figure as a whole, across the different axes objects
fig.text(0.5, 0.5, 'splatter', color='r')

# The overall figure title can be set separately from the individual plot titles
fig.suptitle('What a mess', size=18)

# show() is actually a figure method as well
# It just gets 'forwarded' to what is thought to be the current figure if you use plt.show()
fig.show()

For a full list of the Figure methods and options, go here.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15, 10))

# As we saw, many of the axes properties can already be set at their creation
ax0 = fig.add_axes([0., 0., 0.25, 0.25], xticks=(0.1, 0.5, 0.9),
                   xticklabels=('one', 'thro', 'twee'))
ax1 = fig.add_axes([0.3, 0., 0.25, 0.25], xscale='log', ylim=(0, 0.5))
ax2 = fig.add_axes([0.6, 0., 0.25, 0.25])

# Once you have the Axes object though, there are further methods available
# These include many of the top-level pyplot functions
# If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this
# to an Axes.plot() call on the current Axes object
R.sort()
G.sort()
B.sort()
ax2.plot(R, color='r', linestyle='-', marker='None')  # plot directly to an Axes object of choice
plt.plot(G, color='g', linestyle='-', marker='None')  # plt.plot() just plots to the last created Axes object
ax2.plot(B, color='b', linestyle='-', marker='None')

# Other top-level pyplot functions are simply renamed to 'set_' functions here
ax1.set_xticks([])
plt.yticks([])

# Show the figure
fig.show()

The full methods and options of Axes can be found here.

Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying, though. Luckily, the subplot() method can handle much of this automatically.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15, 5))

# Specify the LAYOUT of the subplots (rows, columns)
# as well as the CURRENT Axes you want to work on
ax0 = fig.add_subplot(231)

# Equivalent top-level call on the current figure
# It is also possible to create several subplots at once using plt.subplots()
ax1 = plt.subplot(232)

# Optional arguments are similar to those of add_axes()
ax2 = fig.add_subplot(233, title='three')

# We can use these Axes objects as before
ax3 = fig.add_subplot(234)
ax3.plot(R, 'r-')
ax3.set_xticks([])
ax3.set_yticks([])

# We skipped the fifth subplot, and create only the 6th
ax5 = fig.add_subplot(236, projection='polar')

# We can adjust the spacings afterwards
fig.subplots_adjust(hspace=0.4)

# And even make room in the figure for a plot that doesn't fit the grid
fig.subplots_adjust(right=0.5)
ax6 = fig.add_axes([0.55, 0.1, 0.3, 0.8])

# Show the figure
fig.show()

Exercise: Function plots ¶

Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good.

In [ ]:
# EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other
# Let x range from 0 to 2*pi

Finer figure control ¶

If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be added manually through top-level or Axes functions:

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# Add horizontal lines
ax0.axhline(0, color='g')
ax0.axhline(0.5, color='gray', linestyle=':')
ax0.axhline(-0.5, color='gray', linestyle=':')
ax1.axhline(0, color='g')
ax1.axhline(0.5, color='gray', linestyle=':')
ax1.axhline(-0.5, color='gray', linestyle=':')

# Add text to the plots
ax0.text(0.1, -0.9, '$y = sin(x)$', size=16)  # math mode for proper formula formatting!
ax1.text(0.1, -0.9, '$y = sin(x^2)$', size=16)

# Annotate certain points with a value
for x_an in np.linspace(0, 2 * np.pi, 9):
    ax0.annotate(str(round(np.sin(x_an), 2)), (x_an, np.sin(x_an)))

# Add an arrow (x, y, xlength, ylength)
ax0.arrow(np.pi - 0.5, -0.5, 0.5, 0.5, head_width=0.1, length_includes_head=True)

Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# For instance, fetch the X axis
# XAxis objects have their own methods
xax = ax1.get_xaxis()
print type(xax)

# These methods allow you to fetch the even smaller building blocks
# For instance, tick lines are Line2D objects attached to the XAxis
xaxt = xax.get_majorticklines()
print len(xaxt)

# You can fetch AND change their properties
# Here we change just one tick line into a cross
print xaxt[6].get_color()
xaxt[6].set_color('g')
xaxt[6].set_marker('x')
xaxt[6].set_markersize(10)

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# Another example: fetch the lines in the plot
# Change the color, change the marker, and mark only every 100th point for one specific line
ln = ax0.get_lines()
print ln
ln[0].set_color('g')
ln[0].set_marker('o')
ln[0].set_markerfacecolor('b')
ln[0].set_markevery(100)

# Finally, let's create a graphic element from scratch that is not available as a top-level pyplot function
# And then attach it to existing Axes
# NOTE: we need to import something before we can create the ellipse like this. What should we import?
ell = matplotlib.patches.Ellipse((np.pi, 0), 1., 1., color='r')
ax0.add_artist(ell)
ell.set_hatch('//')
ell.set_edgecolor('black')
ell.set_facecolor((0.9, 0.9, 0.9))

Exercise: Add regression lines ¶

Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line; manually create a Line2D object instead, and attach it to the Axes.
Useful functions:
- np.polyfit(x, y, 1) performs a linear regression, returning slope and constant
- plt.gca() retrieves the current Axes object
- matplotlib.lines.Line2D(x, y) can create a new Line2D object from x and y coordinate vectors

In [ ]:
# EXERCISE 10: Add regression lines
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.lines as lines

# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

# Do the plotting
plt.figure(figsize=(5, 5))
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))

# Tweak the plot
plt.axis([0, 255, 0, 255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Fill in your code...

# Show the result
plt.show()

Scipy ¶

Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here.

We will pick two useful modules from SciPy: stats and fftpack. I will not give a lot of explanation here; I'll leave it up to you to navigate through the documentation and find out how these functions work.

Statistics ¶

In [ ]:
import numpy as np
import scipy.stats as stats

# Generate random numbers between 0 and 1
data = np.random.rand(30)

# Do a t-test with an H0 mean of 0.4
t, p = stats.ttest_1samp(data, 0.4)
print p

# Generate another sample of random numbers, with mean 0.4
data2 = np.random.rand(30) - 0.1

# Do a t-test that these have the same mean
t, p = stats.ttest_ind(data, data2)
print p

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions,
# given a constant n and an increasing true effect size
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2 * eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions with increasing sd's,
# then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)

# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i] * np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i] * np.array([0, 1, 0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as a Kruskal-Wallis test, a Wilcoxon test, a Chi-square test, a test for normality, and so forth. A full listing of functions is found here.
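As a quick sketch of a few of those additional tests, here is a minimal example. The function names (stats.kruskal, stats.wilcoxon, stats.chisquare, stats.normaltest) exist in scipy.stats; the data are synthetic and purely for illustration.

```python
import numpy as np
import scipy.stats as stats

# Synthetic data: three conditions with slightly shifted means
rng = np.random.RandomState(0)
a = rng.normal(0.0, 1, size=50)
b = rng.normal(0.3, 1, size=50)
c = rng.normal(0.6, 1, size=50)

# Kruskal-Wallis: a non-parametric alternative to a one-way ANOVA
H, p_kw = stats.kruskal(a, b, c)

# Wilcoxon signed-rank test on paired samples
w, p_w = stats.wilcoxon(a, b)

# Chi-square test on observed frequencies (default H0: uniform)
chi2, p_chi = stats.chisquare([18, 22, 30, 30])

# Normality test (D'Agostino-Pearson) on one sample
k2, p_norm = stats.normaltest(a)

print(p_kw, p_w, p_chi, p_norm)
```

Each of these returns a test statistic and a p-value, following the same pattern as ttest_1samp and f_oneway above.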
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform ¶

FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well, in scipy.fftpack. Both are very similar; you can use whichever package you like.

I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be symmetric, with each
# negative-frequency component the complex conjugate of its positive-frequency counterpart
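Continuing with the step-function data, the frequency ordering and the conjugate symmetry for real input can be checked directly. This is a sketch of my own, not part of the original notebook; it uses scipy.fftpack.fftfreq to recover the implied frequencies.

```python
import numpy as np
import scipy.fftpack as fft

# The step function again
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Complex spectrum: one component per sample
res = fft.fft(data)

# The matching frequencies: 0 up to the Nyquist frequency (0.5),
# then the reversed negative counterparts
freqs = fft.fftfreq(len(data))

# Amplitude of each component; the DC component (freq 0) equals the sum of the data
amp = np.abs(res)

# For real input, component at -f is the complex conjugate of the one at +f
print(np.allclose(res[1:], np.conj(res[1:][::-1])))
```

This symmetry is why, for real signals, only the first half of the spectrum (up to Nyquist) is usually plotted.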
PHP: rfc:hash_pbkdf2

Request for Comments: Adding hash_pbkdf2 Function
Version: 1.0
Date: 2012-06-13
Author: Anthony Ferrara ircmaxell@php.net
Status: Implemented
First Published at: http://wiki.php.net/rfc/hash_pbkdf2

This RFC proposes adding a hash_pbkdf2 function to the hash package.

Introduction

The purpose of this RFC is to add the PBKDF2 algorithm to the available hashing functions as a C implementation.

Why do we need PBKDF2?

PBKDF2 is defined in RFC 2898 as a method for implementing password-based cryptographic needs. These needs can include password storage, password derivation into a key (for encryption), or secure signatures. Additionally, it is NIST-recommended for password storage. Adding a core implementation of the PBKDF2 algorithm will enable PHP projects to utilize a fast implementation of the algorithm, putting them on more level ground against attackers. Since the C implementation is more efficient, more rounds can be computed for the same computational cost compared to a PHP-land implementation. This enables higher iteration counts to be used, providing more security with less impact on the overall performance of the application.

Projects and Software That Currently Use PBKDF2

- WPA and WPA2, for key derivation from a password
- OpenDocument encryption (OpenOffice.org)
- WinZip AES encryption
- 1Password
- LastPass
- Apple iOS
- Blackberry Backup Encryption
- Django Python Framework

Recommended Parameters For PBKDF2

$algo

The way hash_pbkdf2 is written, any currently supported hash_algos() algorithm can be used as the base for the algorithm. This means it is up to the developer to choose the appropriate algorithm when using the function. Here are a few of the popular algorithms and some recommendations around them. It should be noted that any cryptographic hash algorithm that's supported can be used successfully with PBKDF2 (CRC32 is *not* cryptographic, therefore it should not be used).
- SHA512 - This is currently one of the strongest algorithms available in PHP. It makes a good primitive for *hash_pbkdf2*.
- SHA256 - This is also plenty strong enough for use as the basis for PBKDF2.

A note on other popular algorithms: SHA1 and MD5 are both actually strong enough for effective use in PBKDF2. The reason is that the known attack vectors against these algorithms require knowledge of the input string being hashed; an iterated algorithm such as PBKDF2 is therefore immune to the known attacks, which means it's OK to use them for this task. With that said, the recommended approach is to use SHA512 or SHA256 instead, as the base algorithms are stronger. But it's not necessarily *bad* to use SHA1 or MD5.

$salt

The salt parameter should be a random string containing at least 64 bits of entropy. That means that when generated from a function like *mcrypt_create_iv*, it should be at least 8 bytes long. For salts that consist of only *a-zA-Z0-9* (or are base64 encoded), the minimum length should be at least 11 characters. It should be generated randomly for each password that's hashed, and stored alongside the generated key.

$iterations

The iterations parameter provides the ability to *tune* the algorithm for different servers and needs. For most web uses, a minimum value of *1000* is recommended. However, as hardware varies greatly, testing should be done to find an iteration count that yields a function runtime of between 0.1 and 0.5 seconds (depending again on the application). On higher-end servers, this can be as much as 20,000 to 50,000 iterations (also depending on the hash algo used). It's better to use the highest iteration count possible, as it will only increase the resistance to brute forcing.

$length

The length parameter indicates the length of the returned key. The default value for length is the length of the hash algo's output. However, this can be increased or decreased as necessary.
For example, if you're using PBKDF2 to generate a password-based key for use in an encryption routine such as RIJNDAEL 256, which expects a 256-bit key, you would want to pass the length parameter as 256/8 (to get the byte length), and set *$raw_output* to *true*.

$raw_output

This parameter behaves just like the other *hash_* functions. If set to *true*, the function will return a binary string (chr 0-255). If set to *false*, the function will hex encode the result prior to returning it.

Example

Let's say you wanted to encrypt a file using a password. The password shouldn't be applied directly to the encryption function, but should be derived first.

encryption.php
<?php
$password = "foo";
$data = "testing this out";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$key = hash_pbkdf2("sha512", $password, $salt, 5000, 16, true);
// $key will be full-byte 0-255 data

$iv = mcrypt_create_iv(mcrypt_get_iv_size(MCRYPT_RIJNDAEL_128, MCRYPT_MODE_CBC), MCRYPT_DEV_URANDOM);

$ciphertext = mcrypt_encrypt(MCRYPT_RIJNDAEL_128, $key, $data, MCRYPT_MODE_CBC, $iv);
?>

Or for storing passwords (BCrypt is recommended, but there are use cases for PBKDF2, such as when NIST compliance is mandated):

password.php
<?php
$password = "foo";
$salt = mcrypt_create_iv(16, MCRYPT_DEV_URANDOM);
$hash = hash_pbkdf2("sha512", $password, $salt, 5000, 32);
// $hash will be a hex encoded string
?>

Proposal and Patch

The proposal is to add a hash_pbkdf2() function to the hash extension in core. The proposed function has the signature:

string hash_pbkdf2(string algo, string password, string salt, int iterations [, int length = 0, bool raw_output = false])

The patch is available as a pull request to trunk. This RFC intends to add this functionality to master (5.5) only.

Vote

Vote begins on 2012/07/02 and ends on 2012/07/09. This vote is to include the new function in master only (5.5).
All nine voters (dragoonis, hradtke, ircmaxell, kriscraig, lynch, nikic, rasmus, shm, stas) voted Yes. Final result: 9 in favor, 0 against. This poll has been closed.

More about PBKDF2

- RFC 2898
- WikiPedia
- NIST Recommendation - PDF
- A Reference Implementation In PHP

Changelog

- 0.1 - Initial Version
- 0.2 - Proposed
- 0.3 - Added Parameter Information
- 0.4 - Reworded to target master only, removing 5.4 section
- 1.0 - Moving to Accepted state

rfc/hash_pbkdf2.txt · Last modified: 2025/04/03 13:08 by 127.0.0.1
gitstats_09_17 These are the stats per-author on the php-src tree. They were generated by the git-quick-stats package, slightly modified to apply the 25 commit / 500 additions/changes filter. Sorted by eligibility, insertions and commits (in that order, descending) ELIGIBLE 1) Anatol Belski insertions: 2161046 deletions: 861268 files: 12610 commits: 3858 lines changed: 3022314 2) Derick Rethans insertions: 1439572 deletions: 1122367 files: 4541 commits: 1926 lines changed: 2561939 3) Scott MacVicar insertions: 914165 deletions: 768184 files: 2008 commits: 605 lines changed: 1682349 4) Raghubansh Kumar insertions: 891070 deletions: 44345 files: 3294 commits: 402 lines changed: 935415 5) Dmitry Stogov insertions: 661905 deletions: 400211 files: 20988 commits: 5598 lines changed: 1062116 6) Ilia Alshanetsky insertions: 645148 deletions: 483272 files: 9191 commits: 4506 lines changed: 1128420 7) Stanislav Malyshev insertions: 484000 deletions: 307292 files: 7006 commits: 2284 lines changed: 791292 8) Remi Collet insertions: 380100 deletions: 2043043 files: 1233 commits: 627 lines changed: 2423143 9) <changelog@php.net> insertions: 370170 deletions: 34973 files: 5270 commits: 3802 lines changed: 405143 10) Ant Phillips insertions: 275483 deletions: 18586 files: 3196 commits: 101 lines changed: 294069 11) Wez Furlong insertions: 271886 deletions: 75956 files: 3927 commits: 1535 lines changed: 347842 12) andy wharmby insertions: 261787 deletions: 15280 files: 2682 commits: 326 lines changed: 277067 13) Antony Dovgal insertions: 233873 deletions: 92551 files: 9068 commits: 4103 lines changed: 326424 14) Marcus Boerger insertions: 229926 deletions: 104171 files: 10725 commits: 4524 lines changed: 334097 15) Xinchen Hui insertions: 221545 deletions: 166952 files: 14763 commits: 2661 lines changed: 388497 16) Nuno Lopes insertions: 214358 deletions: 246536 files: 2292 commits: 617 lines changed: 460894 17) Rui Hirokawa insertions: 210695 deletions: 94462 files: 1378 commits: 
369 lines changed: 305157 18) Andrei Zmievski insertions: 197896 deletions: 125393 files: 2805 commits: 1337 lines changed: 323289 19) Pierre Joye insertions: 196516 deletions: 171610 files: 5826 commits: 3330 lines changed: 368126 20) foobar <sniper@php.net> insertions: 170207 deletions: 429355 files: 7739 commits: 2754 lines changed: 599562 21) Zeev Suraski insertions: 167281 deletions: 105421 files: 7641 commits: 2179 lines changed: 272702 22) Felipe Pena insertions: 164503 deletions: 131563 files: 14450 commits: 3063 lines changed: 296066 23) Andrey Hristov insertions: 160615 deletions: 63352 files: 5057 commits: 1746 lines changed: 223967 24) Sterling Hughes insertions: 157453 deletions: 71809 files: 1289 commits: 668 lines changed: 229262 25) Zoe Slattery insertions: 148458 deletions: 25503 files: 2391 commits: 321 lines changed: 173961 26) Nikita Popov insertions: 146277 deletions: 117970 files: 6757 commits: 1621 lines changed: 264247 27) Stig Bakken insertions: 144660 deletions: 104701 files: 2235 commits: 838 lines changed: 249361 28) Josie Messa insertions: 135238 deletions: 761 files: 1055 commits: 97 lines changed: 135999 29) Andi Gutmans insertions: 127395 deletions: 53779 files: 4662 commits: 1681 lines changed: 181174 30) Lior Kaplan insertions: 126711 deletions: 94888 files: 1168 commits: 209 lines changed: 221599 31) Christoph M. 
insertions: 120556 deletions: 61692 files: 1230 commits: 471 lines changed: 182248 32) Ferenc Kovacs insertions: 117706 deletions: 112724 files: 686 commits: 381 lines changed: 230430 33) Greg Beaver insertions: 112710 deletions: 66487 files: 3963 commits: 1348 lines changed: 179197 34) Jani Taskinen insertions: 110352 deletions: 127842 files: 3765 commits: 1535 lines changed: 238194 35) Moriyoshi Koizumi insertions: 108070 deletions: 46444 files: 2379 commits: 1040 lines changed: 154514 36) Gustavo André insertions: 104425 deletions: 45917 files: 1838 commits: 444 lines changed: 150342 37) Christopher Jones insertions: 102837 deletions: 32538 files: 3272 commits: 783 lines changed: 135375 38) Ulf Wendel insertions: 96771 deletions: 21241 files: 4367 commits: 601 lines changed: 118012 39) Bob Weinand insertions: 92904 deletions: 54012 files: 2295 commits: 759 lines changed: 146916 40) Robin Fernandes insertions: 80860 deletions: 594 files: 989 commits: 38 lines changed: 81454 41) Adam Harvey insertions: 79227 deletions: 48102 files: 765 commits: 260 lines changed: 127329 42) Sascha Schumann insertions: 74259 deletions: 75690 files: 5646 commits: 2171 lines changed: 149949 43) Felix De insertions: 68238 deletions: 8 files: 94 commits: 30 lines changed: 68246 44) Robert Nicholson insertions: 60814 deletions: 136 files: 537 commits: 57 lines changed: 60950 45) Rasmus Lerdorf insertions: 60672 deletions: 16099 files: 2623 commits: 1250 lines changed: 76771 46) Rob Richards insertions: 52909 deletions: 9082 files: 1696 commits: 754 lines changed: 61991 47) Anatoliy Belsky insertions: 51737 deletions: 30819 files: 510 commits: 186 lines changed: 82556 48) Steph Fox insertions: 49944 deletions: 304199 files: 4377 commits: 222 lines changed: 354143 49) Johannes Schlüter insertions: 47885 deletions: 125066 files: 2718 commits: 732 lines changed: 172951 50) David Soria insertions: 44386 deletions: 2611 files: 517 commits: 238 lines changed: 46997 51) krakjoe 
<joe.watkins@live.co.uk> insertions: 44163 deletions: 21379 files: 1569 commits: 706 lines changed: 65542 52) Sanjay Mantoor insertions: 44039 deletions: 34 files: 358 commits: 35 lines changed: 44073 53) Steve Seear insertions: 36668 deletions: 751 files: 375 commits: 48 lines changed: 37419 54) Hartmut Holzgraefe insertions: 36214 deletions: 22360 files: 1224 commits: 583 lines changed: 58574 55) Thies C. insertions: 34769 deletions: 25924 files: 1030 commits: 629 lines changed: 60693 56) Sara Golemon insertions: 34445 deletions: 12891 files: 1375 commits: 647 lines changed: 47336 57) Sebastian Bergmann insertions: 33414 deletions: 34753 files: 11816 commits: 610 lines changed: 68167 58) Hannes Magnusson insertions: 33043 deletions: 14162 files: 1778 commits: 823 lines changed: 47205 59) Arnaud Le insertions: 28436 deletions: 7732 files: 1180 commits: 424 lines changed: 36168 60) Frank M. insertions: 28085 deletions: 14338 files: 747 commits: 519 lines changed: 42423 61) Shane Caraveo insertions: 27617 deletions: 2645 files: 312 commits: 142 lines changed: 30262 62) Yasuo Ohgaki insertions: 26710 deletions: 8395 files: 1483 commits: 664 lines changed: 35105 63) Dave Kelsey insertions: 26588 deletions: 468 files: 562 commits: 27 lines changed: 27056 64) Harald Radi insertions: 25896 deletions: 20728 files: 613 commits: 219 lines changed: 46624 65) Etienne Kneuss insertions: 25404 deletions: 7855 files: 861 commits: 277 lines changed: 33259 66) Andrea Faulds insertions: 25282 deletions: 20083 files: 1506 commits: 170 lines changed: 45365 67) Jérôme Loyet insertions: 25268 deletions: 11451 files: 1005 commits: 260 lines changed: 36719 68) Michael Wallner insertions: 23653 deletions: 12396 files: 1223 commits: 505 lines changed: 36049 69) George Schlossnagle insertions: 23345 deletions: 1885 files: 454 commits: 88 lines changed: 25230 70) Uwe Steinmann insertions: 22281 deletions: 6173 files: 343 commits: 204 lines changed: 28454 71) George Wang insertions: 20130 
deletions: 6447 files: 168 commits: 116 lines changed: 26577 72) Georg Richter insertions: 19534 deletions: 6410 files: 859 commits: 271 lines changed: 25944 73) Daniel Convissor insertions: 19211 deletions: 6572 files: 485 commits: 57 lines changed: 25783 74) Jakub Zelenka insertions: 17220 deletions: 11269 files: 575 commits: 268 lines changed: 28489 75) Brian Shire insertions: 16935 deletions: 16184 files: 78 commits: 39 lines changed: 33119 76) Zak Greant insertions: 16701 deletions: 972 files: 86 commits: 29 lines changed: 17673 77) Kalle Sommer insertions: 16485 deletions: 32191 files: 1876 commits: 685 lines changed: 48676 78) Ard Biesheuvel insertions: 16259 deletions: 10992 files: 441 commits: 253 lines changed: 27251 79) Patrick Allaert insertions: 15822 deletions: 443 files: 366 commits: 53 lines changed: 16265 80) Matt Wilmas insertions: 14520 deletions: 17013 files: 366 commits: 117 lines changed: 31533 81) Stefan Marr insertions: 12950 deletions: 5280 files: 320 commits: 74 lines changed: 18230 82) Peter Cowburn insertions: 12102 deletions: 11586 files: 66 commits: 28 lines changed: 23688 83) Pierrick Charron insertions: 11155 deletions: 8124 files: 469 commits: 221 lines changed: 19279 84) Daniel Beulshausen insertions: 10145 deletions: 7710 files: 389 commits: 220 lines changed: 17855 85) Edin Kadribasic insertions: 10116 deletions: 4695 files: 691 commits: 460 lines changed: 14811 86) Anthony Ferrara insertions: 9454 deletions: 7550 files: 522 commits: 115 lines changed: 17004 87) John Coggeshall insertions: 9264 deletions: 5506 files: 172 commits: 62 lines changed: 14770 88) Christian Stocker insertions: 8993 deletions: 3471 files: 436 commits: 255 lines changed: 12464 89) Arpad Ray insertions: 8302 deletions: 631 files: 239 commits: 49 lines changed: 8933 90) Matteo Beccati insertions: 7867 deletions: 2812 files: 439 commits: 158 lines changed: 10679 91) Marc Boeren insertions: 7860 deletions: 3099 files: 246 commits: 60 lines changed: 10959 92) 
 92) Daniel Lowrey  insertions: 7802  deletions: 3051  files: 288  commits: 114  lines changed: 10853
 93) Lars Strojny  insertions: 7790  deletions: 701  files: 267  commits: 94  lines changed: 8491
 94) Seiji Masugata  insertions: 7695  deletions: 1430  files: 145  commits: 42  lines changed: 9125
 95) Sam Ruby  insertions: 7561  deletions: 1266  files: 333  commits: 141  lines changed: 8827
 96) Jouni Ahto  insertions: 7513  deletions: 3744  files: 156  commits: 90  lines changed: 11257
 97) Tjerk Meesters  insertions: 7418  deletions: 35964  files: 452  commits: 125  lines changed: 43382
 98) Gustavo Lopes  insertions: 7360  deletions: 3265  files: 235  commits: 111  lines changed: 10625
 99) Mikko Koppanen  insertions: 6793  deletions: 1535  files: 86  commits: 39  lines changed: 8328
100) Boris Lytochkin  insertions: 6725  deletions: 2327  files: 251  commits: 96  lines changed: 9052
101) Aaron Piotrowski  insertions: 6412  deletions: 4922  files: 1022  commits: 79  lines changed: 11334
102) Tomas V.V.Cox  insertions: 6261  deletions: 3817  files: 401  commits: 343  lines changed: 10078
103) Markus Fischer  insertions: 5956  deletions: 4138  files: 194  commits: 163  lines changed: 10094
104) Timm Friebe  insertions: 5948  deletions: 2511  files: 227  commits: 142  lines changed: 8459
105) Anantha Kesari  insertions: 5914  deletions: 6069  files: 294  commits: 185  lines changed: 11983
106) Jeroen van  insertions: 5652  deletions: 4866  files: 309  commits: 71  lines changed: 10518
107) David Eriksson  insertions: 5350  deletions: 5354  files: 140  commits: 26  lines changed: 10704
108) Colin Viebrock  insertions: 5239  deletions: 4797  files: 127  commits: 62  lines changed: 10036
109) Sergey Kartashoff  insertions: 5035  deletions: 1103  files: 156  commits: 102  lines changed: 6138
110) Stefan Esser  insertions: 4983  deletions: 1762  files: 212  commits: 142  lines changed: 6745
111) Adam Dickmeiss  insertions: 4881  deletions: 2708  files: 81  commits: 61  lines changed: 7589
112) James Moore  insertions: 4855  deletions: 1258  files: 75  commits: 39  lines changed: 6113
113) Levi Morrison  insertions: 4784  deletions: 2968  files: 289  commits: 30  lines changed: 7752
114) Andrew Skalski  insertions: 4602  deletions: 2316  files: 58  commits: 26  lines changed: 6918
115) Mark L.  insertions: 4529  deletions: 1860  files: 69  commits: 34  lines changed: 6389
116) Mark Musone  insertions: 4177  deletions: 710  files: 40  commits: 26  lines changed: 4887
117) Andreas Karajannis  insertions: 3993  deletions: 3380  files: 58  commits: 30  lines changed: 7373
118) Chuck Hagenbuch  insertions: 3922  deletions: 2887  files: 133  commits: 116  lines changed: 6809
119) Joe Watkins  insertions: 3802  deletions: 1013  files: 363  commits: 209  lines changed: 4815
120) Stig Venaas  insertions: 3504  deletions: 1252  files: 142  commits: 104  lines changed: 4756
121) marcosptf <marcosptf@yahoo.com.br>  insertions: 3450  deletions: 257  files: 99  commits: 94  lines changed: 3707
122) James Cox  insertions: 3372  deletions: 4665  files: 785  commits: 68  lines changed: 8037
123) Márcio Almada  insertions: 3273  deletions: 496  files: 117  commits: 35  lines changed: 3769
124) David Croft  insertions: 3238  deletions: 2118  files: 166  commits: 40  lines changed: 5356
125) Davey Shafik  insertions: 3100  deletions: 1271  files: 76  commits: 36  lines changed: 4371
126) David Hedbor  insertions: 3089  deletions: 1287  files: 74  commits: 56  lines changed: 4376
127) Jason Greene  insertions: 3041  deletions: 1800  files: 180  commits: 105  lines changed: 4841
128) Preston L.  insertions: 2912  deletions: 2772  files: 40  commits: 26  lines changed: 5684
129) Sander Roobol  insertions: 2879  deletions: 17089  files: 166  commits: 83  lines changed: 19968
130) David Coallier  insertions: 2753  deletions: 1627  files: 91  commits: 66  lines changed: 4380
131) Jon Parise  insertions: 2697  deletions: 856  files: 136  commits: 115  lines changed: 3553
132) Joey Smith  insertions: 2667  deletions: 2031  files: 117  commits: 94  lines changed: 4698
133) Julien Pauli  insertions: 2275  deletions: 1646  files: 371  commits: 229  lines changed: 3921
134) Danack <Danack@basereality.com>  insertions: 2258  deletions: 996  files: 91  commits: 28  lines changed: 3254
135) Alexey Zakhlestin  insertions: 2240  deletions: 3374  files: 203  commits: 62  lines changed: 5614
136) Adam Baratz  insertions: 2212  deletions: 912  files: 154  commits: 70  lines changed: 3124
137) Jerome Loyet  insertions: 2176  deletions: 465  files: 171  commits: 57  lines changed: 2641
138) Danny Heijl  insertions: 2078  deletions: 1695  files: 66  commits: 48  lines changed: 3773
139) Rolland Santimano  insertions: 2019  deletions: 686  files: 41  commits: 36  lines changed: 2705
140) Reeze Xia  insertions: 2009  deletions: 1351  files: 204  commits: 77  lines changed: 3360
141) Dan Kalowsky  insertions: 1957  deletions: 1197  files: 196  commits: 154  lines changed: 3154
142) Keyur Govande  insertions: 1858  deletions: 653  files: 108  commits: 37  lines changed: 2511
143) Melvyn Sopacua  insertions: 1848  deletions: 714  files: 126  commits: 80  lines changed: 2562
144) Uwe Schindler  insertions: 1832  deletions: 1067  files: 131  commits: 112  lines changed: 2899
145) Egon Schmid  insertions: 1814  deletions: 2078  files: 218  commits: 174  lines changed: 3892
146) andrey <andrey@php.net>  insertions: 1718  deletions: 3432  files: 101  commits: 28  lines changed: 5150
147) Takeshi Abe  insertions: 1671  deletions: 275  files: 117  commits: 59  lines changed: 1946
148) Masaki Kagaya  insertions: 1667  deletions: 844  files: 69  commits: 52  lines changed: 2511
149) Rafael Machado  insertions: 1647  deletions: 112  files: 81  commits: 26  lines changed: 1759
150) Matt Ficken  insertions: 1593  deletions: 230  files: 63  commits: 25  lines changed: 1823
151) Stig S.  insertions: 1546  deletions: 1833  files: 85  commits: 35  lines changed: 3379
152) Leigh <leight@gmail.com>  insertions: 1487  deletions: 150  files: 59  commits: 26  lines changed: 1637
153) Martin Jansen  insertions: 1474  deletions: 368  files: 134  commits: 89  lines changed: 1842
154) Ben Mansell  insertions: 1408  deletions: 337  files: 51  commits: 39  lines changed: 1745
155) Evan Klinger  insertions: 1378  deletions: 630  files: 53  commits: 32  lines changed: 2008
156) Vlad Krupin  insertions: 1340  deletions: 178  files: 39  commits: 29  lines changed: 1518
157) Frank Denis  insertions: 1281  deletions: 679  files: 57  commits: 37  lines changed: 1960
158) Philip Olson  insertions: 1277  deletions: 2184  files: 66  commits: 51  lines changed: 3461
159) Veres Lajos  insertions: 1270  deletions: 1259  files: 750  commits: 29  lines changed: 2529
160) Lukas Smith  insertions: 1266  deletions: 3854  files: 69  commits: 56  lines changed: 5120
161) Jan Lehnardt  insertions: 1229  deletions: 2250  files: 72  commits: 55  lines changed: 3479
162) Leigh <leigh@php.net>  insertions: 1138  deletions: 7771  files: 158  commits: 34  lines changed: 8909
163) Stanley Sufficool  insertions: 1050  deletions: 720  files: 57  commits: 33  lines changed: 1770
164) Raphael Geissert  insertions: 1044  deletions: 119  files: 75  commits: 43  lines changed: 1163
165) Sriram Natarajan  insertions: 1013  deletions: 125  files: 106  commits: 51  lines changed: 1138
166) Gwynne Raskind  insertions: 948  deletions: 904  files: 119  commits: 78  lines changed: 1852
167) Mitch Hagstrand  insertions: 947  deletions: 611  files: 89  commits: 38  lines changed: 1558
168) jim winstead  insertions: 921  deletions: 9221  files: 130  commits: 37  lines changed: 10142
169) Sean Bright  insertions: 909  deletions: 370  files: 59  commits: 46  lines changed: 1279
170) Marko Karppinen  insertions: 807  deletions: 640  files: 52  commits: 32  lines changed: 1447
171) Stefan Roehrich  insertions: 776  deletions: 715  files: 54  commits: 33  lines changed: 1491
172) Popa Adrian  insertions: 757  deletions: 296  files: 117  commits: 79  lines changed: 1053
173) Shein Alexey  insertions: 647  deletions: 4088  files: 90  commits: 40  lines changed: 4735
174) Elizabeth Marie  insertions: 629  deletions: 177  files: 55  commits: 28  lines changed: 806
175) SVN Migration  insertions: 175615  deletions: 1719680  files: 10027  commits: 11  lines changed: 1895295  INELIGIBLE
176) MySQL Team  insertions: 34875  deletions: 7888  files: 306  commits: 16  lines changed: 42763
177) Dan Libby  insertions: 22285  deletions: 444  files: 125  commits: 5  lines changed: 22729
178) Florian MARGAINE  insertions: 14454  deletions: 14348  files: 250  commits: 14  lines changed: 28802
179) Omar Kilani  insertions: 8188  deletions: 4016  files: 93  commits: 12  lines changed: 12204
180) datibbaw <datibbaw@php.net>  insertions: 7778  deletions: 6945  files: 54  commits: 15  lines changed: 14723
181) Arjen Schol  insertions: 6548  deletions: 4583  files: 57  commits: 1  lines changed: 11131
182) Christian Schneider  insertions: 6379  deletions: 187  files: 27  commits: 1  lines changed: 6566
183) Joseph Tate  insertions: 5841  deletions: 4957  files: 33  commits: 19  lines changed: 10798
184) David Carlier  insertions: 5087  deletions: 4089  files: 38  commits: 8  lines changed: 9176
185) Iain Lewis  insertions: 4860  deletions: 0  files: 57  commits: 6  lines changed: 4860
186) Charles R.  insertions: 4242  deletions: 3876  files: 106  commits: 19  lines changed: 8118
187) Sam Liddicott  insertions: 3910  deletions: 2861  files: 16  commits: 4  lines changed: 6771
188) Paragon Initiative  insertions: 3848  deletions: 0  files: 30  commits: 1  lines changed: 3848
189) Dan Helfman  insertions: 3783  deletions: 3783  files: 18  commits: 2  lines changed: 7566
190) Dave Hayden  insertions: 3613  deletions: 256  files: 9  commits: 5  lines changed: 3869
191) Brad House  insertions: 3584  deletions: 251  files: 36  commits: 18  lines changed: 3835
192) Sammy Kaye  insertions: 3452  deletions: 3079  files: 2835  commits: 18  lines changed: 6531
193) Henrique do  insertions: 3350  deletions: 445  files: 99  commits: 20  lines changed: 3795
194) Magnus Määttä  insertions: 3054  deletions: 69  files: 91  commits: 24  lines changed: 3123
195) Nikos Mavroyanopoulos  insertions: 2654  deletions: 1135  files: 22  commits: 14  lines changed: 3789
196) Chris Vandomelen  insertions: 2618  deletions: 305  files: 22  commits: 14  lines changed: 2923
197) Christian Seiler  insertions: 2566  deletions: 1172  files: 77  commits: 18  lines changed: 3738
198) Michele Locati  insertions: 2354  deletions: 20  files: 29  commits: 1  lines changed: 2374
199) Brad Broerman  insertions: 2302  deletions: 2076  files: 14  commits: 14  lines changed: 4378
200) Dan Scott  insertions: 2144  deletions: 24  files: 46  commits: 18  lines changed: 2168
201) Olivier DOUCET  insertions: 2121  deletions: 45  files: 88  commits: 11  lines changed: 2166
202) Chris Wright  insertions: 2045  deletions: 1032  files: 63  commits: 11  lines changed: 3077
203) Venkat Raghavan  insertions: 1980  deletions: 125  files: 36  commits: 12  lines changed: 2105
204) Holger Zimmermann  insertions: 1975  deletions: 210  files: 29  commits: 19  lines changed: 2185
205) Yiduo (David)  insertions: 1926  deletions: 1801  files: 171  commits: 5  lines changed: 3727
206) Brian Evans  insertions: 1910  deletions: 1908  files: 22  commits: 1  lines changed: 3818
207) Eric Stewart  insertions: 1825  deletions: 30  files: 100  commits: 18  lines changed: 1855
208) Francois Laupretre  insertions: 1794  deletions: 892  files: 61  commits: 15  lines changed: 2686
209) Masaki Fujimoto  insertions: 1704  deletions: 57  files: 17  commits: 4  lines changed: 1761
210) William Martin  insertions: 1701  deletions: 0  files: 88  commits: 9  lines changed: 1701
211) Hénot David  insertions: 1682  deletions: 39  files: 27  commits: 8  lines changed: 1721
212) Ian Holsman  insertions: 1666  deletions: 10  files: 12  commits: 4  lines changed: 1676
213) Pedro Magalhães  insertions: 1557  deletions: 1128  files: 399  commits: 14  lines changed: 2685
214) Clayton Collie  insertions: 1555  deletions: 3  files: 10  commits: 3  lines changed: 1558
215) Christopher Kings-Lynne  insertions: 1549  deletions: 241  files: 35  commits: 9  lines changed: 1790
216) Brad LaFountain  insertions: 1507  deletions: 815  files: 31  commits: 11  lines changed: 2322
217) Mark Karpeles  insertions: 1484  deletions: 952  files: 25  commits: 7  lines changed: 2436
218) Alan Knowles  insertions: 1452  deletions: 478  files: 29  commits: 24  lines changed: 1930
219) Corne' Cornelius  insertions: 1433  deletions: 1394  files: 16  commits: 13  lines changed: 2827
220) Knut Urdalen  insertions: 1415  deletions: 227  files: 71  commits: 11  lines changed: 1642
221) ULF WENDEL  insertions: 1410  deletions: 167  files: 22  commits: 9  lines changed: 1577
222) Antonio Diaz  insertions: 1364  deletions: 20  files: 76  commits: 10  lines changed: 1384
223) Boian Bonev  insertions: 1314  deletions: 421  files: 30  commits: 23  lines changed: 1735
224) Rodrigo Prado  insertions: 1300  deletions: 3  files: 38  commits: 6  lines changed: 1303
225) Michael Maclean  insertions: 1293  deletions: 238  files: 22  commits: 6  lines changed: 1531
226) Côme Chilliet  insertions: 1256  deletions: 495  files: 49  commits: 24  lines changed: 1751
227) Terry Ellison  insertions: 1197  deletions: 937  files: 20  commits: 3  lines changed: 2134
228) Ben Ramsey  insertions: 1193  deletions: 665  files: 28  commits: 15  lines changed: 1858
229) Nick Gorham  insertions: 1110  deletions: 125  files: 21  commits: 7  lines changed: 1235
230) Côme Bernigaud  insertions: 1033  deletions: 1063  files: 152  commits: 23  lines changed: 2096
231) Dominic Luechinger  insertions: 993  deletions: 121  files: 20  commits: 9  lines changed: 1114
232) Vincent Blavet  insertions: 967  deletions: 326  files: 18  commits: 16  lines changed: 1293
233) Igor Wiedler  insertions: 955  deletions: 205  files: 77  commits: 18  lines changed: 1160
234) Den V.  insertions: 929  deletions: 556  files: 22  commits: 8  lines changed: 1485
235) Brendan W.  insertions: 916  deletions: 279  files: 18  commits: 12  lines changed: 1195
236) Harrie Hazewinkel  insertions: 902  deletions: 219  files: 19  commits: 13  lines changed: 1121
237) Jan Borsodi  insertions: 881  deletions: 18  files: 13  commits: 12  lines changed: 899
238) Michael M  insertions: 875  deletions: 49  files: 20  commits: 8  lines changed: 924
239) Olivier Hill  insertions: 864  deletions: 1006  files: 59  commits: 7  lines changed: 1870
240) Maxim Maletsky  insertions: 840  deletions: 774  files: 18  commits: 17  lines changed: 1614
241) Jan Kneschke  insertions: 802  deletions: 7  files: 7  commits: 3  lines changed: 809
242) Jay Smith  insertions: 759  deletions: 153  files: 32  commits: 18  lines changed: 912
243) Jaroslaw Kolakowski  insertions: 734  deletions: 125  files: 14  commits: 8  lines changed: 859
244) Craig Duncan  insertions: 728  deletions: 103  files: 42  commits: 16  lines changed: 831
245) Florian Anderiasch  insertions: 711  deletions: 129  files: 70  commits: 21  lines changed: 840
246) David Walker  insertions: 699  deletions: 87  files: 28  commits: 11  lines changed: 786
247) Sebastian Schürmann  insertions: 690  deletions: 338  files: 23  commits: 20  lines changed: 1028
248) andrewnester <andrew.nester.dev@gmail.com>  insertions: 689  deletions: 101  files: 47  commits: 13  lines changed: 790
249) donnut <erwin.poeze@gmail.com>  insertions: 682  deletions: 31  files: 43  commits: 3  lines changed: 713
250) Lonny Kapelushnik  insertions: 661  deletions: 191  files: 33  commits: 9  lines changed: 852
251) Andreas Treichel  insertions: 657  deletions: 16  files: 31  commits: 5  lines changed: 673
252) Rainer Schaaf  insertions: 654  deletions: 94  files: 12  commits: 8  lines changed: 748
253) jas- <jason.gerfen@gmail.com>  insertions: 654  deletions: 1  files: 7  commits: 2  lines changed: 655
254) Gernot Vormayr  insertions: 646  deletions: 12  files: 30  commits: 6  lines changed: 658
255) Moshe Doron  insertions: 644  deletions: 33  files: 16  commits: 8  lines changed: 677
256) Jelle van  insertions: 637  deletions: 8  files: 14  commits: 6  lines changed: 645
257) Andrey Andreev  insertions: 629  deletions: 79  files: 35  commits: 5  lines changed: 708
258) Alex Waugh  insertions: 615  deletions: 92  files: 34  commits: 20  lines changed: 707
259) Phil Driscoll  insertions: 614  deletions: 148  files: 1  commits: 1  lines changed: 762
260) Ondřej Hošek  insertions: 588  deletions: 0  files: 4  commits: 1  lines changed: 588
261) sskaje <sskaje@gmail.com>  insertions: 585  deletions: 76  files: 9  commits: 8  lines changed: 661
262) Kristian Köhntopp  insertions: 582  deletions: 43  files: 33  commits: 16  lines changed: 625
263) root <root@precise64.(none)>  insertions: 571  deletions: 0  files: 13  commits: 5  lines changed: 571
264) Alex Leigh  insertions: 570  deletions: 26  files: 6  commits: 4  lines changed: 596
265) Rouven Weßling <me@rouvenwessling.de>  insertions: 566  deletions: 112  files: 8  commits: 1  lines changed: 678
266) Douglas Goldstein  insertions: 559  deletions: 241  files: 21  commits: 16  lines changed: 800
267) Rowan Collins  insertions: 557  deletions: 153  files: 44  commits: 14  lines changed: 710
268) Edgar R.  insertions: 541  deletions: 61  files: 40  commits: 18  lines changed: 602
269) Rouven Weßling  insertions: 533  deletions: 685  files: 71  commits: 10  lines changed: 1218
270) Daniel Persson  insertions: 503  deletions: 160  files: 19  commits: 9  lines changed: 663
271) Onn Ben-Zvi  insertions: 503  deletions: 0  files: 6  commits: 2  lines changed: 503
272) Ben Longden  insertions: 500  deletions: 0  files: 22  commits: 4  lines changed: 500
273) Alexander Feldman  insertions: 497  deletions: 19  files: 18  commits: 10  lines changed: 516
274) Kai Schroeder  insertions: 484  deletions: 57  files: 22  commits: 20  lines changed: 541
275) xKhorasan <xKhorasan@gmail.com>  insertions: 471  deletions: 441  files: 5  commits: 1  lines changed: 912
276) Joe Orton  insertions: 461  deletions: 199  files: 75  commits: 37  lines changed: 660
277) Thomas Punt  insertions: 460  deletions: 349  files: 51  commits: 19  lines changed: 809
278) Patrick van  insertions: 452  deletions: 284  files: 17  commits: 8  lines changed: 736
279) Tal Peer  insertions: 450  deletions: 800  files: 49  commits: 35  lines changed: 1250
280) Anders Johannsen  insertions: 444  deletions: 158  files: 2  commits: 2  lines changed: 602
281) David Hill  insertions: 428  deletions: 400  files: 39  commits: 10  lines changed: 828
282) Elan Ruusamäe  insertions: 426  deletions: 438  files: 7  commits: 4  lines changed: 864
283) Fabien Villepinte  insertions: 415  deletions: 43  files: 26  commits: 10  lines changed: 458
284) Lars Westermann  insertions: 404  deletions: 283  files: 24  commits: 21  lines changed: 687
285) Fredrik Öhrn  insertions: 396  deletions: 42  files: 9  commits: 6  lines changed: 438
286) Brian France  insertions: 390  deletions: 193  files: 35  commits: 26  lines changed: 583
287) ptarjan <ptarjan@fb.com>  insertions: 390  deletions: 237  files: 151  commits: 7  lines changed: 627
288) Alexander Merz  insertions: 386  deletions: 93  files: 10  commits: 9  lines changed: 479
289) Juan Basso  insertions: 376  deletions: 122  files: 19  commits: 6  lines changed: 498
290) pwolanin <pwolanin@49851.no-reply.drupal.org>  insertions: 366  deletions: 4  files: 6  commits: 1  lines changed: 370
291) Michał Brzuchalski  insertions: 363  deletions: 116  files: 24  commits: 1  lines changed: 479
292) Tyson Andre  insertions: 356  deletions: 10  files: 13  commits: 7  lines changed: 366
293) Andy Sautins  insertions: 355  deletions: 580  files: 9  commits: 9  lines changed: 935
294) Keith Smiley  insertions: 354  deletions: 7  files: 12  commits: 4  lines changed: 361
295) Lauri Kenttä  insertions: 353  deletions: 124  files: 39  commits: 27  lines changed: 477
296) Chad Sikorra  insertions: 353  deletions: 10  files: 16  commits: 10  lines changed: 363
297) Christian Wenz  insertions: 349  deletions: 76  files: 7  commits: 6  lines changed: 425
298) Wei Dai  insertions: 341  deletions: 85  files: 12  commits: 4  lines changed: 426
299) Valentin VALCIU  insertions: 341  deletions: 1  files: 3  commits: 1  lines changed: 342
300) jhdxr <jhdxr@php.net>  insertions: 336  deletions: 40  files: 23  commits: 10  lines changed: 376
301) Ondřej Surý  insertions: 333  deletions: 727  files: 30  commits: 13  lines changed: 1060
302) Gustavo Frederico  insertions: 317  deletions: 267  files: 8  commits: 7  lines changed: 584
303) Chuck Burgess  insertions: 307  deletions: 0  files: 2  commits: 2  lines changed: 307
304) SammyK <sammyk@sammykmedia.com>  insertions: 291  deletions: 18  files: 11  commits: 2  lines changed: 309
305) Tim Toohey  insertions: 285  deletions: 69  files: 12  commits: 4  lines changed: 354
306) Will Fitch  insertions: 275  deletions: 100  files: 22  commits: 13  lines changed: 375
307) Higor Eurípedes  insertions: 275  deletions: 275  files: 1  commits: 1  lines changed: 550
308) Daniela Mariaschi  insertions: 273  deletions: 120  files: 23  commits: 18  lines changed: 393
309) Kévin Dunglas  insertions: 273  deletions: 33  files: 14  commits: 7  lines changed: 306
310) Ville Hukkamäki  insertions: 272  deletions: 90  files: 17  commits: 4  lines changed: 362
311) Marco Pivetta  insertions: 269  deletions: 74  files: 6  commits: 5  lines changed: 343
312) Ryan Biesemeyer  insertions: 267  deletions: 138  files: 15  commits: 12  lines changed: 405
313) Manuel Mausz  insertions: 264  deletions: 36  files: 11  commits: 5  lines changed: 300
314) Markus Staab  insertions: 257  deletions: 258  files: 23  commits: 19  lines changed: 515
315) Philip Hofstetter  insertions: 257  deletions: 3  files: 7  commits: 3  lines changed: 260
316) James E.  insertions: 255  deletions: 127  files: 17  commits: 8  lines changed: 382
317) Gergely Madarász  insertions: 255  deletions: 48  files: 16  commits: 7  lines changed: 303
318) Bishop Bettini  insertions: 255  deletions: 14  files: 7  commits: 3  lines changed: 269
319) Joshua Thijssen  insertions: 252  deletions: 9  files: 10  commits: 4  lines changed: 261
320) KoenigsKind <git@koenigskind.net>  insertions: 250  deletions: 4  files: 7  commits: 1  lines changed: 254
321) John Donagher  insertions: 249  deletions: 168  files: 20  commits: 14  lines changed: 417
322) Jakub Skopal  insertions: 247  deletions: 88  files: 11  commits: 8  lines changed: 335
323) Boro Sitnikovski  insertions: 245  deletions: 95  files: 35  commits: 8  lines changed: 340
324) jubianchi <contact@jubianchi.fr>  insertions: 242  deletions: 42  files: 8  commits: 4  lines changed: 284
325) David Viner  insertions: 241  deletions: 41  files: 10  commits: 5  lines changed: 282
326) reeze <reeze.xia@gmail.com>  insertions: 235  deletions: 41  files: 12  commits: 6  lines changed: 276
327) Garrett Serack  insertions: 230  deletions: 44  files: 19  commits: 11  lines changed: 274
328) Niklas Keller  insertions: 228  deletions: 81  files: 27  commits: 9  lines changed: 309
329) Sherif Ramadan  insertions: 227  deletions: 13  files: 16  commits: 3  lines changed: 240
330) Justin Erenkrantz  insertions: 225  deletions: 240  files: 5  commits: 3  lines changed: 465
331) Jos Elstgeest  insertions: 221  deletions: 3  files: 7  commits: 2  lines changed: 224
332) Tom Van  insertions: 220  deletions: 9500  files: 204  commits: 8  lines changed: 9720
333) m.bennewitz <marc.bennewitz@unister.de>  insertions: 220  deletions: 1  files: 8  commits: 2  lines changed: 221
334) John Boehr  insertions: 218  deletions: 28  files: 25  commits: 6  lines changed: 246
335) Mattias Bengtsson  insertions: 214  deletions: 41  files: 24  commits: 13  lines changed: 255
336) jfha73 <jfha73@gmail.com>  insertions: 209  deletions: 209  files: 11  commits: 9  lines changed: 418
337) Christian Dickmann  insertions: 197  deletions: 33  files: 20  commits: 13  lines changed: 230
338) nikita2206 <inefedor@gmail.com>  insertions: 196  deletions: 83  files: 10  commits: 4  lines changed: 279
339) Adam Saponara  insertions: 191  deletions: 76  files: 17  commits: 7  lines changed: 267
340) Mark Plomer  insertions: 191  deletions: 83  files: 6  commits: 1  lines changed: 274
341) fabrice aeschbacher  insertions: 190  deletions: 1  files: 2  commits: 1  lines changed: 191
342) Michael Orlitzky  insertions: 185  deletions: 93  files: 22  commits: 15  lines changed: 278
343) root <marcosptf@yahoo.com.br>  insertions: 182  deletions: 0  files: 3  commits: 2  lines changed: 182
344) Dorin Marcoci  insertions: 179  deletions: 32  files: 13  commits: 7  lines changed: 211
345) Steven Lawrance  insertions: 179  deletions: 91  files: 4  commits: 3  lines changed: 270
346) Trevor Suarez  insertions: 174  deletions: 37  files: 11  commits: 6  lines changed: 211
347) Aaron Bannert  insertions: 173  deletions: 53  files: 17  commits: 13  lines changed: 226
348) gron1987 <vlogvinskiy@cogniance.com>  insertions: 171  deletions: 105  files: 7  commits: 2  lines changed: 276
349) David Zuelke  insertions: 163  deletions: 20  files: 19  commits: 16  lines changed: 183
350) Matthew Trescott  insertions: 162  deletions: 1  files: 4  commits: 1  lines changed: 163
351) Jille Timmermans  insertions: 161  deletions: 15  files: 21  commits: 13  lines changed: 176
352) André Langhorst  insertions: 159  deletions: 182  files: 23  commits: 10  lines changed: 341
353) Mark Baker  insertions: 159  deletions: 4  files: 7  commits: 1  lines changed: 163
354) Joe Bylund  insertions: 159  deletions: 25  files: 2  commits: 1  lines changed: 184
355) Eyal Teutsch  insertions: 157  deletions: 109  files: 76  commits: 22  lines changed: 266
356) mcq8 <php@mcq8.be>  insertions: 154  deletions: 84  files: 12  commits: 6  lines changed: 238
357) Marc Pohl  insertions: 152  deletions: 0  files: 2  commits: 1  lines changed: 152
358) Mathieu Kooiman  insertions: 151  deletions: 2  files: 7  commits: 6  lines changed: 153
359) Mauricio Vieira  insertions: 148  deletions: 0  files: 4  commits: 2  lines changed: 148
360) Guilherme Blanco  insertions: 147  deletions: 101  files: 35  commits: 5  lines changed: 248
361) Torben Wilson  insertions: 147  deletions: 103  files: 10  commits: 4  lines changed: 250
362) Mark Jones  insertions: 147  deletions: 32  files: 6  commits: 3  lines changed: 179
363) Bartosz Dziewoński  insertions: 147  deletions: 3  files: 3  commits: 1  lines changed: 150
364) Arnout Boks  insertions: 137  deletions: 13  files: 13  commits: 6  lines changed: 150
365) STANLEY SUFFICOOL  insertions: 135  deletions: 111  files: 8  commits: 4  lines changed: 246
366) Paul Oehler  insertions: 133  deletions: 13  files: 9  commits: 2  lines changed: 146
367) Philippe Verdy  insertions: 132  deletions: 60  files: 8  commits: 9  lines changed: 192
368) Tianfang Yang  insertions: 132  deletions: 110  files: 13  commits: 4  lines changed: 242
369) Michael Moravec  insertions: 132  deletions: 34  files: 11  commits: 3  lines changed: 166
370) Zheng SHAO  insertions: 130  deletions: 34  files: 8  commits: 5  lines changed: 164
371) Alan Brown  insertions: 128  deletions: 58  files: 8  commits: 4  lines changed: 186
372) Magnus Määttä  insertions: 128  deletions: 17  files: 5  commits: 2  lines changed: 145
373) Gavin Sherry  insertions: 123  deletions: 33  files: 8  commits: 8  lines changed: 156
374) ju1ius <ju1ius@laposte.net>  insertions: 123  deletions: 51  files: 11  commits: 2  lines changed: 174
375) Bouke van  insertions: 123  deletions: 28  files: 4  commits: 1  lines changed: 151
376) Ted Rolle  insertions: 122  deletions: 32  files: 4  commits: 3  lines changed: 154
377) Jeff Welch  insertions: 119  deletions: 34  files: 41  commits: 10  lines changed: 153
378) Eric Stenson  insertions: 118  deletions: 88  files: 24  commits: 2  lines changed: 206
379) David Reid  insertions: 114  deletions: 35  files: 14  commits: 8  lines changed: 149
380) Marcel Araujo  insertions: 114  deletions: 30  files: 5  commits: 2  lines changed: 144
381) Joe Martin  insertions: 114  deletions: 2  files: 2  commits: 1  lines changed: 116
382) Alexander Zhuravlev  insertions: 112  deletions: 16  files: 5  commits: 1  lines changed: 128
383) Edwin Hoksberg  insertions: 111  deletions: 0  files: 5  commits: 1  lines changed: 111
384) kusano <kusano@users.noreply.github.com>  insertions: 108  deletions: 2  files: 4  commits: 2  lines changed: 110
385) Jim Jagielski  insertions: 107  deletions: 30  files: 12  commits: 7  lines changed: 137
386) Anthony Whitehead  insertions: 107  deletions: 61  files: 5  commits: 3  lines changed: 168
387) Andreas Streichardt  insertions: 107  deletions: 57  files: 6  commits: 1  lines changed: 164
388) Mateusz Kocielski  insertions: 105  deletions: 19  files: 20  commits: 15  lines changed: 124
389) Sergey Akbarov  insertions: 105  deletions: 4  files: 4  commits: 1  lines changed: 109
390) Jesus M.  insertions: 104  deletions: 86  files: 6  commits: 6  lines changed: 190
391) theanomaly.is@gmail.com <googleguy@googleguy-virtualbox.(none)>  insertions: 104  deletions: 10  files: 5  commits: 1  lines changed: 114
392) Martin Kraemer  insertions: 103  deletions: 61  files: 65  commits: 18  lines changed: 164
393) Andrew Faulds  insertions: 103  deletions: 1580  files: 30  commits: 7  lines changed: 1683
394) Christian Schmidt  insertions: 103  deletions: 43  files: 12  commits: 2  lines changed: 146
395) Tjerk Anne  insertions: 101  deletions: 101  files: 31  commits: 10  lines changed: 202
396) Matheus Degiovani  insertions: 100  deletions: 4  files: 4  commits: 2  lines changed: 104
397) Abdul-Kareem Abo-Namous  insertions: 98  deletions: 37  files: 4  commits: 2  lines changed: 135
398) Robin Gloster  insertions: 98  deletions: 48  files: 4  commits: 1  lines changed: 146
399) Dmitri Iouchtchenko  insertions: 98  deletions: 39  files: 2  commits: 1  lines changed: 137
400) Adam Gegotek  insertions: 96  deletions: 0  files: 4  commits: 2  lines changed: 96
401) Ingmar Runge  insertions: 96  deletions: 22  files: 5  commits: 2  lines changed: 118
402) Andreas Heigl  insertions: 93  deletions: 0  files: 7  commits: 5  lines changed: 93
403) Ingo Walz  insertions: 90  deletions: 8  files: 5  commits: 2  lines changed: 98
404) manuel <manuel@mausz.at>  insertions: 89  deletions: 6  files: 8  commits: 3  lines changed: 95
405) Keyur <kgovande@etsy.com>  insertions: 88  deletions: 30  files: 17  commits: 12  lines changed: 118
406) Richard Fussenegger  insertions: 88  deletions: 108  files: 9  commits: 7  lines changed: 196
407) Sean DuBois  insertions: 87  deletions: 44  files: 5  commits: 3  lines changed: 131
408) Ron Chmara  insertions: 87  deletions: 24  files: 3  commits: 3  lines changed: 111
409) Andrew Curioso  insertions: 86  deletions: 14  files: 8  commits: 2  lines changed: 100
410) Andrew Nester  insertions: 86  deletions: 49  files: 4  commits: 1  lines changed: 135
411) Rathna N  insertions: 85  deletions: 0  files: 5  commits: 1  lines changed: 85
412) Doug MacEachern  insertions: 84  deletions: 24  files: 14  commits: 14  lines changed: 108
413) Robert Thompson  insertions: 84  deletions: 30  files: 7  commits: 7  lines changed: 114
414) Wilfredo Sanchez  insertions: 84  deletions: 55  files: 24  commits: 5  lines changed: 139
415) Brian Moon  insertions: 84  deletions: 1  files: 4  commits: 3  lines changed: 85
416) Nicolas Grekas  insertions: 84  deletions: 74  files: 10  commits: 2  lines changed: 158
417) c9s <yoanlin93@gmail.com>  insertions: 83  deletions: 23  files: 5  commits: 5  lines changed: 106
418) Tim Starling  insertions: 82  deletions: 29  files: 1  commits: 1  lines changed: 111
419) Scott <scott@paragonie.com>  insertions: 81  deletions: 22  files: 9  commits: 7  lines changed: 103
420) husman <husman85@gamersdig.com>  insertions: 81  deletions: 39  files: 3  commits: 3  lines changed: 120
421) Jeremy Mikola  insertions: 81  deletions: 0  files: 7  commits: 2  lines changed: 81
422) Wenhui Zhang  insertions: 80  deletions: 12  files: 3  commits: 1  lines changed: 92
423) Brian Bruns  insertions: 79  deletions: 17  files: 4  commits: 2  lines changed: 96
424) David Matejka  insertions: 79  deletions: 0  files: 3  commits: 1  lines changed: 79
425) Kirill Maximov  insertions: 78  deletions: 23  files: 5  commits: 2  lines changed: 101
426) Paul Garvin  insertions: 77  deletions: 22  files: 2  commits: 1  lines changed: 99
427) Andre Langhorst  insertions: 75  deletions: 8  files: 5  commits: 5  lines changed: 83
428) Niklas Lindgren  insertions: 75  deletions: 22  files: 5  commits: 1  lines changed: 97
429) Marc Easen  insertions: 73  deletions: 73  files: 34  commits: 1  lines changed: 146
430) BohwaZ <bohwaz@github.com>  insertions: 72  deletions: 11  files: 4  commits: 2  lines changed: 83
431) Moritz Fain  insertions: 71  deletions: 8  files: 3  commits: 1  lines changed: 79
432) Willian Gustavo  insertions: 70  deletions: 7  files: 8  commits: 4  lines changed: 77
433) Anton Blanchard  insertions: 68  deletions: 179  files: 5  commits: 5  lines changed: 247
434) John Leach  insertions: 68  deletions: 0  files: 1  commits: 1  lines changed: 68
435) John Jawed  insertions: 67  deletions: 2  files: 4  commits: 2  lines changed: 69
436) Leo Feyer  insertions: 67  deletions: 1  files: 4  commits: 1  lines changed: 68
437) Carlos André  insertions: 66  deletions: 1  files: 5  commits: 2  lines changed: 67
438) William Felipe  insertions: 64  deletions: 8  files: 5  commits: 5  lines changed: 72
439) Jim Zubov  insertions: 64  deletions: 23  files: 12  commits: 5  lines changed: 87
440) Paul Annesley  insertions: 64  deletions: 4  files: 16  commits: 4  lines changed: 68
441) Mariano Iglesias  insertions: 64  deletions: 30  files: 5  commits: 2  lines changed: 94
442) Eitan Mosenkis  insertions: 63  deletions: 4  files: 2  commits: 2  lines changed: 67
443) Grigorii Sokolik  insertions: 63  deletions: 17  files: 4  commits: 1  lines changed: 80
444) Chris Jarecki  insertions: 62  deletions: 25  files: 4  commits: 3  lines changed: 87
445) Ken Coar  insertions: 62  deletions: 46  files: 3  commits: 2  lines changed: 108
446) Hieu Le  insertions: 62  deletions: 35  files: 3  commits: 1  lines changed: 97
447) Sergei Morozov  insertions: 60  deletions: 31  files: 13  commits: 2  lines changed: 91
448) Damjan Cvetko  insertions: 60  deletions: 1  files: 3  commits: 1  lines changed: 61
449) Ole Markus  insertions: 59  deletions: 29  files: 5  commits: 5  lines changed: 88
450) Marcus Bointon  insertions: 59  deletions: 2  files: 2  commits: 1  lines changed: 61
451) Peter LeBrun  insertions: 59  deletions: 2  files: 2  commits: 1  lines changed: 61
452) Joshua Rogers  insertions: 58  deletions: 75  files: 19  commits: 12  lines changed: 133
453) root <phackwer@gmail.com>  insertions: 58  deletions: 21  files: 4  commits: 1  lines changed: 79
454) Guenter Knauf  insertions: 57  deletions: 246  files: 29  commits: 17  lines changed: 303
455) Bryan Hanks,  insertions: 57  deletions: 52  files: 4  commits: 2  lines changed: 109
456) chance garcia  insertions: 57  deletions: 0  files: 2  commits: 2  lines changed: 57
457) Marcelo Diniz  insertions: 57  deletions: 0  files: 3  commits: 1  lines changed: 57
458) David Caldwell  insertions: 56  deletions: 4  files: 4  commits: 4  lines changed: 60
459) SATO Kentaro  insertions: 56  deletions: 49  files: 2  commits: 1  lines changed: 105
460) Pablo Santiago  insertions: 56  deletions: 19  files: 3  commits: 1  lines changed: 75
461) zoe slattery  insertions: 54  deletions: 44  files: 21  commits: 6  lines changed: 98
462) Sander Steffann  insertions: 54  deletions: 6  files: 7  commits: 5  lines changed: 60
463) Mic <lenhatanh86@gmail.com>  insertions: 54  deletions: 6  files: 4  commits: 2  lines changed: 60
464) Mike Gerdts  insertions: 53  deletions: 10  files: 5  commits: 4  lines changed: 63
465) Willem-Jan <wjzijderveld@gmail.com>  insertions: 53  deletions: 3  files: 5  commits: 4  lines changed: 56
466) Jefersson Nathan  insertions: 52  deletions: 52  files: 5  commits: 6  lines changed: 104
467) Aaron Hamid  insertions: 52  deletions: 0  files: 1  commits: 1  lines changed: 52
468) Bruce Weirdan  insertions: 52  deletions: 1  files: 3  commits: 1  lines changed: 53
469) Sobak <msobaczewski@gmail.com>  insertions: 51  deletions: 756  files: 27  commits: 10  lines changed: 807
470) Cameron Porter  insertions: 51  deletions: 51  files: 7  commits: 2  lines changed: 102
471) Matthew Flaschen  insertions: 51  deletions: 0  files: 2  commits: 1  lines changed: 51
472) Nikita Nefedov  insertions: 50  deletions: 8  files: 6  commits: 2  lines changed: 58
473) Sébastien Santoro  insertions: 50  deletions: 36  files: 3  commits: 2  lines changed: 86
474) Ivan Enderlin  insertions: 49  deletions: 30  files: 16  commits: 3  lines changed: 79
475) Vince <github@darkain.com>  insertions: 49  deletions: 0  files: 3  commits: 1  lines changed: 49
476) Martin Vobruba  insertions: 48  deletions: 14  files: 12  commits: 2  lines changed: 62
477) Eddie Kohler  insertions: 48  deletions: 1  files: 4  commits: 1  lines changed: 49
478) Cliff Woolley  insertions: 47  deletions: 10  files: 6  commits: 5  lines changed: 57
479) Mats Lindh  insertions: 47  deletions: 4  files: 5  commits: 4  lines changed: 51
480) Benjamin Robin  insertions: 47  deletions: 41  files: 2  commits: 1  lines changed: 88
481) Richard Heyes  insertions: 46  deletions: 0  files: 2  commits: 2  lines changed: 46
482) Tim Strehle  insertions: 46  deletions: 9  files: 7  commits: 1  lines changed: 55
483) MiRacLe.RPZ <miracle@rpz.name>  insertions: 45  deletions: 7  files: 4  commits: 4  lines changed: 52
484) Alexander Moskalev  insertions: 45  deletions: 3  files: 6  commits: 3  lines changed: 48
485) rfussenegger <richard.fussenegger@trivago.com>  insertions: 45  deletions: 2  files: 3  commits: 1  lines changed: 47
486) SeeSchloss <see@seos.fr>  insertions: 44  deletions: 0  files: 27  commits: 1  lines changed: 44
487) Stricted <info@stricted.net>  insertions: 43  deletions: 26  files: 27  commits: 2  lines changed: 69
488) Kubo2 <kelerest123@gmail.com>  insertions: 43  deletions: 0  files: 2  commits: 1  lines changed: 43
489) Sean Fraser  insertions: 43  deletions: 8  files: 2  commits: 1  lines changed: 51
490) Anil Madhavapeddy  insertions: 42  deletions: 36  files: 14  commits: 10  lines changed: 78
491) Vektah <adam.scarr@99designs.com>  insertions: 42  deletions: 35  files: 4  commits: 2  lines changed: 77
492) Ben Scholzen  insertions: 41  deletions: 21  files: 8  commits: 1  lines changed: 62
493) Nayana Hettiarachchi  insertions: 40  deletions: 7  files: 7  commits: 6  lines changed: 47
494) Chuan Ma  insertions: 40  deletions: 36  files: 4  commits: 3  lines changed: 76
495) somedaysummer <info@timothytown.com>  insertions: 40  deletions: 9  files: 5  commits: 2  lines changed: 49
496) dedal.qq <dedal.qq@gmail.com>  insertions: 40  deletions: 39  files: 2  commits: 2  lines changed: 79
497) Pascal Borreli  insertions: 39  deletions: 39  files: 12  commits: 1  lines changed: 78
498) julien.pons <julien.pons@mobpartner.com>  insertions: 36  deletions: 1  files: 2  commits: 1  lines changed: 37
499) Matt Bonneau  insertions: 36  deletions: 5  files: 3  commits: 1  lines changed: 41
500) pascalc <pascal.chevrel@free.fr>  insertions: 35  deletions: 134  files: 2  commits: 2  lines changed: 169
501) Petr Sýkora  insertions: 35  deletions: 0  files: 1  commits: 1  lines changed: 35
502) Ralf Lang  insertions: 34  deletions: 22  files: 18  commits: 10  lines changed: 56
503) Yussuf Khalil  insertions: 34  deletions: 8  files: 3  commits: 2  lines changed: 42
504) James Titcumb  insertions: 33  deletions: 31  files: 32  commits: 3  lines changed: 64
505) Grundik <grundik@ololo.cc>  insertions: 33  deletions: 3  files: 3  commits: 1  lines changed: 36
506) ALeX Kazik  insertions: 33  deletions: 0  files: 6  commits: 1  lines changed: 33
507) Hugo Fonseca  insertions: 33  deletions: 0  files: 1  commits: 1  lines changed: 33
508) ekinhbayar <ekin@coproductivity.com>  insertions: 32  deletions: 3  files: 3  commits: 2  lines changed: 35
509) Oleg Efimov  insertions: 32  deletions: 1  files: 4  commits: 1  lines changed: 33
510) EC2 Default  insertions: 32  deletions: 0  files: 3  commits: 1  lines changed: 32
511) wapmorgan <wapmorgan@gmail.com>  insertions: 32  deletions: 18  files: 2  commits: 1  lines changed: 50
512) Dejan Marjanovic  insertions: 31  deletions: 7  files: 5  commits: 2  lines changed: 38
513) Ryan Bloom  insertions: 31  deletions: 3  files: 3  commits: 2  lines changed: 34
514) Leo Baschy  insertions: 31  deletions: 31  files: 4  commits: 2  lines changed: 62
515) Ludovico Magnocavallo  insertions: 31  deletions: 27  files: 6  commits: 1  lines changed: 58
516) Sean Coates  insertions: 30  deletions: 2  files: 3  commits: 3  lines changed: 32
517) Kevin Israel  insertions: 30  deletions: 8  files: 2  commits: 1  lines changed: 38
518) mk-j <mark@zedwood.com>  insertions: 30  deletions: 0  files: 2  commits: 1  lines changed: 30
519) Aidas Kasparas  insertions: 30  deletions: 3  files: 1  commits: 1  lines changed: 33
520) y-uti <y.uchiyama.1015@gmail.com>  insertions: 29  deletions: 4  files: 11  commits: 5  lines changed: 33
521) Benedict Singer  insertions: 29  deletions: 17  files: 2  commits: 1  lines changed: 46
522) Amo Chohan  insertions: 29  deletions: 0  files: 2  commits: 1  lines changed: 29
523) Kevin <kevin@php.net>  insertions: 29  deletions: 18
files: 4 commits: 1 lines changed: 47 524) Peter Kokot insertions: 28 deletions: 31 files: 17 commits: 7 lines changed: 59 525) Chris Tankersley insertions: 28 deletions: 9 files: 4 commits: 1 lines changed: 37 526) Ken Guest insertions: 28 deletions: 0 files: 1 commits: 1 lines changed: 28 527) Benjamin W. insertions: 28 deletions: 1 files: 3 commits: 1 lines changed: 29 528) Paulo Eduardo insertions: 27 deletions: 0 files: 1 commits: 1 lines changed: 27 529) cyanogenmod <cm@cyanogenmod> insertions: 27 deletions
2026-01-13T09:30:39
https://nbviewer.jupyter.org/github/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb#Filling-and-manipulating-arrays
Scientific Python: Transitioning from MATLAB to Python ¶ Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium). This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki . Author: Maarten Demeyer Year: 2014 Copyright: Public Domain as in CC0 Contents ¶ A Quick Recap Data types Lists Functions Objects Numpy Why we need Numpy The ndarray data type shape and dtype Indexing and slicing Filling and manipulating arrays A few useful functions A small exercise A bit harder: The Gabor Boolean indexing Vectorizing a simulation PIL: the Python Imaging Library Loading and showing images Resizing, rotating, cropping and converting Advanced Saving Exercise Matplotlib Quick plots Saving to a file Visualizing arrays Multi-panel figures Exercise: Function plots Finer figure control Exercise: Add regression lines Scipy Statistics Fast Fourier Transform A Quick Recap ¶ Data types ¶ Depending on what kind of values you want to store, Python variables can be of different data types. For instance: In [ ]: my_int = 5 print my_int , type ( my_int ) my_float = 5.0 print my_float , type ( my_float ) my_boolean = False print my_boolean , type ( my_boolean ) my_string = 'hello' print my_string , type ( my_string ) Lists ¶ One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed. In [ ]: my_list = [ my_int , my_float , my_boolean , my_string ] print type ( my_list ) for element in my_list : print type ( element ) To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero . Slices do not include the last element . 
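A quick sketch of a related detail used later in this notebook (illustrative values; written with parenthesized print calls, which also work in Python 2.7): negative indices count from the end, and slices still exclude their stop position.

```python
my_list = ['a', 'b', 'c', 'd']

# -1 is the last element, -2 the second-to-last
print(my_list[-1])     # 'd'

# A slice excludes its stop position, so [1:-1] drops the first and last elements
print(my_list[1:-1])   # ['b', 'c']
```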
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
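Returning to the earlier question of what + and * mean on lists, here is a small sketch (with made-up values): on lists these operators concatenate and repeat the sequence, while on Numpy arrays the very same operators act elementwise.

```python
import numpy as np

lst = [1.0, 2.0, 3.0]
# On lists, + concatenates and * repeats the whole sequence
print(lst + lst)    # [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
print(lst * 2)      # the same six-element list

# On Numpy arrays, the same operators work element by element
arr = np.array(lst)
print(arr + arr)    # [2. 4. 6.]
print(arr * 2)      # [2. 4. 6.]
```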
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. This implies that the shape tuple lists the dimensions from the outermost (rows) to the innermost (layers). The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np . 
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # Similar to np.array(range(1, 16, 3)), except that np.arange also accepts floats arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, in 3 steps # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np . 
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np . 
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works in radians, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely. 
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use the elementwise boolean operators & (and), | (or), ^ (xor) and ~ (not); note that Python's own and , or and not keywords do not work elementwise on arrays. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) . 
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. 
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is however still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename. 
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
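Before attempting the exercise, it may help to see on a toy array why the two conversions differ. A minimal numpy-only sketch: the 1x2-pixel array is made up for illustration, and the 0.299/0.587/0.114 weights are the ITU-R 601 luma weights that PIL documents for its 'L' conversion.

```python
import numpy as np

# A tiny 'image': one pure red and one pure green pixel (made-up data)
arr = np.array([[[255., 0., 0.], [0., 255., 0.]]])

# Simple averaging of the three layers
avg = np.mean(arr, -1)

# Weighted average, as PIL's 'L' conversion is documented to do
weights = np.array([0.299, 0.587, 0.114])
wavg = np.dot(arr, weights)

# The two methods disagree most for saturated colors
diff = avg - wavg
print(avg)   # both pixels average to 85
print(wavg)  # red weighs less than the plain average, green weighs more
print(diff)
```

In the exercise itself, positive values of such a difference would then go into the red channel, and negative values into the green channel.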
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () A full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib automatically decides for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. # (note that pad_inches takes a single float, not a tuple) plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are rescaled to span the full colormap range (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1 # Do a t-test that these have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskall-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here . 
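As a quick taste of those further tests, here is a sketch using a test for normality, stats.normaltest (the D'Agostino-Pearson test); the sample size and random seed are arbitrary choices for illustration.

```python
import numpy as np
import scipy.stats as stats

np.random.seed(0)

# A normally distributed sample: normaltest should find no evidence against H0
normal_data = stats.norm.rvs(0, 1, size=500)
k2, p_norm = stats.normaltest(normal_data)

# A uniformly distributed sample: normaltest should clearly reject H0
uniform_data = np.random.rand(500)
k2, p_unif = stats.normaltest(uniform_data)

print(p_norm)  # p-value for the normal sample
print(p_unif)  # p-value for the uniform sample
```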
For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft , but SciPy has its own set of functions as well in scipy.fftpack . Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. In [ ]: import numpy as np import scipy.fftpack as fft # The original data: a step function data = np . zeros ( 200 , dtype = 'float' ) data [ 25 : 100 ] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft . fft ( data ) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be conjugate-symmetric: the components at negative frequencies mirror those at the corresponding positive frequencies
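For real input data the FFT output is conjugate-symmetric, which is why the second half of the spectrum carries no extra information. A sketch of this property, using the numpy.fft equivalent mentioned above on the same step function:

```python
import numpy as np

# The same step function as in the code cell above
data = np.zeros(200, dtype='float')
data[25:100] = 1

res = np.fft.fft(data)

# For real input, component k and component n-k are complex conjugates
print(res[1], np.conj(res[-1]))

# Consequently, the inverse transform recovers the original real signal,
# with only numerical noise in the imaginary part
back = np.fft.ifft(res)
print(np.max(np.abs(back.real - data)))
print(np.max(np.abs(back.imag)))
```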
Scientific Python: Transitioning from MATLAB to Python ¶ Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium). This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki . Author: Maarten Demeyer Year: 2014 Copyright: Public Domain as in CC0 Contents ¶ A Quick Recap Data types Lists Functions Objects Numpy Why we need Numpy The ndarray data type shape and dtype Indexing and slicing Filling and manipulating arrays A few useful functions A small exercise A bit harder: The Gabor Boolean indexing Vectorizing a simulation PIL: the Python Imaging Library Loading and showing images Resizing, rotating, cropping and converting Advanced Saving Exercise Matplotlib Quick plots Saving to a file Visualizing arrays Multi-panel figures Exercise: Function plots Finer figure control Exercise: Add regression lines Scipy Statistics Fast Fourier Transform A Quick Recap ¶ Data types ¶ Depending on what kind of values you want to store, Python variables can be of different data types. For instance: In [ ]: my_int = 5 print my_int , type ( my_int ) my_float = 5.0 print my_float , type ( my_float ) my_boolean = False print my_boolean , type ( my_boolean ) my_string = 'hello' print my_string , type ( my_string ) Lists ¶ One useful data type is the list, which stores an ordered , mutable sequence of any data type , even mixed In [ ]: my_list = [ my_int , my_float , my_boolean , my_string ] print type ( my_list ) for element in my_list : print type ( element ) To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero . Slices do not include the last element .
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
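To make concrete what these elementwise operations buy you, here is the BMI computation written out with Numpy arrays; it is exactly the one-liner that the "MATLAB users would expect" cell above wished for.

```python
import numpy as np

subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# Elementwise operations now work exactly as a MATLAB user would expect
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = np.mean(subj_bmi)

print(subj_bmi)
print(mean_bmi)
```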
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now, is three layers of two-by-two matrices. Not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple. The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np . 
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays mustn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is equivalent to np.array(range(1.,16.,3)) arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, in 3 steps # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np . 
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimension arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating arrays allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np .
hstack (( arr0 , arr1 )) print arr # dstack() concatenates 2D arrays into 3D arrays arr = np . dstack (( arr0 , arr1 )) print arr In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions # They take a second argument: how do you want to split arr = np . random . rand ( 4 , 4 ) print arr print '--' # Splitting into equal parts arr0 , arr1 = np . hsplit ( arr , 2 ) print arr0 print arr1 print '--' # Or, specify exact split points arr0 , arr1 , arr2 = np . hsplit ( arr ,( 1 , 2 )) print arr0 print arr1 print arr2 Finally, we can easily reshape and transpose arrays. In [ ]: arr0 = np . arange ( 10 ) print arr0 print '--' # 'reshape' does exactly what you would expect # Make sure though that the total number of elements remains the same arr = np . reshape ( arr0 ,( 5 , 2 )) print arr # You can also leave one dimension blank by using -1 as a value # Numpy will then compute for you how long this dimension should be arr = np . reshape ( arr0 ,( - 1 , 5 )) print arr print '--' # 'transpose' allows you to switch around dimensions # A tuple specifies the new order of dimensions arr = np . transpose ( arr ,( 1 , 0 )) print arr # For simply transposing rows and columns, there is the short-hand form .T arr = arr . T print arr print '--' # 'flatten' creates a 1D array out of everything arr = arr . flatten () print arr Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays? In [ ]: # EXERCISE 4: Create your own meshgrid3d function # Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows # Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays # ...do not use the np.meshgrid() function def meshgrid3d ( xvec , yvec ): # fill in! xvec = np . arange ( 10 ) yvec = np .
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily, numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works with radians, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a Google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely.
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that, given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is with boolean operations: and, or, xor, not. Note that Python's own and , or and not keywords do not work elementwise on arrays; use the short-hand operator forms shown below instead. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) .
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurrence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, we find images a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is, however, still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
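If you do not have a color image at hand, you can also generate a stand-in yourself. This is a minimal sketch (it uses Image.fromarray, which is only introduced further below, and a random-noise image is of course no substitute for a real photograph):

```python
import numpy as np
from PIL import Image

# Random RGB noise: 300 rows x 400 columns x 3 color layers,
# with values 0-255 stored as unsigned 8-bit integers
arr = np.random.randint(0, 256, (300, 400, 3)).astype('uint8')

# Convert the array to a PIL image and save it under the
# filename that the example code below expects
im = Image.fromarray(arr, mode='RGB')
im.save('python.jpg')
```

Note that PIL reports the size as (width, height), so this image is 400x300 according to im.size, even though the array shape was (300, 400, 3).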
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
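Before starting, here is a minimal sketch of the boolean-indexing pattern the hint refers to, applied to a hypothetical toy 'difference' array (the names diff and out are purely illustrative, not part of the exercise):

```python
import numpy as np

# Toy difference array: positive where one method is more luminant,
# negative where it is less luminant
diff = np.array([[10., -20.],
                 [ 0.,   5.]])

# An RGB output image, initially all black
out = np.zeros((2, 2, 3))

# Boolean masks select which pixels to color
neg = diff < 0
pos = diff > 0

# The red layer gets the size of the negative differences,
# the green layer the size of the positive ones
out[:, :, 0][neg] = -diff[neg]
out[:, :, 1][pos] = diff[pos]
```

The same pattern, applied to the difference between the two grayscale conversions and scaled to the 0-255 range, covers the core of the exercise.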
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms, and bar charts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However, we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt .
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow(). In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the value range is rescaled to span the full colormap (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=\sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=\sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1 # Do a t-test that these have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskall-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here . 
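The one-sample t statistic computed by stats.ttest_1samp above follows a short closed form, $t = (\bar{x} - \mu_0) / (s / \sqrt{n})$, with $s$ the sample standard deviation. A minimal numpy-only sketch, useful as a sanity check against SciPy's result; the function name t_one_sample is my own, not a SciPy API, and the snippet is written in Python 3 (print as a function), unlike the Python 2 prints in this notebook:

```python
import numpy as np

def t_one_sample(data, mu0):
    # t = (mean - mu0) / (s / sqrt(n)), with s the sample standard
    # deviation computed with ddof=1 (the n-1 denominator)
    data = np.asarray(data, dtype=float)
    n = data.size
    return (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))

# A sample that is symmetric around the hypothesized mean gives t = 0
print(t_one_sample([1.0, 2.0, 3.0, 4.0, 5.0], 3.0))  # 0.0
```

The t statistic alone does not give you the p-value; for that you still need the t distribution's CDF, which is exactly what scipy.stats provides.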
For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has a FFT package, numpy.fft , but SciPy has its own set of functions as well in scipy.fftpack . Both are very similar, you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. In [ ]: import numpy as np import scipy.fftpack as fft # The original data: a step function data = np . zeros ( 200 , dtype = 'float' ) data [ 25 : 100 ] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft . fft ( data ) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be
PHP: mcrypt_get_iv_size - Manual

mcrypt_get_iv_size

(PHP 4 >= 4.0.2, PHP 5, PHP 7 < 7.2.0, PECL mcrypt >= 1.0.0)

mcrypt_get_iv_size — Returns the size of the IV belonging to a specific cipher/mode combination

Warning: This function has been DEPRECATED as of PHP 7.1.0 and REMOVED as of PHP 7.2.0. Relying on this function is highly discouraged.

Description

mcrypt_get_iv_size(string $cipher, string $mode): int

Gets the size of the IV belonging to a specific cipher/mode combination. It is more useful to use the mcrypt_enc_get_iv_size() function, as this uses the resource returned by mcrypt_module_open().

Parameters

cipher: One of the MCRYPT_ciphername constants, or the name of the algorithm as string.

mode: One of the MCRYPT_MODE_modename constants, or one of the following strings: "ecb", "cbc", "cfb", "ofb", "nofb" or "stream". The IV is ignored in ECB mode as this mode does not require it. You will need to have the same IV (think: starting point) both at encryption and decryption stages, otherwise your encryption will fail.

Return Values

Returns the size of the Initialization Vector (IV) in bytes. On error the function returns false. If the IV is ignored in the specified cipher/mode combination, zero is returned.

Examples

Example #1 mcrypt_get_iv_size() Example

<?php
echo mcrypt_get_iv_size(MCRYPT_CAST_256, MCRYPT_MODE_CFB) . "\n";
echo mcrypt_get_iv_size('des', 'ecb') . "\n";
?>

See Also

mcrypt_get_block_size() - Gets the block size of the specified cipher
mcrypt_enc_get_iv_size() - Returns the size of the IV of the opened algorithm
mcrypt_create_iv() - Creates an initialization vector (IV) from a random source

Copyright © 2001-2026 The PHP Documentation Group
Scientific Python: Transitioning from MATLAB to Python ¶

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents ¶

- A Quick Recap: Data types; Lists; Functions; Objects
- Numpy: Why we need Numpy; The ndarray data type; shape and dtype; Indexing and slicing; Filling and manipulating arrays; A few useful functions; A small exercise; A bit harder: The Gabor; Boolean indexing; Vectorizing a simulation
- PIL: the Python Imaging Library: Loading and showing images; Resizing, rotating, cropping and converting; Advanced; Saving; Exercise
- Matplotlib: Quick plots; Saving to a file; Visualizing arrays; Multi-panel figures; Exercise: Function plots; Finer figure control; Exercise: Add regression lines
- Scipy: Statistics; Fast Fourier Transform

A Quick Recap ¶

Data types ¶

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]:
my_int = 5
print my_int, type(my_int)

my_float = 5.0
print my_float, type(my_float)

my_boolean = False
print my_boolean, type(my_boolean)

my_string = 'hello'
print my_string, type(my_string)

Lists ¶

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
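To see the payoff of elementwise arithmetic once more, on deliberately different data than the exercise above (the temperature values are made up for illustration, and the snippet uses Python 3 print syntax rather than this notebook's Python 2 style):

```python
import numpy as np

# Elementwise arithmetic: one expression converts every value at once,
# where a plain list would need a loop or a list comprehension
celsius = np.array([0.0, 21.5, 37.0, 100.0])
fahrenheit = celsius * 9.0 / 5.0 + 32.0

print(fahrenheit)            # 32.0, 70.7, 98.6, 212.0
print(np.mean(fahrenheit))
```

The operators broadcast the scalars 9.0, 5.0 and 32.0 across the whole array, which is exactly what the list-based BMI attempt in Exercise 1 could not do.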
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now, is three layers of two-by-two matrices. Not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple. The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np . 
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays mustn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is equivalent to np.array(range(1.,16.,3)) arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, in 3 steps # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np . 
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr

In [ ]:
# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

In [ ]:
arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

In [ ]:
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    # fill in!

xvec = np.arange(10)
yvec = np.
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works with radians units, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a google search for 'numpy tangens', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter. 
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators : and, or, xor, not . In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) . 
T print pairs Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence ‘123’ or ‘111’. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops: In [ ]: import numpy as np # We will keep track of the sum of first occurence positions, # as well as the number of positions entered into this sum. # This way we can compute the mean. sum111 = 0. n111 = 0. sum123 = 0. n123 = 0. for sim in range ( 5000 ): # Keep track of how far along we are in finding a given pattern d111 = 0 d123 = 0 for throw in range ( 2000 ): # Throw a die die = np . random . randint ( 1 , 7 ) # 111 case if d111 == 3 : pass elif die == 1 and d111 == 0 : d111 = 1 elif die == 1 and d111 == 1 : d111 = 2 elif die == 1 and d111 == 2 : d111 = 3 sum111 = sum111 + throw n111 = n111 + 1 else : d111 = 0 # 123 case if d123 == 3 : pass elif die == 1 : d123 = 1 elif die == 2 and d123 == 1 : d123 = 2 elif die == 3 and d123 == 2 : d123 = 3 sum123 = sum123 + throw n123 = n123 + 1 else : d123 = 0 # Don't continue if both have been found if d111 == 3 and d123 == 3 : break # Compute the averages avg111 = sum111 / n111 avg123 = sum123 / n123 print avg111 , avg123 # ...can you spot the crucial difference between both patterns? However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops , and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. 
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing!

In [ ]:
# EXERCISE 7: Vectorize the above program
# You get these lines for free...
import numpy as np
throws = np.random.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)
# Find out where all the 111 and 123 sequences occur
find111 =
find123 =
# Then at what index they /first/ occur in each sequence
first111 =
first123 =
# Compute the average first occurrence location for both situations
avg111 =
avg123 =
# Print the result
print avg111, avg123

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die-throwing sequence once the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!

PIL: the Python Imaging Library ¶

As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing Toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which an excellent documentation can be found here. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

In [ ]:
from PIL import Image

Loading and showing images ¶

The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
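If you have no suitable image at hand, you can also generate a random stand-in first. This is just a sketch, assuming Pillow and Numpy are installed; it reuses the tutorial's 'python.jpg' filename so the cells below work unchanged.

```python
import numpy as np
from PIL import Image

# A 300x400-pixel array of random uint8 RGB values (rows x columns x channels)
arr = (np.random.rand(300, 400, 3) * 255).astype('uint8')

# Save it as a JPEG stand-in for the examples below
Image.fromarray(arr, mode='RGB').save('python.jpg')

# PIL reports size as (width, height)
print(Image.open('python.jpg').size)  # (400, 300)
```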
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
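One building block for the "maximize the contrast" extra is a generic min-max rescaling to the full 0-255 range. The sketch below is not the exercise solution, just the rescaling step in isolation; it assumes the input array is not constant (otherwise the division is zero-by-zero).

```python
import numpy as np

def stretch_contrast(arr):
    # Linearly rescale a (non-constant) array so its values span 0-255 exactly
    arr = arr.astype('float')
    return (arr - arr.min()) / (arr.max() - arr.min()) * 255

vals = np.array([10., 20., 60.])
print(stretch_contrast(vals))  # values 0, 51 and 255
```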
In [ ]:
# EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB
# Display pixels where the average is LESS luminant in shades of red, and where it is MORE luminant in shades of green
# The luminance of these colors should correspond to the size of the difference
#
# Extra 1: Maximize the overall contrast in your image
#
# Extra 2: Save as three PNG files, of different sizes (large, medium, small)

Matplotlib ¶

While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module.

Quick plots ¶

Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply.

In [ ]:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# As data for our plots, we will use the pixel values of the image
# Open image, convert to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')
# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

In [ ]:
# QUICKPLOT 1: Correlation of luminances in the image
# This works if you want to be very quick:
# (xb means blue crosses, .g are green dots)
plt.plot(R, B, 'xb')
plt.plot(R, G, '.g')

In [ ]:
# However we will take a slightly more disciplined approach here
# Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255
# Create a square figure
plt.figure(figsize=(5, 5))
# Plot both scatter clouds
# marker: self-explanatory
# linestyle: 'None' because we want no line
# color: RGB triplet with values 0-1
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))
# Make the axis scales equal, and name them
plt .
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params(length=0)
# Set a figure title
plt.title('RGB Color Channels', size=16)
# Show the figure
plt.show()

A full documentation of all these pyplot commands and options can be found here. If you use Matplotlib, you will be consulting this page a lot!

Saving to a file ¶

Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in the case of this notebook, within the same code box. The reason for this is that Matplotlib automatically decides for you which plot commands belong to the same figure, based on these criteria.

In [ ]:
# So, copy-paste this line into the box above, before the plt.show() command
plt.savefig('bar.png')
# There are some further formatting options possible, e.g.
# (note that pad_inches takes a single number)
plt.savefig('bar.svg', dpi=300, bbox_inches='tight', pad_inches=1, facecolor=(0.8, 0.8, 0.8))

Visualizing arrays ¶

Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow().

In [ ]:
# A simple grayscale luminance map
# cmap: colormap used to display the values
plt.figure(figsize=(5, 5))
plt.imshow(np.mean(arr, 2), cmap='gray')
plt.show()

# Importantly, and contrary to PIL, imshow luminances are by default relative
# That is, the values are first rescaled to span the full colormap range (maximum contrast)
# Moreover, colormaps other than grayscale can be used
plt.figure(figsize=(5, 5))
plt.imshow(np.mean(arr, 2) + 100, cmap='jet')  # or hot, hsv, cool,...
plt.show()
# as you can see, adding 100 didn't make a difference here

Multi-panel figures ¶

As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( sin ( x_an ), 2 )),( x_an , sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object. 
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1
# Do an independent-samples t-test for equal means
t, p = stats.ttest_ind(data, data2)
print p

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions,
# given a constant n and an increasing true effect size
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []
# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2 * eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)
# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions with increasing sd's,
# then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)
# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)
# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i] * np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i] * np.array([0, 1, 0]))
# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as a Kruskal-Wallis test, a Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here.
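As a quick taste of two of these further tests, the sketch below runs a Kruskal-Wallis H-test (a non-parametric alternative to the one-way ANOVA) and a Shapiro-Wilk normality test on simulated samples; the seed and effect size are arbitrary choices for illustration.

```python
import numpy as np
import scipy.stats as stats

rng = np.random.RandomState(0)  # fixed seed for reproducibility
a = rng.normal(0.0, 1.0, 200)
b = rng.normal(0.5, 1.0, 200)

# Kruskal-Wallis H-test on the two samples
h, p = stats.kruskal(a, b)
print(h, p)

# Shapiro-Wilk test for normality of a single sample
w, p_norm = stats.shapiro(a)
print(w, p_norm)
```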
For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform ¶

FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well in scipy.fftpack. Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory: any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1
# Decompose into sinusoidal components
# The result is a series of complex numbers as long as the data itself
res = fft.fft(data)
# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be
Request for Comments: Adding hash_pbkdf2 Function
Version: 1.0
Date: 2012-06-13
Author: Anthony Ferrara ircmaxell@php.net
Status: Implemented
First Published at: http://wiki.php.net/rfc/hash_pbkdf2

This RFC proposes adding a hash_pbkdf2 function to the hash package.

Introduction

The purpose of this RFC is to add the PBKDF2 algorithm to the available hashing functions as a C implementation.

Why do we need PBKDF2?

PBKDF2 is defined in RFC 2898 as a method for implementing password-based cryptographic needs. These needs can include password storage, password derivation into a key (for encryption) or secure signatures. Additionally, it's NIST-recommended for password storage. Adding a core implementation of the PBKDF2 algorithm will enable PHP projects to utilize a fast implementation of the algorithm, putting them on more level ground with attackers. Since the C implementation is more efficient, more rounds can be computed for the same computational cost compared to a PHP-land implementation. This enables higher iteration counts to be used, providing more security with less impact on the overall performance of the application.

Projects and Software That Currently Use PBKDF2

WPA and WPA2 for key derivation from password
OpenDocument encryption (OpenOffice.org)
WinZip AES encryption
1Password
LastPass
Apple iOS
Blackberry Backup Encryption
Django Python Framework

Recommended Parameters For PBKDF2

$algo

The way hash_pbkdf2 is written, any currently supported hash_algos() algorithm can be used as the base for the algorithm. This means that it's up to the developer to choose the appropriate algorithm to use when using the function. Here are a few of the popular algorithms and some recommendations around them. It should be noted that any cryptographic hash algorithm that's supported can be used successfully with PBKDF2 (CRC32 is *not* cryptographic, therefore it should not be used).
SHA512 - This is currently one of the strongest algorithms available in PHP. It makes a good primitive for *hash_pbkdf2*.
SHA256 - This is also plenty strong enough for use as the basis for PBKDF2.

A note on other popular algorithms: SHA1 and MD5 - Both are actually strong enough for effective use in PBKDF2. The reason is that the known attack vectors against these algorithms require knowledge of the input string being hashed; therefore, an iterated algorithm such as PBKDF2 will be immune to the known attack vectors. That means it's OK to use them for this task. With that said, the recommended approach is to use SHA512 or SHA256 instead, as the base algorithms are stronger. But it's not necessarily *bad* to use SHA1 or MD5.

$salt

The salt parameter should be a random string containing at least 64 bits of entropy. That means: when generated from a function like *mcrypt_create_iv*, at least 8 bytes long; for salts that consist only of *a-zA-Z0-9* characters (or are base64 encoded), at least 11 characters long. It should be generated randomly for each password that's hashed, and stored alongside the generated key.

$iterations

The iterations parameter provides the ability to *tune* the algorithm for different servers and needs. For most web uses, a minimum value of *1000* is recommended. However, as hardware varies greatly, testing should be done to find an iteration count that yields a function runtime of between 0.1 and 0.5 seconds (depending again on the application). On higher-end servers, this can be as much as 20,000 to 50,000 iterations (also depending on the hash algo used). It's better to use the highest iteration count possible, as it will only increase the resistance to brute forcing.

$length

The length parameter indicates the length of the returned key. The default value for length is the length of the hash algo's output. However, this can be increased or decreased as necessary.
For example, if you're using PBKDF2 to generate a password-based key for use in an encryption routine such as RIJNDAEL 256, which expects a 256 bit key, you would want to pass the length parameter as 256/8 (to get the byte length), and set *$raw_output* to *true*. $raw_output This parameter behaves just like the other *hash_* functions. If set to *true*, the function will return a binary string (chr 0-255). If set to *false*, the function will hex encode the result prior to returning it. Example Let's say you wanted to encrypt a file using a password. The password shouldn't be applied directly to the encryption function, but should be derived first. encryption.php <?php $password = "foo" ; $data = "testing this out" ; $salt = mcrypt_create_iv ( 16 , MCRYPT_DEV_URANDOM ) ; $key = hash_pbkdf2 ( "sha512" , $password , $salt , 5000 , 16 , true ) ; // $key will be full-byte 0-255 data   $iv = mcrypt_create_iv ( mcrypt_get_iv_size ( MCRYPT_RIJNDAEL_128 , MCRYPT_MODE_CBC ) , MCRYPT_DEV_URANDOM ) ;   $ciphertext = mcrypt_encrypt ( MCRYPT_RIJNDAEL_128 , $key , $data , MCRYPT_MODE_CBC , $iv ) ; ?> Or for storing passwords (BCrypt is recommended, but there are use-cases for PBKDF2, such as when NIST compliance is mandated): password.php <?php $password = "foo" ; $salt = mcrypt_create_iv ( 16 , MCRYPT_DEV_URANDOM ) ; $hash = hash_pbkdf2 ( "sha512" , $password , $salt , 5000 , 32 ) ;   // $hash will be a hex encoded string ?> Proposal and Patch The proposal is to add a hash_pbkdf2() function to the hash extension in core. The proposed function has a signature: string hash_pbkdf2(string algo, string password, string salt, int iterations [, int length = 0, bool raw_output = false]) The patch is available as a pull request to trunk. This RFC intends to add this functionality to master (5.5) only. Vote Vote begins on 2012/07/02 and ends on 2012/07/09. This vote is to include the new function in master only (5.5). rfc/hash_pbkdf2 Real name Yes? No? 
dragoonis
hradtke
ircmaxell
kriscraig
lynch
nikic
rasmus
shm
stas

Final result: Yes: 9, No: 0. This poll has been closed.

More about PBKDF2

* RFC2898
* WikiPedia
* NIST Recommendation - PDF
* A Reference Implementation In PHP

Changelog

* 0.1 - Initial Version
* 0.2 - Proposed
* 0.3 - Added Parameter Information
* 0.4 - Reworded to target master only, removing 5.4 section
* 1.0 - Moving to Accepted state
# Scientific Python: Transitioning from MATLAB to Python

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

## Contents

- A Quick Recap: Data types; Lists; Functions; Objects
- Numpy: Why we need Numpy; The ndarray data type; shape and dtype; Indexing and slicing; Filling and manipulating arrays; A few useful functions; A small exercise; A bit harder: The Gabor; Boolean indexing; Vectorizing a simulation
- PIL: the Python Imaging Library: Loading and showing images; Resizing, rotating, cropping and converting; Advanced; Saving; Exercise
- Matplotlib: Quick plots; Saving to a file; Visualizing arrays; Multi-panel figures; Exercise: Function plots; Finer figure control; Exercise: Add regression lines
- Scipy: Statistics; Fast Fourier Transform

## A Quick Recap

### Data types

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

```python
my_int = 5
print my_int, type(my_int)
my_float = 5.0
print my_float, type(my_float)
my_boolean = False
print my_boolean, type(my_boolean)
my_string = 'hello'
print my_string, type(my_string)
```

### Lists

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

```python
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)
```

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
```python
print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)
```

### Functions

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None.

```python
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x * b) + c

print regress(5)         # Only required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument
```

```python
# Function without return argument
def divisible(a, b):
    if a % b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9, 3)
res = divisible(9, 2)
print res
```

```python
# Function with multiple return arguments
def add_diff(a, b):
    return a + b, a - b

# Assigned as a tuple
res = add_diff(5, 3)
print res

# Directly unpacked to two variables
a, d = add_diff(5, 3)
print a
print d
```

### Objects

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

```python
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list
```

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly; you manipulate the list through its member functions.

The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

```python
return_arg = my_list.append('another one')
print return_arg
print my_list
```

```python
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string
```

Do you remember why list functions are in-place, while string functions are not?

## Numpy

### Why we need Numpy

While lists are great, they are not very suitable for scientific computing. Consider this example:

```python
subj_length = [180.0, 165.0, 190.0, 172.0, 156.0]
subj_weight = [75.0, 60.0, 83.0, 85.0, 62.0]
subj_bmi = []

# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2
```

Clearly, this is clumsy. MATLAB users would expect something like this to work:

```python
subj_bmi = subj_weight / (subj_length / 100)**2
mean_bmi = mean(subj_bmi)
```

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

### The ndarray data type

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

```python
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators
```

Numpy is a very large package that we can't possibly cover completely. But we will cover enough to get you started.
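For reference, the elementwise computation the exercise above is aiming at looks like this. This is one possible solution sketch (written with Python 3's print function, unlike the Python 2 cells of this notebook):

```python
import numpy as np

# Elementwise arithmetic on arrays: no loop needed
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

subj_bmi = subj_weight / (subj_length / 100) ** 2  # BMI per subject
mean_bmi = np.mean(subj_bmi)                       # average across subjects

print(subj_bmi)
print(mean_bmi)
```

The division, exponentiation, and mean all operate on every element at once, which is both shorter and much faster than an explicit Python loop.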
### shape and dtype

The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

```python
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1, 2, 3], [4, 5, 6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])
```

```python
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions
```

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays:

```python
arr3d = np.array([[[1, 2, 3], [4, 5, 6]],
                  [[7, 8, 9], [10, 11, 12]]])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim
```

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. In other words, the shape tuple lists dimensions from the outermost (rows) to the innermost (layers) dimension.

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

```python
# The type of a numpy array is always... numpy.ndarray
arr = np.array([[1, 2, 3], [4, 5, 6]])
print type(arr)

# So, let's do a computation
print arr / 2

# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype
```

```python
# And how do we fix this?
arr = arr.astype('float')  # Note: this is not an in-place function!
print arr.dtype
print arr / 2
```

```python
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1, 2, 3], [4, 5, 6]], dtype='float')
print arr.dtype
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
print arr.dtype
```

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

### Indexing and slicing

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention: Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.

```python
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Indexing and slicing
print arr[0, 0]  # or: arr[0][0]
print arr[:-1, 0]
```

```python
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:, 0] * arr[:, 1]
print arr[0, :] * arr[1, :]
# Note that you could never slice across rows like this in a nested list!
```

```python
# This doesn't work
# print arr[1:,0] * arr[:,1]
# And here's why:
print arr[1:, 0].shape, arr[:, 1].shape
```

```python
# This however does work. You can always use scalars as the other operand.
print arr[:, 0] * arr[2, 2]
# Or, similarly:
print arr[:, 0] * 9.
```

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop.
```python
# EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix
# Do not use a for-loop, and also do not use the np.mean() function for now.
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')
```

This works, but it is still a bit clumsy. We will learn more efficient methods below.

### Filling and manipulating arrays

Arrays don't always need to be filled by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB.

```python
# 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3, 2)) * 5
print arr

# Sequence from 1 to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# (similar to np.array(range(1, 16, 3)), but supporting floats)
arr = np.arange(1., 16., 3)
print arr

# Sequence from 1 to AND including 16, in 3 steps
# This always returns an array with dtype float
arr = np.linspace(1, 16, 3)
print arr
```

```python
# Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5, 2)
print arr

# Array of random integers from 0 to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0, 10, (5, 2))
print arr
```

Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D).

```python
arr0 = np.array([[1, 2], [3, 4]])
print arr0

# 'repeat' replicates elements along a given axis
# Each element is replicated directly after itself
arr = np.repeat(arr0, 3, axis=-1)
print arr

# We may even specify the number of times each element should be repeated
# The length of the tuple should correspond to the dimension length
arr = np.repeat(arr0, (2, 4), axis=0)
print arr
```

```python
print arr0

# 'tile' replicates the array as a whole
# Use a tuple to specify the number of tilings along each dimension
arr = np.tile(arr0, (2, 4))
print arr
```

```python
# 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors,
# where each array contains the X or Y coordinates corresponding to a given pixel in an image
x = np.arange(10)
y = np.arange(5)
print x, y
arrx, arry = np.meshgrid(x, y)
print arrx
print arry
```

Concatenating allows you to make several arrays into one.

```python
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# 'concatenate' requires an axis to perform its operation on
# The original arrays should be put in a tuple
arr = np.concatenate((arr0, arr1), axis=0)
print arr  # as new rows
arr = np.concatenate((arr0, arr1), axis=1)
print arr  # as new columns
```

```python
# Suppose we want to create a 3-D matrix from them,
# we have to create them as being three-dimensional
# (what happens if you don't?)
arr0 = np.array([[[1], [2]], [[3], [4]]])
arr1 = np.array([[[5], [6]], [[7], [8]]])
print arr0.shape, arr1.shape
arr = np.concatenate((arr0, arr1), axis=2)
print arr
```

```python
# hstack, vstack, and dstack are short-hand functions
# which will automatically create these 'missing' dimensions
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# vstack() concatenates rows
arr = np.vstack((arr0, arr1))
print arr

# hstack() concatenates columns
arr = np.hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr
```

```python
# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2
```

Finally, we can easily reshape and transpose arrays.

```python
arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr
```

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

```python
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    pass  # fill in!

xvec = np.arange(10)
yvec = np.arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:, :, 0]  # = first output of np.meshgrid()
print xy[:, :, 1]  # = second output of np.meshgrid()
```

### A few useful functions

We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

```python
arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort()
print arr  # in-place!!!
print res
res = np.random.shuffle(arr)
print arr  # in-place!!!
print res
```

```python
# Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr
mn = np.min(arr)
mx = np.max(arr)
print mn, mx
mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma
```

```python
# Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3, 5)
print arr2d
print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)
```

```python
# Trigonometric functions
# Note: Numpy works with radians, not degrees
arr = np.random.rand(5)
print arr
sn = np.sin(arr * 2 * np.pi)
cs = np.cos(arr * 2 * np.pi)
print sn
print cs
```

```python
# Exponents and logarithms
arr = np.random.rand(5)
print arr
xp = np.exp(arr)
print xp
print np.log(xp)
```

```python
# Rounding
arr = np.random.rand(5)
print arr
print arr * 5
print np.round(arr * 5)
print np.floor(arr * 5)
print np.ceil(arr * 5)
```

A complete list of all numpy functions can be found at the Numpy website. Or, a google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.

### A small exercise

Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter.
Use a concatenation function and a statistical function to obtain the same thing!

```python
# EXERCISE 5: Make a better version of Exercise 3 with what you've just learned
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# What we had:
print np.array([(arr[:, 0] + arr[:, 1] + arr[:, 2]) / 3,
                (arr[0, :] + arr[1, :] + arr[2, :]) / 3])

# Now the new version:
```

### A bit harder: The Gabor

A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by:

$grating = \sin(xf)$

where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by:

$gaussian = e^{-(x^2+y^2)/2}$

where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals:

$gabor = grating \times gaussian$

To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast). Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result.

```python
# EXERCISE 6: Create a Gabor patch of 100 by 100 pixels
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Define the 1D coordinate values
# Tip: use 100 equally spaced values between -np.pi and np.pi

# Step 2: Create the 2D x and y coordinate arrays
# Tip: use np.meshgrid()

# Step 3: Create the grating
# Tip: Use a frequency of 10

# Step 4: Create the Gaussian
# Tip: use np.exp() to compute a power of e

# Step 5: Create the Gabor

# Visualize your result
# (we will discuss how this works later)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.imshow(grating, cmap='gray')
plt.subplot(132)
plt.imshow(gaussian, cmap='gray')
plt.subplot(133)
plt.imshow(gabor, cmap='gray')
plt.show()
```

### Boolean indexing

The dtype of a Numpy array can also be boolean, that is, True or False. It is then particularly convenient that, given an array of the same shape, these boolean arrays can be used to index other arrays.

```python
# Check whether each element of a 2x2 array is greater than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr > 0.5
print res
print '--'

# Analogously, check it against each element of a second 2x2 array
arr2 = np.random.rand(2, 2)
print arr2
res = arr > arr2
print res
```

```python
# We can use these boolean arrays as indices into other arrays!
# Add 0.5 to any element smaller than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr < 0.5
print res
arr[res] = arr[res] + 0.5
print arr

# Or, shorter:
arr[arr < 0.5] = arr[arr < 0.5] + 0.5
# Or, even shorter:
arr[arr < 0.5] += 0.5
```

While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators: and, or, xor, not.

```python
arr = np.array([[1, 2, 3], [4, 5, 6]])

# The short-hand forms for elementwise boolean operators are: & | ~ ^
# Use parentheses around such expressions
res = (arr < 4) & (arr > 1)
print res
print '--'
res = (arr < 2) | (arr == 5)
print res
print '--'
res = (arr > 3) & ~(arr == 6)
print res
print '--'
res = (arr > 3) ^ (arr < 5)
print res
```

```python
# To convert boolean indices to normal integer indices, use the 'nonzero' function
print res
print np.nonzero(res)
print '--'

# Separate row and column indices
print np.nonzero(res)[0]
print np.nonzero(res)[1]
print '--'

# Or stack and transpose them to get index pairs
pairs = np.vstack(np.nonzero(res)).T
print pairs
```

### Vectorizing a simulation

Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops:

```python
import numpy as np

# We will keep track of the sum of first occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    for throw in range(2000):
        # Throw a die
        die = np.random.randint(1, 7)

        # 111 case
        if d111 == 3:
            pass
        elif die == 1 and d111 == 0:
            d111 = 1
        elif die == 1 and d111 == 1:
            d111 = 2
        elif die == 1 and d111 == 2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0

        # 123 case
        if d123 == 3:
            pass
        elif die == 1:
            d123 = 1
        elif die == 2 and d123 == 1:
            d123 = 2
        elif die == 3 and d123 == 2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0

        # Don't continue if both have been found
        if d111 == 3 and d123 == 3:
            break

# Compute the averages
avg111 = sum111 / n111
avg123 = sum123 / n123
print avg111, avg123
# ...can you spot the crucial difference between both patterns?
```

However this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax; use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing!

```python
# EXERCISE 7: Vectorize the above program
# You get these lines for free...
import numpy as np
throws = np.random.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)

# Find out where all the 111 and 123 sequences occur
find111 =
find123 =

# Then at what index they /first/ occur in each sequence
first111 =
first123 =

# Compute the average first occurrence location for both situations
avg111 =
avg123 =

# Print the result
print avg111, avg123
```

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die-throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!

## PIL: the Python Imaging Library

As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which an excellent documentation can be found online. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

```python
from PIL import Image
```

### Loading and showing images

The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
```python
# Opening an image is simple enough:
# Construct an Image object with the filename as an argument
im = Image.open('python.jpg')

# It is now represented as an object of the 'JpegImageFile' type
print im

# There are some useful member variables we can inspect here
print im.format  # format in which the file was saved
print im.size    # pixel dimensions
print im.mode    # luminance/color model used

# We can even display it
# NOTE this is not perfect; meant for debugging
im.show()
```

If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. (Tkinter is a package to write graphical user interfaces in Python; we will not discuss it here.)

```python
# Alternative quick-show method
from Tkinter import Tk, Button
from PIL import ImageTk

def alt_show(im):
    win = Tk()
    tkimg = ImageTk.PhotoImage(im)
    Button(image=tkimg).pack()
    win.mainloop()

alt_show(im)
```

Once we have opened the image in PIL, we can convert it to a Numpy object.

```python
# We can convert PIL images to an ndarray!
arr = np.array(im)
print arr.dtype  # uint8 = unsigned 8-bit integer (values 0-255 only)
print arr.shape  # Why do we have three layers?

# Let's make it a float-type for doing computations
arr = arr.astype('float')
print arr.dtype

# This opens up unlimited possibilities for image processing!
# For instance, let's make this a grayscale image, and add white noise
max_noise = 50
arr = np.mean(arr, -1)
noise = (np.random.rand(arr.shape[0], arr.shape[1]) - 0.5) * 2
arr = arr + noise * max_noise

# Make sure we don't exceed the 0-255 limits of a uint8
arr[arr < 0] = 0
arr[arr > 255] = 255
```

The conversion back to PIL is easy as well.

```python
# When going back to PIL, it's a good idea to explicitly
# specify the right dtype and the mode,
# because automatic conversions might mess things up
arr = arr.astype('uint8')
imn = Image.fromarray(arr, mode='L')
print imn.format
print imn.size
print imn.mode  # L = greyscale
imn.show()  # or use alt_show() from above if show() doesn't work well for you

# Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255
# can be converted to an image object in this way
```

### Resizing, rotating, cropping and converting

The main operations of the PIL Image module you will probably use are its resizing and conversion capabilities.

```python
im = Image.open('python.jpg')

# Make the image smaller
ims = im.resize((800, 600))
ims.show()

# Or you could even make it larger
# The resample argument allows you to specify the method used
iml = im.resize((1280, 1024), resample=Image.BILINEAR)
iml.show()
```

```python
# Rotation is similar (unit=degrees)
imr = im.rotate(10, resample=Image.BILINEAR, expand=False)
imr.show()

# If we want to lose the black corners, we can crop (unit=pixels)
imr = imr.crop((100, 100, 924, 668))
imr.show()
```

```python
# 'convert' allows conversion between different color models
# The most important here is between 'L' (luminance) and 'RGB' (color)
imbw = im.convert('L')
imbw.show()
print imbw.mode

imrgb = imbw.convert('RGB')
imrgb.show()
print imrgb.mode

# Note that the grayscale conversion of PIL is more sophisticated
# than simply averaging the three layers in Numpy (it is a weighted average)
# Also note that the color information is effectively lost after converting to L
```

### Advanced

The ImageFilter module implements several types of filters to execute on any image. You can also define your own.

```python
from PIL import Image, ImageFilter
im = Image.open('python.jpg')
imbw = im.convert('L')

# Contour detection filter
imf = imbw.filter(ImageFilter.CONTOUR)
imf.show()

# Blurring filter
imf = imbw.filter(ImageFilter.GaussianBlur(radius=3))
imf.show()
```

Similarly, you can import the ImageDraw module to draw shapes and text onto an image.

```python
from PIL import Image, ImageDraw
im = Image.open('python.jpg')

# You need to attach a drawing object to the image first
imd = ImageDraw.Draw(im)

# Then you work on this object
imd.rectangle([10, 10, 100, 100], fill=(255, 0, 0))
imd.line([(200, 200), (200, 600)], width=10, fill=(0, 0, 255))
imd.text([500, 500], 'Python', fill=(0, 255, 0))

# The results are automatically applied to the Image object
im.show()
```

### Saving

Finally, you can of course save these image objects back to a file on the disk.

```python
# PIL will figure out the file type by the extension
im.save('python.bmp')

# There are also further options, like compression quality (0-100)
im.save('python_bad.jpg', quality=5)
```

### Exercise

We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing.

As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size.
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However, we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt .
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () A full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to span the full colormap range first (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
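To see why implicit figure tracking can bite, consider this small sketch. It assumes a non-interactive Matplotlib backend (Agg), and shows that pyplot commands silently go to the most recently created figure:

```python
import matplotlib
matplotlib.use('Agg')  # draw off-screen; no window needed
import matplotlib.pyplot as plt

fig_a = plt.figure()
fig_b = plt.figure()  # fig_b is now the 'current' figure

# This implicitly plots to fig_b, NOT fig_a
plt.plot([0, 1], [0, 1])

print(len(fig_a.axes))     # 0 - fig_a is still empty
print(len(fig_b.axes))     # 1
print(plt.gcf() is fig_b)  # True
```

Keeping explicit references like fig_a and fig_b, as the next section does, avoids this ambiguity entirely.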
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the current (last created) Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the add_subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes objects as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skip the fifth subplot, and create only the sixth ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig .
subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np . pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand ( 30 ) - 0.1 # Do a t-test of whether these two samples have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskal-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here .
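As a quick sketch of two of the tests just mentioned (the category counts below are invented for the example; only the chi-square outcome is deterministic):

```python
import numpy as np
import scipy.stats as stats

# A chi-square test of observed category counts against a uniform expectation
obs = np.array([18, 22, 20, 40])
chi2, p_chi = stats.chisquare(obs)
print(chi2)          # 12.32
print(p_chi < 0.05)  # True: the counts deviate significantly from uniform

# A test for normality (D'Agostino-Pearson) on uniform random data
# Any single run is random, so we only inspect the p-value itself
data = np.random.rand(500)
k2, p_norm = stats.normaltest(data)
print(0.0 <= p_norm <= 1.0)
```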
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft , but SciPy has its own set of functions as well in scipy.fftpack . Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. In [ ]: import numpy as np import scipy.fftpack as fft # The original data: a step function data = np . zeros ( 200 , dtype = 'float' ) data [ 25 : 100 ] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft . fft ( data ) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be conjugate-symmetric: # component k and component n-k are each other's complex conjugates
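As a quick check of this machinery, here is a sketch using numpy.fft (interchangeable with scipy.fftpack for this purpose, as noted above) that recovers the frequency of a pure sine wave from its spectrum:

```python
import numpy as np

n = 200
t = np.arange(n)

# A pure sine with 5 cycles over the 200 samples (frequency 5/200 = 0.025)
data = np.sin(2 * np.pi * 5 * t / n)

res = np.fft.fft(data)
freqs = np.fft.fftfreq(n)  # the implied frequency of each component

# The amplitude spectrum peaks at the sine's frequency
amp = np.abs(res)
peak = np.argmax(amp[:n // 2])  # look only at the positive frequencies
print(peak)         # 5
print(freqs[peak])  # 0.025

# For real input, component k and component n-k are complex conjugates
print(np.allclose(res[5], np.conj(res[-5])))  # True
```

np.fft.fftfreq() returns the same frequency ordering described above: 0 up to the Nyquist frequency, followed by the negative frequencies.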
Back to the main index Scientific Python: Transitioning from MATLAB to Python ¶ Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium). This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki . Author: Maarten Demeyer Year: 2014 Copyright: Public Domain as in CC0 Contents ¶ A Quick Recap Data types Lists Functions Objects Numpy Why we need Numpy The ndarray data type shape and dtype Indexing and slicing Filling and manipulating arrays A few useful functions A small exercise A bit harder: The Gabor Boolean indexing Vectorizing a simulation PIL: the Python Imaging Library Loading and showing images Resizing, rotating, cropping and converting Advanced Saving Exercise Matplotlib Quick plots Saving to a file Visualizing arrays Multi-panel figures Exercise: Function plots Finer figure control Exercise: Add regression lines Scipy Statistics Fast Fourier Transform A Quick Recap ¶ Data types ¶ Depending on what kind of values you want to store, Python variables can be of different data types. For instance: In [ ]: my_int = 5 print my_int , type ( my_int ) my_float = 5.0 print my_float , type ( my_float ) my_boolean = False print my_boolean , type ( my_boolean ) my_string = 'hello' print my_string , type ( my_string ) Lists ¶ One useful data type is the list, which stores an ordered , mutable sequence of any data type , even mixed In [ ]: my_list = [ my_int , my_float , my_boolean , my_string ] print type ( my_list ) for element in my_list : print type ( element ) To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero . Slices do not include the last element .
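To make the zero-based, end-exclusive conventions concrete, a tiny sketch on a fresh list (not the my_list used below):

```python
seq = [10, 20, 30, 40, 50]

# Indexing starts at zero
print(seq[0])     # 10

# The slice [1:3] includes positions 1 and 2, but NOT position 3
print(seq[1:3])   # [20, 30]

# Negative indices count from the end
print(seq[-1])    # 50
```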
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
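Returning to the earlier question of what + and * do for lists: they concatenate and repeat, rather than compute. A quick sketch of the contrast with ndarray:

```python
import numpy as np

lst = [1.0, 2.0, 3.0]

# On lists, + concatenates and * repeats
print(lst + lst)  # [1.0, 2.0, 3.0, 1.0, 2.0, 3.0]
print(lst * 2)    # the same six-element list

# On numpy arrays, the same operators work elementwise
arr = np.array(lst)
print(arr + arr)  # [2. 4. 6.]
print(arr * 2)    # [2. 4. 6.]
```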
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now, is three layers of two-by-two matrices. Not two layers of two-by-three matrices. In other words, the shape tuple lists the dimensions from the outermost (the rows) down to the innermost (here, the layers). The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np .
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix # Do not use a for-loop, and also do not use the np.mean() function for now. arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) This works, but it is still a bit clumsy. We will learn more efficient methods below. Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB. In [ ]: # 1-D array, filled with zeros arr = np . zeros ( 3 ) print arr # Multidimensional array of a given shape, filled with ones # This automatically allows you to fill arrays with /any/ value arr = np . ones (( 3 , 2 )) * 5 print arr # Sequence from 1 to AND NOT including 16, in steps of 3 # Note that using a float input makes the dtype a float as well # This is analogous to np.array(range(1, 16, 3)), except that range() cannot produce floats arr = np . arange ( 1. , 16. , 3 ) print arr # Sequence from 1 to AND including 16, as 3 evenly spaced values # This always returns an array with dtype float arr = np . linspace ( 1 , 16 , 3 ) print arr In [ ]: # Array of random numbers between 0 and 1, of a given shape # Note that the inputs here are separate integers, not a tuple arr = np . random . rand ( 5 , 2 ) print arr # Array of random integers from 0 to AND NOT including 10, of a given shape # Here the shape is defined as a tuple again arr = np . random . randint ( 0 , 10 ,( 5 , 2 )) print arr Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D). In [ ]: arr0 = np .
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr

In [ ]: # Their counterparts are the hsplit, vsplit, and dsplit functions
# They take a second argument: how do you want to split?
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

In [ ]: arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

In [ ]: # EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than two separate 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    # fill in!

xvec = np.arange(10)
yvec = np.
arange ( 5 ) xy = meshgrid3d ( xvec , yvec ) print xy print xy [:,:, 0 ] # = first output of np.meshgrid() print xy [:,:, 1 ] # = second output of np.meshgrid() A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions. In [ ]: arr = np . random . rand ( 5 ) print arr # Sorting and shuffling res = arr . sort () print arr # in-place!!! print res res = np . random . shuffle ( arr ) print arr # in-place!!! print res In [ ]: # Min, max, mean, standard deviation arr = np . random . rand ( 5 ) print arr mn = np . min ( arr ) mx = np . max ( arr ) print mn , mx mu = np . mean ( arr ) sigma = np . std ( arr ) print mu , sigma In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays arr2d = np . random . rand ( 3 , 5 ) print arr2d print np . mean ( arr2d , axis = 0 ) print np . mean ( arr2d , axis = 1 ) In [ ]: # Trigonometric functions # Note: Numpy works with radians units, not degrees arr = np . random . rand ( 5 ) print arr sn = np . sin ( arr * 2 * np . pi ) cs = np . cos ( arr * 2 * np . pi ) print sn print cs In [ ]: # Exponents and logarithms arr = np . random . rand ( 5 ) print arr xp = np . exp ( arr ) print xp print np . log ( xp ) In [ ]: # Rounding arr = np . random . rand ( 5 ) print arr print arr * 5 print np . round ( arr * 5 ) print np . floor ( arr * 5 ) print np . ceil ( arr * 5 ) A complete list of all numpy functions can be found at the Numpy website . Or, a google search for 'numpy tangens', 'numpy median' or similar will usually get you there as well. A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter. 
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators : and, or, xor, not . In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) . 
T print pairs

Vectorizing a simulation ¶

Numpy is excellent at making programs that involve iterative operations more efficient. This requires you to re-imagine the problem as an array of values, rather than as values that change with each loop iteration.

For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation than to work out an analytical solution. We could just use two nested for-loops:

In [ ]: import numpy as np

# We will keep track of the sum of first-occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    for throw in range(2000):
        # Throw a die
        die = np.random.randint(1, 7)

        # 111 case
        if d111 == 3:
            pass
        elif die == 1 and d111 == 0:
            d111 = 1
        elif die == 1 and d111 == 1:
            d111 = 2
        elif die == 1 and d111 == 2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0

        # 123 case
        if d123 == 3:
            pass
        elif die == 1:
            d123 = 1
        elif die == 2 and d123 == 1:
            d123 = 2
        elif die == 3 and d123 == 2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0

        # Don't continue if both have been found
        if d111 == 3 and d123 == 3:
            break

# Compute the averages
avg111 = sum111 / n111
avg123 = sum123 / n123
print avg111, avg123
# ...can you spot the crucial difference between both patterns?

However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
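A small demo of the argmax trick that the hint below relies on: on a boolean array, np.argmax() returns the index of the first True value, since True counts as 1 and argmax reports the first position of the maximum. (The arrays here are made up purely for illustration.)

```python
import numpy as np

# True wherever the value equals 3
arr = np.array([5, 2, 3, 1, 3])
mask = (arr == 3)

# argmax returns the index of the FIRST occurrence of the maximum;
# for a boolean array, that is the first True
first = np.argmax(mask)  # 2

# With an axis argument, you get the first True per row
arr2d = np.array([[False, True, False],
                  [True, False, True]])
per_row = np.argmax(arr2d, axis=1)  # array([1, 0])
```

Applied row-wise to a 5000x2000 boolean array, this gives all 5000 first-occurrence positions in a single call.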
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing!

In [ ]: # EXERCISE 7: Vectorize the above program
# You get these lines for free...
import numpy as np
throws = np.random.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)

# Find out where all the 111 and 123 sequences occur
find111 =
find123 =

# Then at what index they /first/ occur in each sequence
first111 =
first123 =

# Compute the average first-occurrence location for both situations
avg111 =
avg123 =

# Print the result
print avg111, avg123

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die-throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!

PIL: the Python Imaging Library ¶

As vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which excellent documentation can be found here. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

In [ ]: from PIL import Image

Loading and showing images ¶

The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
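The examples below read a file called python.jpg. If you don't have a color image at hand, a quick sketch like this can generate a stand-in (the size and fill color here are arbitrary choices, not part of the original materials):

```python
from PIL import Image

# Build a solid-color RGB image and save it under the expected filename
im = Image.new('RGB', (400, 300), color=(90, 130, 200))
im.save('python.jpg')

# Re-open it to confirm it loads like any other JPEG
im_check = Image.open('python.jpg')
assert im_check.size == (400, 300)
assert im_check.mode == 'RGB'
```

A photograph will of course make the later filtering and plotting examples more interesting than a flat color patch.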
In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. 
# Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use, are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb . mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . 
GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. 
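Not a solution, but a sketch of the boolean-indexing pattern the exercise calls for, using a tiny made-up 'difference' array rather than real image data (the sign convention, average minus PIL conversion, is just one possible choice):

```python
import numpy as np

# A made-up difference map: positive where plain averaging is MORE luminant,
# negative where it is LESS luminant
diff = np.array([[10., -20.],
                 [-5.,   0.]])

# Start from an all-black RGB image (height x width x 3)
rgb = np.zeros(diff.shape + (3,))

# Red channel (layer 0) where the difference is negative,
# green channel (layer 1) where it is positive;
# luminance proportional to the size of the difference
rgb[diff < 0, 0] = -diff[diff < 0]
rgb[diff > 0, 1] = diff[diff > 0]

# The (0,1) pixel is now pure red with luminance 20,
# the (0,0) pixel pure green with luminance 10,
# and the (1,1) pixel stays black
```

Scaled up to the real per-pixel differences (and rescaled to 0-255 for the contrast extra), this is essentially the whole exercise.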
In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and barcharts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt . figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . 
axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . 
tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () A full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib is automatically deciding for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are some further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = ( 'tight' ), pad_inches = ( 1 , 1 ), facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to 0-255 first (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on. 
For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. 
, 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . 
subplots_adjust(right=0.5)
ax6 = fig.add_axes([0.55, 0.1, 0.3, 0.8])

# Show the figure
fig.show()

Exercise: Function plots ¶

Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=\sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=\sin(x^2)$ instead. Tweak your figure until you think it looks good.

In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other
# Let x range from 0 to 2*pi

Finer figure control ¶

If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions:

In [ ]: # This uses the result of the exercise above
# You have to copy-paste it into the same code-box, before the fig.show()

# Add horizontal lines
ax0.axhline(0, color='g')
ax0.axhline(0.5, color='gray', linestyle=':')
ax0.axhline(-0.5, color='gray', linestyle=':')
ax1.axhline(0, color='g')
ax1.axhline(0.5, color='gray', linestyle=':')
ax1.axhline(-0.5, color='gray', linestyle=':')

# Add text to the plots
ax0.text(0.1, -0.9, '$y = sin(x)$', size=16)  # math mode for proper formula formatting!
ax1.text(0.1, -0.9, '$y = sin(x^2)$', size=16)

# Annotate certain points with a value
for x_an in np.linspace(0, 2*np.pi, 9):
    ax0.annotate(str(round(np.sin(x_an), 2)), (x_an, np.sin(x_an)))

# Add an arrow (x,y,xlength,ylength)
ax0.arrow(np.pi - 0.5, -0.5, 0.5, 0.5, head_width=0.1, length_includes_head=True)

Second, all basic elements like lines, polygons, and the individual axis lines are customizable objects in their own right, attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. 
Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . 
rand(30) - 0.1

# Do a t-test that these have the same mean
t, p = stats.ttest_ind(data, data2)
print p

In [ ]: import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions
# Given a constant n, and an increasing true effect size.
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2 * eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]: import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions, with increasing sd's
# Then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)

# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i] * np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i] * np.array([0, 1, 0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as a Kruskal-Wallis test, a Wilcoxon test, a Chi-square test, a test for normality, and so forth. A full listing of functions is found here.
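As a quick illustration of one of the tests just mentioned, a normality check with stats.shapiro might look like this (the seed and the comparison against a uniform sample are illustrative choices, not from the notebook):

```python
import numpy as np
import scipy.stats as stats

rng = np.random.RandomState(1)
normal_data = rng.randn(200)    # drawn from a normal distribution
uniform_data = rng.rand(200)    # clearly not normal

# shapiro returns the W statistic and a p-value;
# a small p-value means normality is rejected
w1, p1 = stats.shapiro(normal_data)
w2, p2 = stats.shapiro(uniform_data)
print(p1)
print(p2)
```

With 200 samples, the uniform data reliably produce a tiny p-value, while the genuinely normal sample typically does not.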
For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well in scipy.fftpack. Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine waves of different frequencies, amplitudes, and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]: import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers, as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be conjugate-symmetric
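As a compact, self-contained illustration of the same decomposition (this sketch uses numpy.fft, which behaves like scipy.fftpack for our purposes; the step-function data are taken from the cell above):

```python
import numpy as np

# The same step function as above
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Forward transform: one complex coefficient per input sample
res = np.fft.fft(data)

# The corresponding frequencies, in cycles per sample:
# 0 up to the Nyquist frequency (0.5), then the negative counterparts
freqs = np.fft.fftfreq(len(data))

# The zero-frequency (DC) component equals the sum of the data
print(res[0].real)               # 75.0

# The inverse transform recovers the original signal
back = np.fft.ifft(res).real
print(np.allclose(back, data))   # True
```

Because the input is real, res[k] and res[-k] are complex conjugates, which is why spectra of real signals are usually plotted for the first half of the coefficients only.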
Back to the main index

Scientific Python: Transitioning from MATLAB to Python ¶ Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium). This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki. Author: Maarten Demeyer Year: 2014 Copyright: Public Domain as in CC0

Contents ¶
- A Quick Recap: Data types · Lists · Functions · Objects
- Numpy: Why we need Numpy · The ndarray data type · shape and dtype · Indexing and slicing · Filling and manipulating arrays · A few useful functions · A small exercise · A bit harder: The Gabor · Boolean indexing · Vectorizing a simulation
- PIL: the Python Imaging Library: Loading and showing images · Resizing, rotating, cropping and converting · Advanced · Saving · Exercise
- Matplotlib: Quick plots · Saving to a file · Visualizing arrays · Multi-panel figures · Exercise: Function plots · Finer figure control · Exercise: Add regression lines
- Scipy: Statistics · Fast Fourier Transform

A Quick Recap ¶ Data types ¶ Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]: my_int = 5
print my_int, type(my_int)
my_float = 5.0
print my_float, type(my_float)
my_boolean = False
print my_boolean, type(my_boolean)
my_string = 'hello'
print my_string, type(my_string)

Lists ¶ One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

In [ ]: my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
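These two conventions are the classic stumbling blocks when coming from MATLAB's 1-based, inclusive indexing, so a tiny stand-alone illustration (the list contents are arbitrary):

```python
# Zero-based indexing: the first element is at index 0
my_list = ['a', 'b', 'c', 'd']
print(my_list[0])     # 'a', not 'b'

# Slices run up to, but NOT including, the end index
print(my_list[1:3])   # ['b', 'c'] — two elements, not three

# Negative indices count from the end
print(my_list[-1])    # 'd'
```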
In [ ]: print my_list [ 1 ] my_list [ 1 ] = 3.0 my_sublist = my_list [ 1 : 3 ] print my_sublist print type ( my_sublist ) Functions ¶ Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None . In [ ]: # Function with a required and an optional argument def regress ( x , c = 0 , b = 1 ): return ( x * b ) + c print regress ( 5 ) # Only required argument print regress ( 5 , 10 , 3 ) # Use argument order print regress ( 5 , b = 3 ) # Specify the name to skip an optional argument In [ ]: # Function without return argument def divisible ( a , b ): if a % b : print str ( a ) + " is not divisible by " + str ( b ) else : print str ( a ) + " is divisible by " + str ( b ) divisible ( 9 , 3 ) res = divisible ( 9 , 2 ) print res In [ ]: # Function with multiple return arguments def add_diff ( a , b ): return a + b , a - b # Assigned as a tuple res = add_diff ( 5 , 3 ) print res # Directly unpacked to two variables a , d = add_diff ( 5 , 3 ) print a print d Objects ¶ Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this. In [ ]: my_list = [ 1 , False , 'boo' ] my_list . append ( 'extra element' ) my_list . remove ( False ) print my_list The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead. In [ ]: return_arg = my_list . 
append ( 'another one' ) print return_arg print my_list In [ ]: my_string = 'kumbaya, milord' return_arg = my_string . replace ( 'lord' , 'lard' ) print return_arg print my_string Do you remember why list functions are in-place, while string functions are not? Numpy ¶ Why we need Numpy ¶ While lists are great, they are not very suitable for scientific computing. Consider this example: In [ ]: subj_length = [ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ] subj_weight = [ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ] subj_bmi = [] # EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects # BMI = weight/(length/100)**2 Clearly, this is clumsy. MATLAB users would expect something like this to work: In [ ]: subj_bmi = subj_weight / ( subj_length / 100 ) ** 2 mean_bmi = mean ( subj_bmi ) But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do? The ndarray data type ¶ Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values. In [ ]: import numpy as np # Create a numpy array from a list subj_length = np . array ([ 180.0 , 165.0 , 190.0 , 172.0 , 156.0 ]) subj_weight = np . array ([ 75.0 , 60.0 , 83.0 , 85.0 , 62.0 ]) print type ( subj_length ), type ( subj_weight ) # EXERCISE 2: Try to complete the program now! # Hint: np.mean() computes the mean of a numpy array # Note that unlike MATLAB, Python does not need the '.' before elementwise operators Numpy is a very large package, that we can't possibly cover completely. But we will cover enough to get you started. 
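For comparison, here is one possible completion of the BMI exercise above, written out in full (this is just one way to do it; np.mean is used as the hint suggests):

```python
import numpy as np

subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# Elementwise, no loop needed: BMI = weight / (length in metres)^2
subj_bmi = subj_weight / (subj_length / 100) ** 2

# And the average across subjects
mean_bmi = np.mean(subj_bmi)
print(subj_bmi)
print(mean_bmi)
```

Note how the arithmetic reads exactly like the MATLAB-style expression the text wished for, minus the dots before the operators.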
shape and dtype ¶ The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar. In [ ]: # Multi-dimensional lists are just nested lists # This is clumsy to work with my_nested_list = [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] print my_nested_list print len ( my_nested_list ) print my_nested_list [ 0 ] print len ( my_nested_list [ 0 ]) In [ ]: # Numpy arrays handle multidimensionality better arr = np . array ( my_nested_list ) print arr # nicer printing print arr . shape # direct access to all dimension sizes print arr . size # direct access to the total number of elements print arr . ndim # direct access to the number of dimensions The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays: In [ ]: arr3d = np . array ([ [[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]] , [[ 7 , 8 , 9 ],[ 10 , 11 , 12 ]] ]) print arr3d print arr3d . shape print arr3d . size print arr3d . ndim Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row,column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now, is three layers of two-by-two matrices. Not two layers of two-by-three matrices. This implies that dimension sizes are listed from low to high in the shape tuple. The second basic property of an array is its dtype . Contrary to list elements, numpy array elements are (typically) all of the same type. In [ ]: # The type of a numpy array is always... numpy.ndarray arr = np . 
array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) print type ( arr ) # So, let's do a computation print arr / 2 # Apparently we're doing our computations on integer elements! # How do we find out? print arr . dtype In [ ]: # And how do we fix this? arr = arr . astype ( 'float' ) # Note: this is not an in-place function! print arr . dtype print arr / 2 In [ ]: # Alternatively, we could have defined our dtype better from the start arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]], dtype = 'float' ) print arr . dtype arr = np . array ([[ 1. , 2. , 3. ],[ 4. , 5. , 6. ]]) print arr . dtype To summarize, any numpy array is of the data type numpy.ndarray , but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array. Indexing and slicing ¶ The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands. In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # Indexing and slicing print arr [ 0 , 0 ] # or: arr[0][0] print arr [: - 1 , 0 ] In [ ]: # Elementwise computations on slices # Remember, the LAST dimension is the INNER dimension print arr [:, 0 ] * arr [:, 1 ] print arr [ 0 ,:] * arr [ 1 ,:] # Note that you could never slice across rows like this in a nested list! In [ ]: # This doesn't work # print arr[1:,0] * arr[:,1] # And here's why: print arr [ 1 :, 0 ] . shape , arr [:, 1 ] . shape In [ ]: # This however does work. You can always use scalars as the other operand. print arr [:, 0 ] * arr [ 2 , 2 ] # Or, similarly: print arr [:, 0 ] * 9. As an exercise , can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop. 
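Before tackling that exercise, one side note: scalars are in fact the simplest case of numpy's broadcasting rules, under which dimensions of size 1 (or missing leading dimensions) are expanded automatically to match. The notebook does not cover this further, but a small illustration:

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# A (3,)-shaped vector broadcasts against the (3,3) array, row by row
row = np.array([10.0, 20.0, 30.0])
print(arr + row)

# A (3,1)-shaped column vector broadcasts column by column
col = np.array([[100.0], [200.0], [300.0]])
print(arr + col)
```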
In [ ]: # EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix
# Do not use a for-loop, and also do not use the np.mean() function for now.
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

This works, but it is still a bit clumsy. We will learn more efficient methods below.

Filling and manipulating arrays ¶ Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB.

In [ ]: # 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3, 2)) * 5
print arr

# Sequence from 1 to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# This is like np.array(range(1, 16, 3)), but with support for non-integer steps
arr = np.arange(1., 16., 3)
print arr

# Sequence from 1 to AND including 16, in 3 steps
# This always returns an array with dtype float
arr = np.linspace(1, 16, 3)
print arr

In [ ]: # Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5, 2)
print arr

# Array of random integers from 0 to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0, 10, (5, 2))
print arr

Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple; axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D).

In [ ]: arr0 = np.
array ([[ 1 , 2 ],[ 3 , 4 ]]) print arr0 # 'repeat' replicates elements along a given axis # Each element is replicated directly after itself arr = np . repeat ( arr0 , 3 , axis =- 1 ) print arr # We may even specify the number of times each element should be repeated # The length of the tuple should correspond to the dimension length arr = np . repeat ( arr0 , ( 2 , 4 ), axis = 0 ) print arr In [ ]: print arr0 # 'tile' replicates the array as a whole # Use a tuple to specify the number of tilings along each dimensions arr = np . tile ( arr0 , ( 2 , 4 )) print arr In [ ]: # 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors # where each array contains the X or Y coordinates corresponding to a given pixel in an image x = np . arange ( 10 ) y = np . arange ( 5 ) print x , y arrx , arry = np . meshgrid ( x , y ) print arrx print arry Concatenating an array allows you to make several arrays into one. In [ ]: arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # 'concatenate' requires an axis to perform its operation on # The original arrays should be put in a tuple arr = np . concatenate (( arr0 , arr1 ), axis = 0 ) print arr # as new rows arr = np . concatenate (( arr0 , arr1 ), axis = 1 ) print arr # as new columns In [ ]: # Suppose we want to create a 3-D matrix from them, # we have to create them as being three-dimensional # (what happens if you don't?) arr0 = np . array ([[[ 1 ],[ 2 ]],[[ 3 ],[ 4 ]]]) arr1 = np . array ([[[ 5 ],[ 6 ]],[[ 7 ],[ 8 ]]]) print arr0 . shape , arr1 . shape arr = np . concatenate (( arr0 , arr1 ), axis = 2 ) print arr In [ ]: # hstack, vstack, and dstack are short-hand functions # which will automatically create these 'missing' dimensions arr0 = np . array ([[ 1 , 2 ],[ 3 , 4 ]]) arr1 = np . array ([[ 5 , 6 ],[ 7 , 8 ]]) # vstack() concatenates rows arr = np . vstack (( arr0 , arr1 )) print arr # hstack() concatenates columns arr = np . 
hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr

In [ ]: # Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

In [ ]: arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

In [ ]: # EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    # fill in!

xvec = np.arange(10)
yvec = np.
arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:, :, 0]  # = first output of np.meshgrid()
print xy[:, :, 1]  # = second output of np.meshgrid()

A few useful functions ¶ We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

In [ ]: arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort()
print arr  # in-place!!!
print res
res = np.random.shuffle(arr)
print arr  # in-place!!!
print res

In [ ]: # Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr
mn = np.min(arr)
mx = np.max(arr)
print mn, mx
mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma

In [ ]: # Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3, 5)
print arr2d
print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)

In [ ]: # Trigonometric functions
# Note: Numpy works with radian units, not degrees
arr = np.random.rand(5)
print arr
sn = np.sin(arr * 2 * np.pi)
cs = np.cos(arr * 2 * np.pi)
print sn
print cs

In [ ]: # Exponents and logarithms
arr = np.random.rand(5)
print arr
xp = np.exp(arr)
print xp
print np.log(xp)

In [ ]: # Rounding
arr = np.random.rand(5)
print arr
print arr * 5
print np.round(arr * 5)
print np.floor(arr * 5)
print np.ceil(arr * 5)

A complete list of all numpy functions can be found at the Numpy website. Or, a google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.

A small exercise ¶ Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far more concisely.
Use a concatenation function and a statistical function to obtain the same thing! In [ ]: # EXERCISE 5: Make a better version of Exercise 3 with what you've just learned arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ],[ 7 , 8 , 9 ]], dtype = 'float' ) # What we had: print np . array ([( arr [:, 0 ] + arr [:, 1 ] + arr [:, 2 ]) / 3 ,( arr [ 0 ,:] + arr [ 1 ,:] + arr [ 2 ,:]) / 3 ]) # Now the new version: A bit harder: The Gabor ¶ A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by: $grating = \sin(xf)$ where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2 \pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by: $gaussian = e^{-(x^2+y^2)/2}$ where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals: $gabor = grating \times gaussian$ To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result. In [ ]: # EXERCISE 6: Create a Gabor patch of 100 by 100 pixels import numpy as np import matplotlib.pyplot as plt # Step 1: Define the 1D coordinate values # Tip: use 100 equally spaced values between -np.pi and np.pi # Step 2: Create the 2D x and y coordinate arrays # Tip: use np.meshgrid() # Step 3: Create the grating # Tip: Use a frequency of 10 # Step 4: Create the Gaussian # Tip: use np.exp() to compute a power of e # Step 5: Create the Gabor # Visualize your result # (we will discuss how this works later) plt . figure ( figsize = ( 15 , 5 )) plt . subplot ( 131 ) plt . 
imshow ( grating , cmap = 'gray' ) plt . subplot ( 132 ) plt . imshow ( gaussian , cmap = 'gray' ) plt . subplot ( 133 ) plt . imshow ( gabor , cmap = 'gray' ) plt . show () Boolean indexing ¶ The dtype of a Numpy array can also be boolean, that is, True or False . It is then particularly convenient that given an array of the same shape, these boolean arrays can be used to index other arrays . In [ ]: # Check whether each element of a 2x2 array is greater than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr > 0.5 print res print '--' # Analogously, check it against each element of a second 2x2 array arr2 = np . random . rand ( 2 , 2 ) print arr2 res = arr > arr2 print res In [ ]: # We can use these boolean arrays as indices into other arrays! # Add 0.5 to any element smaller than 0.5 arr = np . random . rand ( 2 , 2 ) print arr res = arr < 0.5 print res arr [ res ] = arr [ res ] + 0.5 print arr # Or, shorter: arr [ arr < 0.5 ] = arr [ arr < 0.5 ] + 0.5 # Or, even shorter: arr [ arr < 0.5 ] += 0.5 While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators : and, or, xor, not . In [ ]: arr = np . array ([[ 1 , 2 , 3 ],[ 4 , 5 , 6 ]]) # The short-hand forms for elementwise boolean operators are: & | ~ ^ # Use parentheses around such expressions res = ( arr < 4 ) & ( arr > 1 ) print res print '--' res = ( arr < 2 ) | ( arr == 5 ) print res print '--' res = ( arr > 3 ) & ~ ( arr == 6 ) print res print '--' res = ( arr > 3 ) ^ ( arr < 5 ) print res In [ ]: # To convert boolean indices to normal integer indices, use the 'nonzero' function print res print np . nonzero ( res ) print '--' # Separate row and column indices print np . nonzero ( res )[ 0 ] print np . nonzero ( res )[ 1 ] print '--' # Or stack and transpose them to get index pairs pairs = np . vstack ( np . nonzero ( res )) . 
T
print pairs

Vectorizing a simulation ¶ Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops:

In [ ]: import numpy as np

# We will keep track of the sum of first occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    for throw in range(2000):
        # Throw a die
        die = np.random.randint(1, 7)
        # 111 case
        if d111 == 3:
            pass
        elif die == 1 and d111 == 0:
            d111 = 1
        elif die == 1 and d111 == 1:
            d111 = 2
        elif die == 1 and d111 == 2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0
        # 123 case
        if d123 == 3:
            pass
        elif die == 1:
            d123 = 1
        elif die == 2 and d123 == 1:
            d123 = 2
        elif die == 3 and d123 == 2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0
        # Don't continue if both have been found
        if d111 == 3 and d123 == 3:
            break

# Compute the averages
avg111 = sum111 / n111
avg123 = sum123 / n123
print avg111, avg123

# ...can you spot the crucial difference between both patterns?

However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same. Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence.
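If you want to attempt the exercise first, skip this block: it is one possible vectorized solution, shown here as a worked sketch (the seed is an arbitrary choice, and like the exercise it simply assumes both patterns occur within 2000 throws):

```python
import numpy as np

rng = np.random.RandomState(42)
throws = rng.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)

# A pattern starts at index i when the three shifted masks all hold there
find111 = one[:, :-2] & one[:, 1:-1] & one[:, 2:]
find123 = one[:, :-2] & two[:, 1:-1] & three[:, 2:]

# argmax on a boolean row returns the index of the first True;
# +2 turns the start index into the position of the pattern's
# last throw, matching what the loop version records
first111 = np.argmax(find111, axis=1) + 2
first123 = np.argmax(find123, axis=1) + 2

avg111 = np.mean(first111)
avg123 = np.mean(first123)
print(avg111, avg123)   # '111' takes longer on average than '123'
```

The crucial difference: after a near-miss, '111' must start over from scratch less gracefully than '123', because a failed '111' attempt can consume the very ones that a new attempt would need, giving it a longer expected waiting time.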
You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing!

In [ ]: # EXERCISE 7: Vectorize the above program
# You get these lines for free...
import numpy as np
throws = np.random.randint(1, 7, (5000, 2000))
one = (throws == 1)
two = (throws == 2)
three = (throws == 3)

# Find out where all the 111 and 123 sequences occur
find111 =
find123 =

# Then at what index they /first/ occur in each sequence
first111 =
first123 =

# Compute the average first occurrence location for both situations
avg111 =
avg123 =

# Print the result
print avg111, avg123

In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die-throwing sequence when the first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing!

PIL: the Python Imaging Library ¶ For vision scientists, images are a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow, for which excellent documentation can be found here. The module to import is however still called 'PIL'. In practice, we will mostly use its Image module.

In [ ]: from PIL import Image

Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file. But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename.
In [ ]:
# Opening an image is simple enough:
# Construct an Image object with the filename as an argument
im = Image.open('python.jpg')

# It is now represented as an object of the 'JpegImageFile' type
print im

# There are some useful member variables we can inspect here
print im.format  # format in which the file was saved
print im.size    # pixel dimensions
print im.mode    # luminance/color model used

# We can even display it
# NOTE this is not perfect; meant for debugging
im.show()

If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note that you must always close the window before you can continue using the notebook. (Tkinter is a package for writing graphical user interfaces in Python; we will not discuss it here.)

In [ ]:
# Alternative quick-show method
from Tkinter import Tk, Button
from PIL import ImageTk

def alt_show(im):
    win = Tk()
    tkimg = ImageTk.PhotoImage(im)
    Button(image=tkimg).pack()
    win.mainloop()

alt_show(im)

Once we have opened the image in PIL, we can convert it to a Numpy object.

In [ ]:
# We can convert PIL images to an ndarray!
arr = np.array(im)
print arr.dtype  # uint8 = unsigned 8-bit integer (values 0-255 only)
print arr.shape  # Why do we have three layers?

# Let's make it a float type for doing computations
arr = arr.astype('float')
print arr.dtype

# This opens up unlimited possibilities for image processing!
# For instance, let's make this a grayscale image, and add white noise
max_noise = 50
arr = np.mean(arr, -1)
noise = (np.random.rand(arr.shape[0], arr.shape[1]) - 0.5) * 2
arr = arr + noise * max_noise

# Make sure we don't exceed the 0-255 limits of a uint8
arr[arr < 0] = 0
arr[arr > 255] = 255

The conversion back to PIL is easy as well.

In [ ]:
# When going back to PIL, it's a good idea to explicitly
# specify the right dtype and the mode.
# Because automatic conversions might mess things up
arr = arr.astype('uint8')
imn = Image.fromarray(arr, mode='L')
print imn.format
print imn.size
print imn.mode  # L = greyscale
imn.show()  # or use alt_show() from above, if show() doesn't work well for you

# Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255
# can be converted to an image object in this way

Resizing, rotating, cropping and converting

The main operations of the PIL Image module you will probably use are its resizing and conversion capabilities.

In [ ]:
im = Image.open('python.jpg')

# Make the image smaller
ims = im.resize((800, 600))
ims.show()

# Or you could even make it larger
# The resample argument allows you to specify the method used
iml = im.resize((1280, 1024), resample=Image.BILINEAR)
iml.show()

In [ ]:
# Rotation is similar (unit: degrees)
imr = im.rotate(10, resample=Image.BILINEAR, expand=False)
imr.show()

# If we want to lose the black corners, we can crop (unit: pixels)
imr = imr.crop((100, 100, 924, 668))
imr.show()

In [ ]:
# 'convert' allows conversion between different color models
# The most important here is between 'L' (luminance) and 'RGB' (color)
imbw = im.convert('L')
imbw.show()
print imbw.mode

imrgb = imbw.convert('RGB')
imrgb.show()
print imrgb.mode

# Note that the grayscale conversion of PIL is more sophisticated
# than simply averaging the three layers in Numpy (it is a weighted average)
# Also note that the color information is effectively lost after converting to L

Advanced

The ImageFilter module implements several types of filters that can be executed on any image. You can also define your own.

In [ ]:
from PIL import Image, ImageFilter

im = Image.open('python.jpg')
imbw = im.convert('L')

# Contour detection filter
imf = imbw.filter(ImageFilter.CONTOUR)
imf.show()

# Blurring filter
imf = imbw.filter(ImageFilter.GaussianBlur(radius=3))
imf.show()

Similarly, you can import the ImageDraw module to draw shapes and text onto an image.

In [ ]:
from PIL import Image, ImageDraw

im = Image.open('python.jpg')

# You need to attach a drawing object to the image first
imd = ImageDraw.Draw(im)

# Then you work on this object
imd.rectangle([10, 10, 100, 100], fill=(255, 0, 0))
imd.line([(200, 200), (200, 600)], width=10, fill=(0, 0, 255))
imd.text([500, 500], 'Python', fill=(0, 255, 0))

# The results are automatically applied to the Image object
im.show()

Saving

Finally, you can of course save these image objects back to a file on disk.

In [ ]:
# PIL will figure out the file type from the extension
im.save('python.bmp')

# There are also further options, like compression quality (0-100)
im.save('python_bad.jpg', quality=5)

Exercise

We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files at three different sizes (large, medium, small): the full image resolution, half of the image size, and a quarter of the image size.
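As a pointer for the exercise: Pillow documents its 'L' conversion as the ITU-R 601-2 weighted average, L = 0.299 R + 0.587 G + 0.114 B. Below is a small numpy-only sketch of how that differs from a plain average, using a made-up three-pixel image rather than python.jpg; np.clip is also shown as a one-line alternative to the boolean-indexing clamp used earlier:

```python
import numpy as np

# A made-up 1x3 'image': one pure red, one green, one blue pixel
arr = np.array([[[255., 0., 0.],
                 [0., 255., 0.],
                 [0., 0., 255.]]])

# Plain average of the three layers: 85 for every pixel
plain = arr.mean(axis=-1)

# Weighted average, as in PIL's 'L' conversion (ITU-R 601-2 weights)
weights = np.array([0.299, 0.587, 0.114])
weighted = arr @ weights  # roughly 76, 150 and 29

# Positive where the weighted conversion is more luminant
diff = weighted - plain

# np.clip clamps to 0-255 in a single call
clipped = np.clip(diff, 0, 255)
print(clipped)
```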
In [ ]:
# EXERCISE 8: Visualize the difference between the PIL conversion to grayscale,
# and a simple averaging of the RGB layers
# Display pixels where the average is LESS luminant in red,
# and where it is MORE luminant in shades of green
# The luminance of these colors should correspond to the size of the difference
#
# Extra 1: Maximize the overall contrast in your image
#
# Extra 2: Save as three PNG files, of different sizes (large, medium, small)

Matplotlib

While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module.

Quick plots

Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply.

In [ ]:
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt

# As data for our plots, we will use the pixel values of the image
# Open the image, convert it to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

In [ ]:
# QUICKPLOT 1: Correlation of luminances in the image
# This works if you want to be very quick:
# (xb means blue crosses, .g means green dots)
plt.plot(R, B, 'xb')
plt.plot(R, G, '.g')

In [ ]:
# However, we will take a slightly more disciplined approach here
# Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255

# Create a square figure
plt.figure(figsize=(5, 5))

# Plot both scatter clouds
# marker: self-explanatory
# linestyle: 'None' because we want no line
# color: RGB triplet with values 0-1
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))

# Make the axis scales equal, and name them
plt.axis([0, 255, 0, 255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Show the result
plt.show()

In [ ]:
# QUICKPLOT 2: Histogram of 'red' values in the image
plt.hist(R)

In [ ]:
# ...and now a nicer version

# Make a non-square figure
plt.figure(figsize=(7, 5))

# Make a histogram with 25 red bins
# Here we simply use the abbreviation 'r' for red
plt.hist(R, bins=25, color='r')

# Set the X-axis limits and label
plt.xlim([0, 255])
plt.xlabel('Red value', size=16)

# Remove the Y ticks and labels by setting them to an empty list
plt.yticks([])

# Remove the top ticks by specifying the 'top' argument
plt.tick_params(top=False)

# Add two vertical lines for the mean and the median
plt.axvline(np.mean(R), color='g', linewidth=3, label='mean')
plt.axvline(np.median(R), color='b', linewidth=1, linestyle=':', label='median')

# Generate a legend based on the label= arguments
plt.legend(loc=2)

# Show the plot
plt.show()

In [ ]:
# QUICKPLOT 3: Bar chart of mean+std of the RGB values
plt.bar([0, 1, 2],
        [np.mean(R), np.mean(G), np.mean(B)],
        yerr=[np.std(R), np.std(G), np.std(B)])

In [ ]:
# ...and now a nicer version

# Make a non-square figure
plt.figure(figsize=(7, 5))

# Plot the bars with various options
# x: locations where the bars start, y: heights of the bars
# yerr: data for the error bars
# width: width of the bars
# color: surface color of the bars
# ecolor: color of the error bars ('k' means black)
plt.bar([0, 1, 2],
        [np.mean(R), np.mean(G), np.mean(B)],
        yerr=[np.std(R), np.std(G), np.std(B)],
        width=0.75, color=['r', 'g', 'b'], ecolor='k')

# Set the X-axis limits and tick labels
plt.xlim((-0.25, 3.))
plt.xticks(np.array([0, 1, 2]) + 0.75 / 2, ['Red', 'Green', 'Blue'], size=16)

# Remove all X-axis ticks by setting their length to 0
plt.tick_params(length=0)

# Set a figure title
plt.title('RGB Color Channels', size=16)

# Show the figure
plt.show()

A full documentation of all these pyplot commands and options can be found here. If you use Matplotlib, you will be consulting this page a lot!

Saving to a file

Saving to a file is easy enough, using the savefig() function. There are some caveats however, depending on the exact environment you are using. You have to call it BEFORE plt.show() and, in the case of this notebook, within the same code box. The reason is that Matplotlib automatically decides for you which plot commands belong to the same figure, based on these criteria.

In [ ]:
# So, copy-paste this line into the box above, before the plt.show() command
plt.savefig('bar.png')

# There are some further formatting options possible, e.g.
plt.savefig('bar.svg', dpi=300, bbox_inches='tight',
            pad_inches=1, facecolor=(0.8, 0.8, 0.8))

Visualizing arrays

Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow().

In [ ]:
# A simple grayscale luminance map
# cmap: colormap used to display the values
plt.figure(figsize=(5, 5))
plt.imshow(np.mean(arr, 2), cmap='gray')
plt.show()

# Importantly, and contrary to PIL, imshow luminances are by default relative
# That is, the values are always rescaled to 0-255 first (maximum contrast)
# Moreover, colormaps other than grayscale can be used
plt.figure(figsize=(5, 5))
plt.imshow(np.mean(arr, 2) + 100, cmap='jet')  # or hot, hsv, cool,...
plt.show()
# As you can see, adding 100 didn't make a difference here

Multi-panel figures

As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on.
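For reference, the bin counts that plt.hist draws can also be computed directly with np.histogram, which is handy for checking a plot numerically. A small sketch on made-up data standing in for the image's red channel:

```python
import numpy as np

# Made-up stand-in for the flattened red channel (values in 0-255)
vals = np.random.rand(1000) * 255

# Same binning as the plt.hist(R, bins=25) example above
counts, edges = np.histogram(vals, bins=25, range=(0, 255))

print(counts.sum())  # every sample lands in exactly one bin: 1000
print(len(edges))    # 25 bins are delimited by 26 edges
```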
For this, we will have to make a distinction between Figure and Axes objects.

In [ ]:
# 'Figure' objects are returned by the plt.figure() command
fig = plt.figure(figsize=(7, 5))
print type(fig)

# Axes objects are the /actual/ plots within the figure
# Create them using the add_axes() method of the figure object
# The input coordinates are relative (left, bottom, width, height)
ax0 = fig.add_axes([0.1, 0.1, 0.4, 0.7], xlabel='The X Axis')
ax1 = fig.add_axes([0.2, 0.2, 0.5, 0.2], axisbg='gray')
ax2 = fig.add_axes([0.4, 0.5, 0.4, 0.4], projection='polar')
print type(ax0), type(ax1), type(ax2)

# This allows you to execute functions like savefig() directly on the figure object
# That resolves Matplotlib's confusion about what the current figure is when using plt.savefig()
fig.savefig('fig.png')

# It also allows you to add text to the figure as a whole, across the different axes objects
fig.text(0.5, 0.5, 'splatter', color='r')

# The overall figure title can be set separately from the individual plot titles
fig.suptitle('What a mess', size=18)

# show() is actually a figure method as well
# It just gets 'forwarded' to what is thought to be the current figure if you use plt.show()
fig.show()

For a full list of the Figure methods and options, go here.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15, 10))

# As we saw, many of the axes properties can already be set at their creation
ax0 = fig.add_axes([0., 0., 0.25, 0.25],
                   xticks=(0.1, 0.5, 0.9),
                   xticklabels=('one', 'thro', 'twee'))
ax1 = fig.add_axes([0.3, 0., 0.25, 0.25], xscale='log', ylim=(0, 0.5))
ax2 = fig.add_axes([0.6, 0., 0.25, 0.25])

# Once you have the Axes object though, there are further methods available
# This includes many of the top-level pyplot functions
# If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this
# to an Axes.plot() call on the current Axes object
R.sort()
G.sort()
B.sort()
ax2.plot(R, color='r', linestyle='-', marker='None')  # plot directly to an Axes object of choice
plt.plot(G, color='g', linestyle='-', marker='None')  # plt.plot() just plots to the last created Axes object
ax2.plot(B, color='b', linestyle='-', marker='None')

# Other top-level pyplot functions are simply renamed to 'set_' functions here
ax1.set_xticks([])
plt.yticks([])

# Show the figure
fig.show()

The full methods and options of Axes can be found here. Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically.

In [ ]:
# Create a new figure
fig = plt.figure(figsize=(15, 5))

# Specify the LAYOUT of the subplots (rows, columns)
# as well as the CURRENT Axes you want to work on
ax0 = fig.add_subplot(231)

# Equivalent top-level call on the current figure
# It is also possible to create several subplots at once using plt.subplots()
ax1 = plt.subplot(232)

# Optional arguments are similar to those of add_axes()
ax2 = fig.add_subplot(233, title='three')

# We can use these Axes objects as before
ax3 = fig.add_subplot(234)
ax3.plot(R, 'r-')
ax3.set_xticks([])
ax3.set_yticks([])

# We skip the fifth subplot, and create only the sixth
ax5 = fig.add_subplot(236, projection='polar')

# We can adjust the spacings afterwards
fig.subplots_adjust(hspace=0.4)

# And even make room in the figure for a plot that doesn't fit the grid
fig.subplots_adjust(right=0.5)
ax6 = fig.add_axes([0.55, 0.1, 0.3, 0.8])

# Show the figure
fig.show()

Exercise: Function plots

Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good.

In [ ]:
# EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other
# Let x range from 0 to 2*pi

Finer figure control

If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be added by hand through top-level or Axes functions:

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# Add horizontal lines
ax0.axhline(0, color='g')
ax0.axhline(0.5, color='gray', linestyle=':')
ax0.axhline(-0.5, color='gray', linestyle=':')
ax1.axhline(0, color='g')
ax1.axhline(0.5, color='gray', linestyle=':')
ax1.axhline(-0.5, color='gray', linestyle=':')

# Add text to the plots
ax0.text(0.1, -0.9, '$y = sin(x)$', size=16)  # math mode for proper formula formatting!
ax1.text(0.1, -0.9, '$y = sin(x^2)$', size=16)

# Annotate certain points with a value
for x_an in np.linspace(0, 2 * np.pi, 9):
    ax0.annotate(str(round(np.sin(x_an), 2)), (x_an, np.sin(x_an)))

# Add an arrow (x, y, xlength, ylength)
ax0.arrow(np.pi - 0.5, -0.5, 0.5, 0.5, head_width=0.1, length_includes_head=True)

Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right, attached to a specific Axes object.
They can be retrieved, manipulated, created from scratch, and added to existing Axes objects.

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# For instance, fetch the X axis
# XAxis objects have their own methods
xax = ax1.get_xaxis()
print type(xax)

# These methods allow you to fetch the even smaller building blocks
# For instance, tick lines are Line2D objects attached to the XAxis
xaxt = xax.get_majorticklines()
print len(xaxt)

# Of which you can fetch AND change the properties
# Here we change just one tick line into a cross
print xaxt[6].get_color()
xaxt[6].set_color('g')
xaxt[6].set_marker('x')
xaxt[6].set_markersize(10)

In [ ]:
# This uses the result of the exercise above
# You have to copy-paste it into the same code box, before the fig.show()

# Another example: fetch the lines in the plot
# Change the color, change the marker, and mark only every 100th point for one specific line
ln = ax0.get_lines()
print ln
ln[0].set_color('g')
ln[0].set_marker('o')
ln[0].set_markerfacecolor('b')
ln[0].set_markevery(100)

# Finally, let's create a graphic element from scratch, one that is not available as a top-level pyplot function
# And then attach it to existing Axes
# NOTE: we need to import something before we can create the ellipse like this. What should we import?
ell = matplotlib.patches.Ellipse((np.pi, 0), 1., 1., color='r')
ax0.add_artist(ell)
ell.set_hatch('//')
ell.set_edgecolor('black')
ell.set_facecolor((0.9, 0.9, 0.9))

Exercise: Add regression lines

Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line; manually create a Line2D object instead, and attach it to the Axes.
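Before you start, here is a minimal sketch of what np.polyfit returns for a linear fit, on made-up, perfectly linear data (the image channels would work the same way):

```python
import numpy as np

# Made-up, perfectly linear data: y = 2x + 5
x = np.arange(10, dtype=float)
y = 2 * x + 5

# A degree-1 polynomial fit is a linear regression;
# np.polyfit returns the coefficients highest power first
slope, intercept = np.polyfit(x, y, 1)
print(round(slope, 3), round(intercept, 3))  # -> 2.0 5.0

# The fitted endpoints could then be handed to matplotlib.lines.Line2D
x_line = np.array([x.min(), x.max()])
y_line = slope * x_line + intercept
```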
Useful functions:
np.polyfit(x, y, 1) performs a linear regression, returning slope and constant
plt.gca() retrieves the current Axes object
matplotlib.lines.Line2D(x, y) can create a new Line2D object from x and y coordinate vectors

In [ ]:
# EXERCISE 10: Add regression lines
import numpy as np
from PIL import Image
import matplotlib.pyplot as plt
import matplotlib.lines as lines

# Open the image, convert it to an array
im = Image.open('python.jpg')
im = im.resize((400, 300))
arr = np.array(im, dtype='float')

# Split the RGB layers and flatten them
R, G, B = np.dsplit(arr, 3)
R = R.flatten()
G = G.flatten()
B = B.flatten()

# Do the plotting
plt.figure(figsize=(5, 5))
plt.plot(R, B, marker='x', linestyle='None', color=(0, 0, 0.6))
plt.plot(R, G, marker='.', linestyle='None', color=(0, 0.35, 0))

# Tweak the plot
plt.axis([0, 255, 0, 255])
plt.xlabel('Red value')
plt.ylabel('Green/Blue value')

# Fill in your code...

# Show the result
plt.show()

Scipy

Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here. We will pick two useful modules from SciPy: stats and fftpack. I will not give a lot of explanation here; I'll leave it up to you to navigate the documentation and find out how these functions work.

Statistics

In [ ]:
import numpy as np
import scipy.stats as stats

# Generate random numbers between 0 and 1
data = np.random.rand(30)

# Do a t-test with an H0 mean of 0.4
t, p = stats.ttest_1samp(data, 0.4)
print p

# Generate another sample of random numbers, with mean 0.4
data2 = np.random.rand(30) - 0.1

# Do a t-test of whether these two samples have the same mean
t, p = stats.ttest_ind(data, data2)
print p

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Simulate the size of the F statistic when comparing three conditions,
# given a constant n and an increasing true effect size
true_effect = np.linspace(0, 0.5, 500)
n = 100
Fres = []

# Draw random normally distributed samples for each condition, and do a one-way ANOVA
for eff in true_effect:
    c1 = stats.norm.rvs(0, 1, size=n)
    c2 = stats.norm.rvs(eff, 1, size=n)
    c3 = stats.norm.rvs(2 * eff, 1, size=n)
    F, p = stats.f_oneway(c1, c2, c3)
    Fres.append(F)

# Create the plot
plt.figure()
plt.plot(true_effect, Fres, 'r*-')
plt.xlabel('True Effect')
plt.ylabel('F')
plt.show()

In [ ]:
import numpy as np
import matplotlib.pyplot as plt
import scipy.stats as stats

# Compute the pdf and cdf of normal distributions with increasing sd's
# Then plot them in different colors
# (of course, many other distributions are also available)
x = np.linspace(-5, 5, 1000)
sds = np.linspace(0.25, 2.5, 10)
cols = np.linspace(0.15, 0.85, 10)

# Create the figure
fig = plt.figure(figsize=(10, 5))
ax0 = fig.add_subplot(121)
ax1 = fig.add_subplot(122)

# Compute the densities, and plot them
for i, sd in enumerate(sds):
    y1 = stats.norm.pdf(x, 0, sd)
    y2 = stats.norm.cdf(x, 0, sd)
    ax0.plot(x, y1, color=cols[i] * np.array([1, 0, 0]))
    ax1.plot(x, y2, color=cols[i] * np.array([0, 1, 0]))

# Show the figure
plt.show()

The stats module of SciPy contains more statistical distributions and further tests, such as the Kruskal-Wallis test, the Wilcoxon test, the Chi-square test, a test for normality, and so forth. A full listing of functions is found here.
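To connect ttest_1samp to the underlying formula: the one-sample t statistic is t = (sample mean - mu0) / (s / sqrt(n)), where s is the sample standard deviation (ddof=1). A quick numpy-only check on made-up numbers, without scipy:

```python
import numpy as np

# Made-up sample; H0: the true mean is 0.4
data = np.array([0.2, 0.5, 0.4, 0.6, 0.8])
mu0 = 0.4

# t = (sample mean - mu0) / standard error of the mean
n = len(data)
t = (data.mean() - mu0) / (data.std(ddof=1) / np.sqrt(n))
print(round(t, 3))  # -> 1.0
```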
For serious statistical models however, you should be looking at the statsmodels package, or the rpy interfacing package, which allows R to be called from within Python.

Fast Fourier Transform

The FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft, but SciPy has its own set of functions as well, in scipy.fftpack. Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory: any periodic function can be described as a sum of sine waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components, with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function.

In [ ]:
import numpy as np
import scipy.fftpack as fft

# The original data: a step function
data = np.zeros(200, dtype='float')
data[25:100] = 1

# Decompose into sinusoidal components
# The result is a series of complex numbers, as long as the data itself
res = fft.fft(data)

# FREQUENCY is implied by the ordering, but can be retrieved as well
# It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart
# Note: in case of real input data, the FFT results will be
2026-01-13T09:30:39
https://www.timeforkids.com/quote#main-content
TIME for Kids | Get a Quote Skip to main content Get a Quote Subscribe Articles by Grade level Grades K-1 Articles Grade 2 Articles Grades 3-4 Articles Grades 5-6 Articles Topics Animals Arts Ask Angela Books Business Careers Community Culture Debate Earth Science Education Election 2024 Engineering Environment Food and Nutrition Games Government History Holidays Inventions Movies and Television Music and Theater Nature News People Places Podcasts Science Service Stars Space Sports The Human Body The View Transportation Weather World Young Game Changers Your $ Financial Literacy Content Grade 4 Edition Grade 5-6 Edition For Grown-ups Resource Spotlight Also from TIME for Kids: Log In role: none user_age: none editions: The page you are about to enter is for grown-ups. Enter your birth date to continue. Month (MM) 01 02 03 04 05 06 07 08 09 10 11 12 Year (YYYY) 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 Submit SUBSCRIBE NOW AND SAVE TODAY! Request a Quote Print & Digital Digital Only Quote Enter your Details Quotes are US only - for international quotes please contact us at tfksales@time.com . First name* Last name* School email* School ZIP code* Choose your Subscription Type Print & Digital Digital Only Mid-Year Pricing 2025-2026 School Year Price per student $3.75 TFK Edition Number of Students* Grades K-1 Grade 2 Grade 3-4 Grade 5-6 $0.00 Have Questions? Email us at tfksales@time.com . Grown-Ups Continue Navigate the news, together. Discover TIME for Kids for educators and families. 
Kids Explore Unlock a world of exciting stories. Start reading the news! News for You! Select your grade to begin reading. K-1 2 3-4 5-6 Contact us Privacy policy California privacy Terms of Service Subscribe CLASSROOM INTERNATIONAL © 2026 TIME USA, LLC. All Rights Reserved. Powered by WordPress.com VIP
2026-01-13T09:30:39
https://aws.amazon.com/cloudfront/pricing/pay-as-you-go/
Amazon CloudFront CDN - Plans & Pricing - Try For Free Skip to main content Filter: All English Contact us AWS Marketplace Support My account Search Filter: All Sign in to console Create account Amazon CloudFront Overview Features Pricing Getting Started Resources More Networking and Content Delivery › Amazon CloudFront › Pay-as-you-go pricing Amazon CloudFront pay-as-you-go pricing Run fast and secure applications on AWS. Costs scale directly with usage and enabled features. Flat-rate pricing Connect with a specialist Why Amazon CloudFront? Any cacheable data transferred to CloudFront edge locations from AWS resources incurs no additional charge. CloudFront charges for data transfers out from its edge locations, along with HTTP or HTTPS requests. Pricing varies by usage type, geographical region, and feature selection; options are priced below. No-nonsense Free Tier As part of the  AWS free Usage Tier  you can get started with Amazon CloudFront for free. Included in Always Free Tier 1 TB of data transfer out to the internet per month 10,000,000 HTTP or HTTPS Requests per month 2,000,000 CloudFront Function invocations per month 2,000,000 CloudFront KeyValueStore reads per month 10 Distribution Tenants Free SSL certificates No limitations, all features available AWS Pricing Calculator Calculate your Amazon CloudFront and architecture cost in a single estimate. Create your custom estimate now Pricing components Data Transfer Out Free for origin fetches from any AWS origin such as Amazon Simple Storage Service (S3), Amazon Elastic Compute Cloud (EC2), or Elastic Load Balancers, including origins in private subnets through VPC origins. 
Regional Data Transfer Out to Internet (per GB) Per Month United States, Mexico, and Canada Europe, Israel, and Türkiye South Africa, Kenya, Nigeria, Egypt, and Middle East South America Japan Australia and New Zealand Hong Kong, Indonesia,  Philippines, Singapore, South Korea, Taiwan, Thailand, Malaysia, and Vietnam India First 1TB First 1TB Free First 1TB Free First 1TB Free First 1TB Free First 1TB Free First 1TB Free First 1TB Free First 1TB Free Next 9TB Next 9TB $0.085 Next 9TB $0.085 Next 9TB $0.110 Next 9TB $0.110 Next 9TB $0.114 Next 9TB $0.114 Next 9TB $0.120 Next 9TB $0.109 Next 40TB Next 40TB $0.080 Next 40TB $0.080 Next 40TB $0.105 Next 40TB $0.105 Next 40TB $0.089 Next 40TB $0.098 Next 40TB $0.100 Next 40TB $0.085 Next 100TB Next 100TB $0.060 Next 100TB $0.060 Next 100TB $0.090 Next 100TB $0.090 Next 100TB $0.086 Next 100TB $0.094 Next 100TB $0.095 Next 100TB $0.082 Per Month United States, Mexico, and Canada Europe, Israel, and Türkiye South Africa, Kenya, Nigeria, Egypt, and Middle East South America Japan Australia and New Zealand Hong Kong, Indonesia,  Philippines, Singapore, South Korea, Taiwan, Thailand, Malaysia, and Vietnam India Next 350TB Next 350TB $0.040 Next 350TB $0.040 Next 350TB $0.080 Next 350TB $0.080 Next 350TB $0.084 Next 350TB $0.092 Next 350TB $0.090 Next 350TB $0.080 Next 524TB Next 524TB $0.030 Next 524TB $0.030 Next 524TB $0.060 Next 524TB $0.060 Next 524TB $0.080 Next 524TB $0.090 Next 524TB $0.080 Next 524TB $0.078 Next 4PB Next 4PB $0.025 Next 4PB $0.025 Next 4PB $0.050 Next 4PB $0.050 Next 4PB $0.070 Next 4PB $0.085 Next 4PB $0.070 Next 4PB $0.075 Over 5PB Over 5PB $0.020 Over 5PB $0.020 Over 5PB $0.040 Over 5PB $0.040 Over 5PB $0.060 Over 5PB $0.080 Over 5PB $0.060 Over 5PB $0.072 Customers willing to make minimum traffic commits of typically 10 TB/month or higher are eligible for discounted pricing. 
Contact us Regional Data Transfer Out to Origin (per GB) Data United States, Mexico, and Canada Europe, Israel, and Türkiye South Africa, Kenya, Nigeria, Egypt, and Middle East South America Japan Australia and New Zealand Hong Kong, Indonesia,  Philippines, Singapore, South Korea, Taiwan, Thailand, Malaysia, and Vietnam India All Data Transfer $0.020 $0.020 $0.060 $0.125 $0.060 $0.080 $0.060 $0.160 Request Pricing for All HTTP Methods (per 10,000) Type of Request United States, Mexico, and Canada Europe, Israel, and Türkiye South Africa, Kenya, Nigeria, Egypt, and Middle East South America Japan Australia and New Zealand Hong Kong, Indonesia,  Philippines, Singapore, South Korea, Taiwan, Thailand, Malaysia, and Vietnam India First 10MM HTTP(S) requests Free Free Free Free Free Free Free Free HTTP requests $0.0075 $0.0090 $0.0090 $0.0160 $0.0090 $0.0090 $0.0090 $0.0090 HTTPS requests $0.0100 $0.0120 $0.0120 $0.0220 $0.0120 $0.0125 $0.0120 $0.0120 For pricing in China, please check the  China pricing page . Price Class Price classes provide you an option to lower the prices you pay to deliver content out of Amazon CloudFront. By default, Amazon CloudFront minimizes end-user latency by delivering content from its entire global network of edge locations. However, because we charge more where our costs are higher, this means that you pay more to deliver your content with low latency to end users in some locations. Price classes let you reduce your delivery prices by excluding Amazon CloudFront’s more expensive edge locations from your Amazon CloudFront distribution.  Amazon CloudFront will deliver your content from edge locations associated with the price class you selected. You will only be charged fees specific to the edge locations from which the content was actually delivered within the selected price class. From time to time, your content may be served from an edge location that is not included in your price class. 
In these cases, Amazon CloudFront will only charge you the rate for the least expensive location in your selected price class. If performance is most important to you, you don't need to do anything; your content will be delivered by our whole network of locations. However, if you wish to use another price class, you can configure your distribution through the AWS Management Console or via the Amazon CloudFront API. If you select a price class that does not include all locations, some of your viewers, especially those in geographic locations that are not in your price class, may experience higher latency than if your content were being served from all Amazon CloudFront locations.

| Price Class | United States, Mexico, and Canada | Europe, Israel, and Türkiye | South Africa, Kenya, Nigeria, Egypt, and Middle East | South America | Japan | Australia and New Zealand | Hong Kong, Indonesia, Philippines, Singapore, South Korea, Taiwan, Thailand, Malaysia, and Vietnam | India |
| Price Class All | Yes | Yes | Yes | Yes | Yes | Yes | Yes | Yes |
| Price Class 200 | Yes | Yes | Yes | x | Yes | x | Yes | Yes |
| Price Class 100 | Yes | Yes | x | x | x | x | x | x |

Pricing components

Edge Compute

CloudFront Functions

CloudFront Functions is a serverless scripting platform that allows you to run lightweight JavaScript code at CloudFront edge locations. Invocation pricing is $0.10 per 1 million invocations ($0.0000001 per invocation). You are charged for the total number of invocations across all your functions. CloudFront Functions counts an invocation each time it starts executing in response to a CloudFront event globally.

CloudFront KeyValueStore

CloudFront KeyValueStore is a global, low-latency key-value data store that allows you to run lightweight compute with access to stateful data at CloudFront edge locations for improved latency, performance, and developer experience. The cost for reads within CloudFront Functions is $0.03 per 1 million reads (equivalent to $0.00000003 per read).
Charges apply based on the overall number of reads across all your functions. CloudFront KeyValueStore counts the number of reads from within your function code each time there is a CloudFront Functions invocation. For any API actions not involving reads within CloudFront Functions, the cost is $1 per 1,000 API requests.

Lambda@Edge

Lambda@Edge is a fully programmable, serverless edge computing environment for implementing a wide variety of complex customizations. Lambda@Edge functions are executed in a regional edge cache (usually in the AWS Region closest to the CloudFront edge location reached by the client). You are charged for the total number of requests across all your functions. Lambda@Edge counts a request each time it starts executing in response to an Amazon CloudFront event globally. Request pricing is $0.60 per 1 million requests ($0.0000006 per request). Duration is calculated from the time your code begins executing until it returns or otherwise terminates. You are charged $0.00005001 for every GB-second used. For instance, if you allocate 128 MB of memory per invocation for your Lambda@Edge function, your duration charge will be $0.00000625125 for every 128 MB-second used. Note that Lambda@Edge functions are metered at a granularity of 1 ms. There is no free tier for Lambda@Edge at this time.

Edge Compute Pricing

| Info | Price |
| Requests | $0.60 per 1M requests |
| Duration | $0.00005001 for every GB-second |

Pricing components

Additional Features

Origin Shield requests

If you set up Origin Shield as a centralized caching layer, request fees are charged based on the AWS Region you have configured to be your Origin Shield Region, and not based on the Amazon CloudFront edge location serving content.
Origin Shield is charged as a request fee for each request that comes from another regional cache to your Origin Shield; see Estimating Origin Shield costs in the Amazon CloudFront Developer Guide. If you are interested in using Origin Shield in a multi-CDN architecture and have discounted pricing, additional charges may apply. Contact us or your AWS sales representative for more information.

Origin Shield Request Pricing for All HTTP Methods (per 10,000)

| Requests | United States | Europe | South America | Japan | Australia | Singapore | South Korea | India |
| Origin Shield Requests | $0.0075 | $0.0090 | $0.0160 | $0.0090 | $0.0090 | $0.0090 | $0.0090 | $0.0090 |

CloudFront SaaS Manager

SaaS Manager charges you based on the number of Distribution Tenant resources you create. Distribution Tenants are new resources that inherit the configuration settings from a CloudFront distribution, while also allowing for tenant-specific customizations.

| Per Month | Cost |
| First 10 Distribution Tenants | Free |
| 11-200 Distribution Tenants | $20 subscription fee |
| Over 200 Distribution Tenants | $0.10 per Distribution Tenant |

Standard access logging: There is no additional charge for enabling standard access logs for CloudFront. You may incur charges for log delivery depending on the log destination you choose.

Amazon S3: There are no additional charges for log delivery to S3, though you incur Amazon S3 charges for storing and accessing the log files. You incur additional charges if you enable Parquet conversion of logs, as per "Vended Logs - Format Converted to Apache Parquet" pricing.

Amazon CloudWatch Logs (Standard and Infrequent Access): For each CloudFront request, you get 750 bytes of log delivery to CloudWatch Logs at no additional charge. Any overages incur Amazon CloudWatch Logs charges as per the table below.

Amazon Data Firehose: For log delivery to Amazon Data Firehose, you incur Amazon CloudWatch Logs charges as per the table below.
You may incur additional Data Firehose charges for ingesting log data into Data Firehose. Please visit the Amazon Data Firehose pricing page.

| Data Ingested | Delivery to CloudWatch Logs Standard | Delivery to Kinesis Data Firehose | Delivery to CloudWatch Logs Infrequent Access |
| First 10TB per month | $0.50 per GB | $0.25 per GB | $0.25 per GB |
| Next 20TB per month | $0.10 per GB | $0.075 per GB | $0.075 per GB |
| Next 20TB per month | $0.10 per GB | $0.075 per GB | $0.075 per GB |
| Over 50TB per month | $0.05 per GB | $0.05 per GB | $0.05 per GB |

Invalidation requests

There is no additional charge for the first 1,000 paths requested for invalidation each month. Thereafter, you pay $0.005 per path requested for invalidation. Note: A path listed in your invalidation request represents the URL (or multiple URLs, if the path contains a wildcard character) of the object(s) you want to invalidate from the CloudFront cache. For more information about invalidation, see Invalidating Objects in the Amazon CloudFront Developer Guide.

Real-time log requests

Real-time logs are charged based on the number of log lines that are generated. You pay $0.01 for every 1,000,000 log lines that CloudFront publishes to your log destination.

Field-level encryption requests

Field-level encryption is charged based on the number of requests that need the additional encryption. You pay $0.02 for every 10,000 requests that CloudFront encrypts using field-level encryption, in addition to the standard HTTPS request fee.

Dedicated IP custom SSL

You pay $600 per month for each custom SSL certificate associated with one or more CloudFront distributions using the Dedicated IP version of custom SSL certificate support. This monthly fee is pro-rated by the hour. For example, if you had your custom SSL certificate associated with at least one CloudFront distribution for just 24 hours (i.e.
one day) in the month of June, your total charge for using the custom SSL certificate feature in June will be (one day / 30 days) × $600 = $20. For other SSL options, please visit the CloudFront Custom SSL detail page.

WebSocket and gRPC pricing

Amazon CloudFront supports both WebSocket and gRPC. Both are TCP-based protocols that are useful when you need long-lived bidirectional connections between clients and servers. There is no additional charge for sending data over the WebSocket or gRPC protocols. Standard charges for using Amazon CloudFront apply.

Origin server to Amazon CloudFront (origin fetches)

Amazon CloudFront requires you to store the original, definitive version of your content in an origin server. With Amazon CloudFront, you can use an AWS service (e.g., Amazon S3, Amazon EC2, Elastic Load Balancing) or your own server as the origin server. You are responsible for the separate fees you accrue for your origin server. If you are using an AWS service as the origin for your content, data transferred from the origin to edge locations (Amazon CloudFront origin fetches) is free of charge. This applies to data transfer from all AWS Regions to all global CloudFront edge locations. Data transfer out from AWS services for all non-origin-fetch traffic (such as multi-CDN traffic) to CloudFront will incur the respective regional data transfer out charges. Pricing for all AWS services is available here.

Amazon CloudFront to origin server

Data transfer out of Amazon CloudFront to your origin server, such as POST and PUT requests, will be billed at the regional data transfer out to origin rates listed in the Regional Data Transfer Out to Origin (per GB) table above. This also includes WebSocket or gRPC traffic flowing from the client to a WebSocket or gRPC server.
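Two of the per-unit rates quoted earlier compose into simple formulas: Lambda@Edge billing (per-request fee plus GB-seconds of duration) and the hourly pro-ration of the $600/month Dedicated IP custom SSL fee. A sketch, using the rates from this page; the workload numbers and the 30-day month are the same assumptions the examples on this page use:

```python
# Lambda@Edge: $0.60 per 1M requests plus $0.00005001 per GB-second of duration.
LAMBDA_REQUEST_RATE = 0.60 / 1_000_000  # USD per request
GB_SECOND_RATE = 0.00005001             # USD per GB-second

def lambda_at_edge_cost(requests: int, ms_per_request: float, memory_mb: int) -> float:
    """Monthly Lambda@Edge charge: request fee plus metered duration."""
    gb_seconds = requests * (ms_per_request / 1000) * (memory_mb / 1024)
    return round(requests * LAMBDA_REQUEST_RATE + gb_seconds * GB_SECOND_RATE, 2)

def dedicated_ip_ssl_cost(hours_associated: float, days_in_month: int = 30) -> float:
    """$600/month per certificate, pro-rated by the hour."""
    return round(600 * hours_associated / (days_in_month * 24), 2)

# 60M invocations at 10 ms each with 128 MB memory:
# $36 in requests + $3.75 in duration (the figures used in the examples below).
print(lambda_at_edge_cost(60_000_000, 10, 128))  # 39.75

# A certificate attached for 24 hours of a 30-day month, as in the SSL example above:
print(dedicated_ip_ssl_cost(24))  # 20.0
```

Note that at 128 MB, one second of execution is 0.125 GB-seconds, which is where the quoted $0.00000625125 per 128 MB-second comes from ($0.00005001 × 0.125).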
Anycast Static IPs

Amazon CloudFront supports Anycast Static IPs to provide customers with a dedicated set of static IP addresses for connecting to their CloudFront distributions globally, at a monthly fee of $3,000 per list. Standard charges for using Amazon CloudFront apply.

Discounted pricing

Free Tier

Always free, each month:

- 1 TB of data transfer out
- 10,000,000 HTTP or HTTPS requests
- 2,000,000 CloudFront Function invocations

CloudFront Savings Bundle

The CloudFront Security Savings Bundle is a flexible self-service pricing plan that helps you save up to 30% on your CloudFront bill in exchange for a monthly spend commitment for a one-year term. This savings is not limited to data delivered by CloudFront but applies to all CloudFront usage types, including CloudFront Functions and Lambda@Edge. The CloudFront Security Savings Bundle also includes free AWS Web Application Firewall (WAF) usage up to 10% of your committed amount.

Custom Pricing

Custom discounted pricing is available for customers willing to commit to a minimum of 10 TB of data transfer per month for 12 months or longer. Discounts vary based on the amount of the commitment. Interested in signing up for discounted pricing? Contact us.

Pricing example 1: Static website

In this example, you are delivering a static website for a small production workload or testing your application. You have 100 GB of data egressing out to the internet from a CloudFront cache per month and make 1,000,000 HTTPS requests when fetching content from CloudFront and delivering it to your viewers. You also use CloudFront Functions for lightweight processing of web requests, such as cache-key manipulation or URL rewrites. Assuming your account has less than 1 TB of data transfer out to the internet and fewer than 20,000,000 total HTTPS requests, DTO and HTTPS requests will be covered by the AWS Free Tier, incurring no charge.
Your CloudFront distribution uses a viewer-request and a viewer-response function on each request. This invokes two functions per request; no charge is incurred for the first 2,000,000 invocations, after which you are charged $0.10 per million invocations.

| Request | Cost Calculation | Total Cost |
| 100 GB data transfer out | 100 GB, within the 1 TB free tier | $0 |
| 1,000,000 HTTPS requests | 1,000,000 × $0.00 for the first 10,000,000 | $0 |
| 12,000,000 viewer function invocations | (12,000,000 − 2,000,000) × $0.10 per 1,000,000 invocations | $1.00 |
| Sum | Total Monthly Cost | $1.00 |

Discount: You can save up to 30% on your CloudFront bill in exchange for a set monthly minimum spend over a one-year commitment. Note: If you are using an AWS origin, data transferred from the origin to CloudFront edge locations is free of charge.

Pricing example 2: Dynamic e-commerce application

You use CloudFront real-time logs to get information about requests made to a distribution in real time. You also need to invalidate objects from the CloudFront cache when there is an update to your website content. For Mexico, data transfer out to the internet is charged at $0.085 per GB after the first TB. HTTPS requests are charged at $0.01 per 10,000 requests after the first 20,000,000. Real-time logs are charged based on the number of log lines that are generated. You pay $0.01 for every 1,000,000 log lines that CloudFront publishes to your log destination; every request generates one log line. Finally, let's assume you make a total of 2,000 invalidation requests per month for all your distributions. The first 1,000 invalidation paths that you submit per month are free. Thereafter, you will be charged $0.005 per path requested for invalidation.
| Data Type | Cost Calculation | Total Cost |
| 1 TB data transfer out | 1 TB × $0 ($0.085 per GB thereafter) | $0 |
| 10,000,000 HTTPS requests | 10,000,000 × $0 ($0.01 per 10,000 requests thereafter) | $0 |
| 10,000,000 log lines | 10,000,000 × $0.01 per 1,000,000 log lines | $0.10 |
| First 1,000 invalidation paths | 1,000 × $0 per path (first 1,000 paths free) | $0 |
| Remaining 1,000 invalidation paths | 1,000 × $0.005 per path | $5.00 |
| Sum | Total Monthly Cost | $5.10 |

Note: Data Transfer Out (DTO) charges from AWS services to CloudFront are $0/GB. This means you can put CloudFront in front of AWS origins such as Application Load Balancers (ALB), AWS Elastic Beanstalk, Amazon S3, and other AWS resources to deliver HTTP(S) objects and save on DTO costs, roughly $77 in this example.

Pricing example 3: Media streaming application

When streaming video, you use a Lambda@Edge origin-response trigger for response customization. You also use Origin Shield to reduce the load on your origins, for example when providing just-in-time packaging for live streams or on-the-fly image processing. For the USA, data transfer out to the internet is charged at $0.085 per GB after the first TB. HTTPS requests are charged at $0.01 per 10,000 requests after the first 20,000,000. Let's assume your Lambda@Edge function executed 60 million times in one month and ran for 10 ms each time. Lambda@Edge charges are calculated based on compute and requests. The monthly compute price is $0.00000625125 per 128 MB-second, and the monthly request price is $0.60 per 1 million requests. Origin Shield request pricing for origins configured in the USA is $0.0075 per 10,000 HTTPS requests. Let's assume the total number of dynamic requests going to Origin Shield is 10 percent of all your HTTPS requests: 10% × 200M = 20M.
| Data Type | Cost Calculation | Total Cost |
| 20,000 GB data transfer out | (1 TB × $0) + (19,000 GB × $0.085 per GB) | $1,615 |
| 200,000,000 HTTPS requests | (10,000,000 × $0) + (190,000,000 × $0.01 per 10,000 requests) | $190 |
| Lambda@Edge compute (60M invocations × 10 ms) | 60,000,000 × 0.01 sec × $0.00000625125 per 128 MB-second | $3.75 |
| 60,000,000 Lambda@Edge requests | 60,000,000 × $0.60 per 1,000,000 requests | $36 |
| 20,000,000 Origin Shield requests | 20,000,000 × $0.0075 per 10,000 requests | $15 |
| Sum | Total Monthly Cost | $1,859.75 |

Pricing example 4: Software-as-a-Service platform

In this example, you are serving a multi-tenant application with 1,000 tenants and use a Lambda@Edge origin-response trigger for response customization. For the USA, data transfer out to the internet is charged at $0.085 per GB after the first TB. HTTPS requests are charged at $0.01 per 10,000 requests after the first 20,000,000. Let's assume your Lambda@Edge function executed 60 million times in one month and ran for 10 ms each time. Lambda@Edge charges are calculated based on compute and requests. The monthly compute price is $0.00000625125 per 128 MB-second, and the monthly request price is $0.60 per 1 million requests. For 1,000 tenants, you are charged a $20 subscription fee for the first 200 Distribution Tenants and then $0.10 per Distribution Tenant monthly.
| Data Type | Cost Calculation | Total Cost |
| 20,000 GB data transfer out | (1 TB × $0) + (19,000 GB × $0.085 per GB) | $1,615 |
| 200,000,000 HTTPS requests | (20,000,000 × $0) + (180,000,000 × $0.01 per 10,000 requests) | $180 |
| Lambda@Edge compute (60M invocations × 10 ms) | 60,000,000 × 0.01 sec × $0.00000625125 per 128 MB-second | $3.75 |
| 60,000,000 Lambda@Edge requests | 60,000,000 × $0.60 per 1,000,000 requests | $36 |
| 1,000 Distribution Tenants | $20 + (200 Distribution Tenants × $0) + (800 × $0.10 per Distribution Tenant) | $100 |
| Sum | Total Monthly Cost | $1,934.75 |
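Several line items in the worked examples can be cross-checked in code. This sketch reproduces the Distribution Tenant, invalidation, and real-time log charges; the tier interpretation of the tenant fee (first 10 free, one $20 subscription covers tenants up to 200, $0.10 each beyond) follows pricing example 4 and is an assumption about how the tiers combine:

```python
# Cross-checks for the pricing examples above (helper names are illustrative).

def tenant_cost(tenants: int) -> float:
    """SaaS Manager Distribution Tenant fee per month."""
    if tenants <= 10:
        return 0.0               # first 10 tenants free
    cost = 20.0                  # subscription fee covers tenants 11-200
    if tenants > 200:
        cost += (tenants - 200) * 0.10
    return round(cost, 2)

def invalidation_cost(paths: int) -> float:
    """$0.005 per invalidation path after the first 1,000 free paths."""
    return round(max(0, paths - 1_000) * 0.005, 2)

def realtime_log_cost(log_lines: int) -> float:
    """$0.01 per 1,000,000 real-time log lines."""
    return round(log_lines * 0.01 / 1_000_000, 2)

print(tenant_cost(1_000))             # 100.0 (matches pricing example 4)
print(invalidation_cost(2_000))       # 5.0   (matches pricing example 2)
print(realtime_log_cost(10_000_000))  # 0.1   (matches pricing example 2)
```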
TIME for Kids | Privacy Policy

This Privacy Policy was last updated on June 8, 2021.

TABLE OF CONTENTS

- Note to Parents and Teachers about Our Data Collection from Kids
- The Information We Collect
- How We Use the Information We Collect
- Disclosure of Information
- Third Party Websites
- Your California Privacy Rights: Notice to California Customers
- Special Information for Nevada Residents
- Your Opt-Out Choices
- Security of Personal Information
- Retention of Personal Information
- Changes to this Privacy Policy
- How to Contact Us

Your privacy is important to us.
This Privacy Policy describes how Time for Kids ("TFK," "we," or "our") collects, uses, and discloses information when you interact with us, including via our website ( https://www.timeforkids.com/ ), mobile apps, email newsletters, online subscriptions, other product offerings, and any other services that display this Privacy Policy (collectively referred to as the "Services"). This Privacy Policy also applies to any offline data collection, such as the contact information you provide to create or update your print subscriptions.

In this Policy, unless otherwise indicated, "personal information" means information that identifies, relates to, describes, is reasonably capable of being associated with, or could reasonably be linked, directly or indirectly, with a particular individual. Individuals from different countries or jurisdictions may have different rights with respect to their personal information. In particular, the European Union Privacy Policy applies to individuals in the European Union, United Kingdom, Switzerland, Norway, Liechtenstein, Iceland, Australia, and New Zealand in lieu of this Privacy Policy. We reserve the ability to limit our response to any request to exercise your rights based on the law that is applicable to you.

Note to Parents and Teachers about Our Data Collection from Kids

If you have questions about how TFK addresses the Children's Online Privacy Protection Act ("COPPA") (or US-state equivalents) or the Family Educational Rights and Privacy Act ("FERPA"), please contact us using the contact information below. TFK is a teaching tool designed for teachers and their students, and parents and their children. TFK offers news and editorial content tailored to specific age ranges, companion materials such as quizzes and teaching aides, and the ability to take advantage of TFK's partnerships with other companies that offer educational content for children.
Parents and teachers create accounts with TFK to access these materials and may distribute them to students or children using the links that TFK provides and/or Google Classroom (teachers must have their own accounts with Google Classroom to take advantage of this integration). Parents and teachers can access their accounts here.

We have taken numerous steps to minimize our data collection:

- TFK does not collect any information that personally identifies students or children, such as a name. Nor are we capable of identifying students or children based on the information we do receive. Furthermore, the TFK website does not use third-party integrations, tracking technologies, or cookies that collect information to deliver personalized advertising.
- The TFK website uses Google Analytics to collect information about how users interact with the website. Having this information is important because it allows TFK to understand what works (or doesn't) on the website. Google Analytics collects device identifiers, and if a child visits the TFK website on their own device or is given a device by a teacher or parent (for example, to read an article assigned by a teacher), Google Analytics will collect information about that device and how the user interacts with the website. The device identifiers Google Analytics collects do not personally identify children, and TFK does not use them as "persistent identifiers" capable of tracking a user across the internet.
- TFK makes its teaching aides and other resources available to parents and teachers as downloadable files or links to Google Forms. Teachers with Google Classroom accounts have the option to send a teaching aide to their Google Classroom account. In no event does TFK collect information about a student or child from these teaching aides, Google Forms, or the Google Classroom integration. Parents and teachers should consult Google's privacy notices for more information about how Google may collect information from them.
- TFK may give parents and teachers the opportunity to use other companies' services via links on the TFK website. While TFK offers links to those services, TFK does not collect any information from them about students or children.
- The Services do not display advertising or collect information to display targeted advertising on third-party websites or apps.

The Information We Collect

We collect the following types of personal information.

- Personal Identifiers: When you sign up for an account or make a purchase, we collect your name, phone number, email address, contact address, and institutional affiliation (if you are a teacher), as well as your credit card number (through a service provider). You have the option to store this information as part of your account.
- Commercial Information: When you engage in transactions with us, we create records of Services purchased and obtain payment information.
- Internet or Other Electronic Network Activity Information: We or external parties operating on our behalf collect information about the device you use to access our Services, such as your IP address, a device ID, and information about your web browser.
- Audio, electronic, visual, thermal, olfactory, or similar information: If you contact us via phone, we may record the call. We will notify you at the beginning of the call if it is being recorded.
- Education information: We collect education information you voluntarily provide.
- Geolocation Data: We collect your IP address automatically when you use our Services, from which we or external parties operating on our behalf may be able to determine your general location.
- Information you give us about yourself or others: We collect information that you give us about yourself or others directly, including personal identifiers, such as name and other contact details; commercial information, such as payment details and order history; and audio information if you contact us via telephone for customer service purposes.
You give information to us directly when, for example, you register for the Services. You may give us personal information about other people, for example, when you fill out an order form and provide teacher names or sign up a friend for a gift subscription. The types of personal information that we may collect in this context include the recipient's name, address, e-mail address, and telephone number. Please ensure you have the other person's permission, and that you have notified them, before giving us their personal information.

Information we collect automatically: When you interact with our Services, we (and our partners who provide analytics services, i.e., Google Analytics) automatically collect certain network or other activity information, including through the use of cookies, to provide you with the Services and to analyze your use of the Services. For example, we collect IP addresses and device identifiers, including general location data derived from your IP address, and information about the webpages you view, how you move through our Services, how you reached our Services, how you interact with our social media pages, and how you interact with our email communications. We use Google Analytics for these purposes. Google Analytics uses its own cookies. These cookies are only used to improve how our Services work. You can find out more information about Google Analytics cookies here. You can prevent the use of Google Analytics relating to your use of our Services by downloading and installing the browser plugin available here. TFK does not use third parties to collect information for advertising purposes.

Please note that some of the information we request from you is required in order for you to use our Services. If you do not wish to provide such information to us, you are not obligated to, but as a result you may not be able to use the particular Service.
However, some of the information we request from you is optional. This means that you can elect not to provide it to us and you will still be able to use the Service. If not all the information we request on a form is required, we may identify the required information to you, such as by displaying an asterisk (*) next to the field where we request it.

How We Use the Information We Collect

We may use the information we collect for the following purposes:

Provide the Services:
- Deliver the Services.
- Establish and administer your account, including conducting billing and invoicing, contacting you about the expiry of your subscription, and sending you service messages about your subscription. For example, once you are subscribed, we may store your subscription information, including its start date, renewal date, pricing, publication, and any customer service contacts you may have with us.
- Authenticate access to your account. For example, you may provide us with a username and password to get access to your account.
- Perform maintenance and operations, including management of the network and devices supporting the Services and our systems.
- Provide technical support and assure the quality of customer service interactions.
- Facilitate hardware and software upgrades for devices and systems.
- Enable your participation in surveys, sweepstakes, contests, and promotions.

Communicate with You:
- Respond to your inquiries.
- Personalize communications.
- Send you service-related announcements, such as when we make changes to subscriber agreements, or to contact you about your account.
- Market our Services, such as by offering you products, programs, or services that may be of interest to you, and keeping you informed of new happenings at TFK.

Make Improvements to Our Products and Services:
- Identify and develop new products and services.
- Improve the design of our Services.
- Understand how our Services are used, including by creating measurement and analytics reports.
Investigate Theft or Other Illegal Activities and Ensure a Secure Online Environment:
- Detect the unauthorized use or abuse of the Service.
- Protect you and other subscribers from fraudulent, abusive, or unlawful use of, or subscription to, the Service.
- Protect our rights, our personnel, and our property.
- Comply with applicable law.

We and our partners (i.e., Google Analytics) use cookies, tags, pixels, web beacons, or other means of collecting information automatically from users for analytics purposes only.

Disclosure of Information

Publicly: If you choose to submit content (e.g., a "letter to the editor" or a comment or other submission on our social media pages), and you give us your consent, we may publish your name, screen name, and the information you provided to us, which will be public. TFK's website does not have community forums or other features that enable user-generated submissions, and there are no means by which children can publicly post content.

Affiliates: We may transfer information to our affiliates, including TIME USA, LLC, for internal management and administrative purposes where necessary for the performance or conclusion of our contractual obligations to you or for your benefit. We may combine information from the Services together and with other information we obtain from our business records.

Sale or Merger of Business: We may transfer to another entity or its affiliates or service providers some or all information about you in connection with, or during negotiations of, any merger, acquisition, sale of assets or any line of business, change in ownership control, or financing transaction. We cannot promise that an acquiring party or the merged entity will have the same privacy practices or treat your information the same as described in this Policy.
Service Providers: We may disclose personal information to our service providers, which are companies that process information on our behalf, without your consent, to deliver the Service and conduct business activities. We require our service providers to treat the information we disclose to them, or that they collect on our behalf, as confidential and to use the information only for the purposes for which they have been engaged. We engage service providers to assist us with the following activities:

- Managing subscriptions, fulfilling orders, and delivering communications.
- Billing and payment processing, including billing and collection providers, such as payment processors and organizations that assist us in assessing your credit and payment status.
- Auditing and accounting.
- Professional consulting, such as law firms and firms that supply project-based resources and assistance.
- Analyzing web traffic. For more information, see our disclosure about our use of Google Analytics, above. We do not collect information from children for advertising purposes, and the device information we collect when a child uses our website does not personally identify the child.
- Securing our Services, such as entities that assist with security incident verification and response, service notifications, and fraud prevention.
- Managing information technology, such as entities that assist with website design, hosting, and maintenance, data and software storage, and network operation.

There are limited circumstances in which a service provider collects data directly from you, where its privacy policies may also apply.

Authorities: We will disclose information we maintain when required to do so by law, for example, in response to a court order or a subpoena. We also may disclose such information in response to a law enforcement agency's request.
Third Party Websites: Our Services provide links to third party websites or offerings whose data privacy practices may differ from those of TFK. The inclusion of any link does not imply our endorsement of any other company, its websites, or its products and/or services. These linked websites or offerings have separate and independent privacy policies, which we recommend you read carefully. We have no control over such websites or offerings and therefore have no responsibility or liability for the manner in which the organizations that operate such linked websites or offerings may collect, use, disclose, or otherwise treat your personal information. Your California Privacy Rights: Notice to California Customers. California Shine the Light: California’s “Shine the Light” law, Civil Code Section 1798.83, requires businesses that disclose personal information to third parties for those third parties’ direct marketing purposes to give California customers the ability to opt out of such disclosure. TFK does not disclose the personal information it collects on the Services to third parties for their direct marketing purposes. California Consumer Privacy Act (“CCPA”): The CCPA grants residents of California certain rights with respect to their personal information and requires us to provide such individuals with certain information, as described in this section. Your Rights: Transparency. At the time we collect personal information, you have the right to receive notice of the categories of personal information we collect, and the purposes for which those categories of personal information will be used. Access/Right to Know. You have the right to request access to personal information we collected about you and information regarding the source of that personal information, the purposes for which we collect it, and the third parties and service providers with whom we share it.
You can make this access request by going to our California Access/Deletion Page. If you are a print subscriber, you can also access and update much of the personal information we have collected about you through your account page. Deletion. You have the right to request that we erase data we have collected from you. Please note that we may have a reason to deny your deletion request or delete data in a more limited way than you anticipated, e.g., because of a legal obligation to retain it or to provide a good or service that you request. You can make this deletion request by going to our California Access/Deletion Page. Opt-Out of Sale: You have the right to request that we stop “selling” your personal information as that term is defined in the California Consumer Privacy Act. A “sale” of personal information is defined broadly: “selling, renting, releasing, disclosing, disseminating, making available, transferring, or otherwise communicating orally, in writing, or by electronic or other means, a consumer’s personal information by the business to another business or a third party for monetary or other valuable consideration.” TFK does not sell personal information it collects on the Services. Categories of personal information we collect. We collect the categories of information described above in the “Information We Collect” section for the purposes described in the “How We Use Information” section. Categories of personal information we disclose. We may disclose any of the categories of personal information listed above and use them for the above-listed purposes or for other business or operational purposes compatible with the context in which the personal information was collected. Our disclosures of personal information include disclosures to our “service providers,” which are companies that we engage for business purposes to conduct activities on our behalf.
The categories of service providers with whom we share information and the services they provide are described above. Categories of personal information we “sell”. We do not sell personal information we collect on the Services. You can also make a CCPA access or deletion request by calling the following toll-free number: +1 (888) 914-9661 PIN 430210. Special Information for Nevada Residents: Residents of the State of Nevada have the right to opt out of the sale of certain pieces of their information to other companies who will sell or license their information to others. We do not sell personal information we collect on the Services. If you are a Nevada resident and would like more information about our data sharing practices, please contact us using the information below. Your Opt-Out Choices. Verification Procedures: We must verify your identity for everyone’s protection. To do so, we may require you to provide us with verification information prior to accessing any records containing personal information about you. We do this by:
- Asking you to provide personal identifiers we can match against information we may have collected from you previously, and confirming your request using the email or telephone number stated in the request; or
- Having you submit your request through your account page (if you are a subscriber), which will automatically verify your identity and will result in faster processing of your request.
If you are a California resident, you may authorize another individual or a business registered with the California Secretary of State, called an authorized agent, to make requests on your behalf. If we receive a request from an authorized agent, we require that the authorized agent verify its identity with us, that you verify your identity with us, and that you or the agent provide us with proof that you authorized the agent in writing to make the request.
We will use the information you provide for verification only for the purpose of verification. We may have a reason under the law why we do not have to respond to your request, or why we respond to it in a more limited way than you anticipated. If we do, we will explain that to you in our response. Your Account and Payment Information: If you are a print subscriber, please visit the Manage Account page to update your contact information and payment method. Marketing and Other Communications: If you wish to unsubscribe from receiving future email marketing from TFK, please use the unsubscribe link that appears at the bottom of our marketing emails. If you are a newsletter subscriber, you may unsubscribe using the unsubscribe options in the newsletter emails. If you prefer not to receive traditional mail or other offline promotions from TFK, please email us here. Please note that even if you opt out of receiving marketing communications from us, we may continue to send you transactional communications about your subscription or orders. Cookie Management and “Do Not Track”: You can control cookies using your web browser’s settings. If you delete your cookies or set your browser to decline cookies, some features of the Services may not be available or may not work as designed. Google Analytics uses its own cookies. These cookies are only used to improve how our Services work. You can find more information about Google Analytics cookies here. You can prevent the use of Google Analytics relating to your use of our Services by downloading and installing the browser plugin available here. If you delete your cookies, you may also delete your opt-out preferences. Your browser or device may include “Do Not Track” functionality. At this time, TFK does not respond to browser “Do Not Track” signals.
Security of Personal Information: We have put in place administrative, technical, and physical safeguards to help prevent unauthorized access, maintain data security, and correctly use the information we collect. No system can be completely secure, however, and we do not guarantee that unauthorized disclosures and access will not happen. Retention of Personal Information: We may retain information, including personal information we collect about you, for different periods of time consistent with the purposes of processing we describe in How We Use the Information We Collect and with whether we are subject to a legal obligation to retain certain records. Changes to this Privacy Policy: This Privacy Policy may be updated periodically to reflect changes to our information practices. The revised Privacy Policy will be posted on this website with the date of the last modification, and we will notify you of the changes if required by applicable law. We will treat your information in accordance with the privacy policy in place at the time of collection of such information, or as you otherwise indicate your preferences. We encourage you to check whenever you use our Services to see if the policy has been updated. How to Contact Us: If you have any questions or concerns about this Privacy Policy, our data practices, or our compliance with applicable law, please contact us by writing to us at privacy@time.com, or at:
Time USA, LLC
Attention: Privacy Officer
3 Bryant Park
New York, NY 10036
Toll-free: 800-843-8463
If you are a print subscriber located in the U.S. and have questions about your subscription, please contact us by writing to us at tfkcustserv@cdsfulfillment.com or at:
Time Customer Service
Attention: Consumer Affairs
3000 University Center Drive
Tampa, Florida 33612-6408
Toll-free: 1-877-604-8017
© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
http://eigenhombre.com/marginalia-hacks.html
Marginalia Hacks

John Jacobsen

This is the sixth and final post in a series on my Clojure workflow. In my last post, I introduced Marginalia as a tool for (semi-)literate programming. Here are some tricks I’ve used to make Marginalia work for me — in particular, to support a style of working with investigatory “notebooks.” As always, your mileage may vary.

Problem: I want to reorder my code snippets to allow for more natural exposition.

Solution: As discussed, Marginalia does not provide reordering or interpolation of source code in the same way that Knuth’s WEB does. By default, lein marg processes all the Clojure source code in your project except in the test directory, presenting namespaces in alphabetical order. The problem is exacerbated by the one-pass Clojure compiler, which expects everything to be declared before it is used. I have been able to work around this to my satisfaction by specifying directories and/or files on the command line in the order I want them to appear. For example, if I wanted both src and test files in my output, and if I wanted src/myproject/core.clj to appear first, I would say:

lein marg src/myproject/core.clj src test

If I wanted to reorder forms within core.clj, I could also just use Clojure’s declare macro to forward-declare the vars at the top of the file. This is far from the power of Knuth’s WEB, but it’s been good enough for me.

Problem: I want to see my Marginalia output as soon as I save my source code.

Solution: It’s nice to have quick feedback, so I use conttest to run lein marg, plus some AppleScript (or equivalent) to reload the output in the browser.
Example:

conttest 'lein marg && \
  osascript ~/bin/reload-browser.scpt \
  file:///path/to/project/docs/uberdoc.html'

The AppleScript ~/bin/reload-browser.scpt is fairly simple, though you may have to adjust it to suit your browser of choice:

on run argv
  tell application "Google Chrome"
    set URL of active tab of first window to item 1 of argv
  end tell
end run

Though it does take a few seconds, Marginalia not being a speed demon, one can get pretty quick visual feedback using this approach.

Problem: I want to show expressions and the results of their evaluation together, à la iPython Notebook, Mathematica, Maple, Gorilla REPL, or Session.

Solution: For Clojure evaluation, I use the cider-eval-last-sexp-and-append trick I described in my previous post on customizing Emacs. This results in something like the following, in Emacs and in Marginalia: Here I added a single quote (') to keep the resulting form from throwing an exception when the buffer is recompiled. This is not quite iPython Notebook, but I find it gets me surprisingly far. And the source code remains completely usable as a whole, standalone program. This means I can combine notebook-style investigations directly into a working project without worrying how, eventually, to get the annotated code into production.

Problem: I want nice-looking math formulae in my “notebook.”

Solution: Use the MathJax JavaScript library. As shown in the images from the previous post, math can be typeset quite nicely with MathJax. This can either be imported directly from a CDN, as follows:

;; <script type="text/javascript"
;;   src="http://cdn.mathjax.org/mathjax/latest/MathJax.js?config=TeX-AMS-MML_HTMLorMML">
;; </script>

(this being done directly in the Clojure source file, comments being rendered as Markdown, and therefore HTML). Or, copy down the MathJax JavaScript source and use the -j argument to lein marg. It took me a little time to figure out how to properly escape the TeX for inline math formulae.
The usual pattern in HTML is to use \( and \), as follows: \(r^2 = x^2 + y^2\) yielding \(r^2 = x^2 + y^2\) inline. For this to work correctly, Marginalia makes you add another \: \\(r^2 = x^2 + y^2\\) For inset math formulae, the required $$ is unchanged: $$r^2 = x^2 + y^2$$ Which gives: $$r^2 = x^2 + y^2.$$

Problem: I want to show graphs along with my text, code, and mathematics.

Solution: Use a JavaScript plotting library, and a little Clojure to prepare the data. This last hack is perhaps the most fun and the most “hacky” of the bunch. One of the best features of notebook solutions like iPython Notebook is the ability to show graphs inline with the code that generates them. This is not really in the wheelhouse of Marginalia, which was meant as a static documentation tool, but since we can incorporate JavaScript (as seen above, for mathematics), we can leverage existing plotting libraries. I use i3d3, an open-source JavaScript plotting library built on top of d3.js. The only obvious difficulty is how to get the data points into the browser for JavaScript to plot. For this, we need to do the following:

1. Using the REPL, capture the Clojure data, format it as JavaScript, and write it to a disk file in the project, local.js.
2. Load the resulting local.js, as well as any other needed libraries (in my case, d3, i3d3, and underscore.js) as part of the Marginalia command.

The Clojure for Step 1 is shown in this gist. The i3d3 function, evaluated in the REPL, does the work of preparing the data on disk. The intermediate JavaScript file looks something like this:

// BEGIN DIV plot2
i3d3.plot({"ylabel": "Entries",
           "xlabel": "Priorities",
           "size": [700, 250],
           "data": [{"type": "bars",
                     "bins": [93991, 103924, 3396],
                     "color": "grey",
                     "range": [0, 4]}],
           "div": "plot2"});
// END DIV plot2

(Multiple DIVs are supported in a single file by changing the DIV ID for each i3d3 function call in the REPL.)
The command to continuously run Marginalia (Step 2) is:

conttest "lein marg src/liveana/core.clj \
  -c style.css \
  -j 'd3.v3.min.js;underscore-min.js;i3d3.js;local.js' \
  && osascript ~/bin/reload-browser.scpt \
  file://path/to/docs/uberdoc.html"

Here I have put the JavaScript libraries in the docs/ directory in advance; also, since i3d3 benefits from a style sheet, that is prepared and included in the Marginalia shell command as well. Here’s an example plot, from this notebook:

I told you it was hacky, but give the example a whirl anyways. Since i3d3 supports panning and zooming, that comes for free! Thanks to John Kelley / WIPAC for permission to show this work in this post. As tools like Gorilla REPL and Session become more popular and powerful, I may discard this way of injecting graphs into “literate” programs. But I did want to see how far I could push Marginalia as a Clojure-based substitute for iPython Notebook, and found this approach surprisingly powerful. I might package it into something a bit more off-the-shelf if anyone else shows interest.

This concludes the series of posts on Clojure workflows—thanks to any of you who made it this far! The Clojure tooling landscape is constantly shifting, and I continue to learn new tricks, so things may look different a year from now. In the meantime, perhaps some people will find something helpful here. Happy hacking!

about | all posts © 2016 John Jacobsen. Created with unmark. CSS by Tufte-CSS.
2026-01-13T09:30:39
https://llvm.org/doxygen/classOutputBuffer.html#a0346c2f31ef3c3bc8c8d9ab98a18f077
LLVM: OutputBuffer Class Reference LLVM  22.0.0git Public Member Functions | Public Attributes | List of all members OutputBuffer Class Reference #include " llvm/Demangle/Utility.h " Public Member Functions   OutputBuffer ( char *StartBuf, size_t Size )   OutputBuffer ( char *StartBuf, size_t *SizePtr)   OutputBuffer ()=default   OutputBuffer ( const OutputBuffer &)=delete OutputBuffer &  operator= ( const OutputBuffer &)=delete virtual  ~OutputBuffer ()=default   operator std::string_view () const virtual void  printLeft ( const Node & N )   Called by the demangler when printing the demangle tree. virtual void  printRight ( const Node & N ) virtual void  notifyInsertion (size_t, size_t)   Called when we write to this object anywhere other than the end. virtual void  notifyDeletion (size_t, size_t)   Called when we make the CurrentPosition of this object smaller. bool   isInParensInTemplateArgs () const   Returns true if we're currently between a '(' and ')' when printing template args. bool   isInsideTemplateArgs () const   Returns true if we're printing template args. 
void  printOpen ( char Open='(') void  printClose ( char Close=')') OutputBuffer &  operator+= (std::string_view R) OutputBuffer &  operator+= ( char C ) OutputBuffer &  prepend (std::string_view R) OutputBuffer &  operator<< (std::string_view R) OutputBuffer &  operator<< ( char C ) OutputBuffer &  operator<< (long long N ) OutputBuffer &  operator<< ( unsigned long long N ) OutputBuffer &  operator<< (long N ) OutputBuffer &  operator<< ( unsigned long N ) OutputBuffer &  operator<< (int N ) OutputBuffer &  operator<< ( unsigned int N ) void  insert (size_t Pos, const char *S, size_t N ) size_t  getCurrentPosition () const void  setCurrentPosition (size_t NewPos) char   back () const bool   empty () const char *  getBuffer () char *  getBufferEnd () size_t  getBufferCapacity () const Public Attributes unsigned   CurrentPackIndex = std::numeric_limits< unsigned >::max()   If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing. unsigned   CurrentPackMax = std::numeric_limits< unsigned >::max() struct {      unsigned     ParenDepth = 0    The depth of '(' and ')' inside the currently printed template arguments. More...     bool     InsideTemplate = false    True if we're currently printing a template argument. More... }  TemplateTracker Detailed Description Definition at line 34 of file Utility.h . Constructor & Destructor Documentation ◆  OutputBuffer() [1/4] OutputBuffer::OutputBuffer ( char * StartBuf , size_t Size  ) inline Definition at line 75 of file Utility.h . References Size . Referenced by operator+=() , operator+=() , operator<<() , operator<<() , operator<<() , operator<<() , operator<<() , operator<<() , operator<<() , operator<<() , operator=() , OutputBuffer() , OutputBuffer() , and prepend() . ◆  OutputBuffer() [2/4] OutputBuffer::OutputBuffer ( char * StartBuf , size_t * SizePtr  ) inline Definition at line 77 of file Utility.h . References OutputBuffer() . 
◆  OutputBuffer() [3/4] OutputBuffer::OutputBuffer ( ) default ◆  OutputBuffer() [4/4] OutputBuffer::OutputBuffer ( const OutputBuffer & ) delete References OutputBuffer() . ◆  ~OutputBuffer() virtual OutputBuffer::~OutputBuffer ( ) virtual default Member Function Documentation ◆  back() char OutputBuffer::back ( ) const inline Definition at line 213 of file Utility.h . References DEMANGLE_ASSERT . ◆  empty() bool OutputBuffer::empty ( ) const inline Definition at line 218 of file Utility.h . ◆  getBuffer() char * OutputBuffer::getBuffer ( ) inline Definition at line 220 of file Utility.h . Referenced by llvm::dlangDemangle() , removeNullBytes() , and llvm::ThinLTOCodeGenerator::writeGeneratedObject() . ◆  getBufferCapacity() size_t OutputBuffer::getBufferCapacity ( ) const inline Definition at line 222 of file Utility.h . ◆  getBufferEnd() char * OutputBuffer::getBufferEnd ( ) inline Definition at line 221 of file Utility.h . ◆  getCurrentPosition() size_t OutputBuffer::getCurrentPosition ( ) const inline Definition at line 207 of file Utility.h . Referenced by decodePunycode() , llvm::dlangDemangle() , and removeNullBytes() . ◆  insert() void OutputBuffer::insert ( size_t Pos , const char * S , size_t N  ) inline Definition at line 194 of file Utility.h . References DEMANGLE_ASSERT , N , and notifyInsertion() . Referenced by decodePunycode() . ◆  isInParensInTemplateArgs() bool OutputBuffer::isInParensInTemplateArgs ( ) const inline Returns true if we're currently between a '(' and ')' when printing template args. Definition at line 118 of file Utility.h . References TemplateTracker . ◆  isInsideTemplateArgs() bool OutputBuffer::isInsideTemplateArgs ( ) const inline Returns true if we're printing template args. Definition at line 123 of file Utility.h . References TemplateTracker . Referenced by printClose() , and printOpen() . 
◆  notifyDeletion() virtual void OutputBuffer::notifyDeletion ( size_t , size_t  ) inline virtual Called when we make the CurrentPosition of this object smaller. Definition at line 100 of file Utility.h . Referenced by setCurrentPosition() . ◆  notifyInsertion() virtual void OutputBuffer::notifyInsertion ( size_t , size_t  ) inline virtual Called when we write to this object anywhere other than the end. Definition at line 97 of file Utility.h . Referenced by insert() , and prepend() . ◆  operator std::string_view() OutputBuffer::operator std::string_view ( ) const inline Definition at line 86 of file Utility.h . ◆  operator+=() [1/2] OutputBuffer & OutputBuffer::operator+= ( char C ) inline Definition at line 145 of file Utility.h . References C() , and OutputBuffer() . ◆  operator+=() [2/2] OutputBuffer & OutputBuffer::operator+= ( std::string_view R ) inline Definition at line 136 of file Utility.h . References OutputBuffer() , and Size . ◆  operator<<() [1/8] OutputBuffer & OutputBuffer::operator<< ( char C ) inline Definition at line 168 of file Utility.h . References C() , and OutputBuffer() . ◆  operator<<() [2/8] OutputBuffer & OutputBuffer::operator<< ( int N ) inline Definition at line 186 of file Utility.h . References N , and OutputBuffer() . ◆  operator<<() [3/8] OutputBuffer & OutputBuffer::operator<< ( long long N ) inline Definition at line 170 of file Utility.h . References N , and OutputBuffer() . ◆  operator<<() [4/8] OutputBuffer & OutputBuffer::operator<< ( long N ) inline Definition at line 178 of file Utility.h . References N , and OutputBuffer() . ◆  operator<<() [5/8] OutputBuffer & OutputBuffer::operator<< ( std::string_view R ) inline Definition at line 166 of file Utility.h . References OutputBuffer() . ◆  operator<<() [6/8] OutputBuffer & OutputBuffer::operator<< ( unsigned int N ) inline Definition at line 190 of file Utility.h . References N , and OutputBuffer() . 
◆  operator<<() [7/8] OutputBuffer & OutputBuffer::operator<< ( unsigned long long N ) inline Definition at line 174 of file Utility.h . References N , and OutputBuffer() . ◆  operator<<() [8/8] OutputBuffer & OutputBuffer::operator<< ( unsigned long N ) inline Definition at line 182 of file Utility.h . References N , and OutputBuffer() . ◆  operator=() OutputBuffer & OutputBuffer::operator= ( const OutputBuffer & ) delete References OutputBuffer() . ◆  prepend() OutputBuffer & OutputBuffer::prepend ( std::string_view R ) inline Definition at line 151 of file Utility.h . References notifyInsertion() , OutputBuffer() , and Size . ◆  printClose() void OutputBuffer::printClose ( char Close = ')' ) inline Definition at line 130 of file Utility.h . References isInsideTemplateArgs() , and TemplateTracker . ◆  printLeft() void OutputBuffer::printLeft ( const Node & N ) inline virtual Called by the demangler when printing the demangle tree. By default calls into Node::print {Left|Right} but can be overriden by clients to track additional state when printing the demangled name. Definition at line 6202 of file ItaniumDemangle.h . References N . ◆  printOpen() void OutputBuffer::printOpen ( char Open = '(' ) inline Definition at line 125 of file Utility.h . References isInsideTemplateArgs() , and TemplateTracker . ◆  printRight() void OutputBuffer::printRight ( const Node & N ) inline virtual Definition at line 6204 of file ItaniumDemangle.h . References N . ◆  setCurrentPosition() void OutputBuffer::setCurrentPosition ( size_t NewPos ) inline Definition at line 208 of file Utility.h . References notifyDeletion() . Referenced by llvm::dlangDemangle() , and removeNullBytes() . Member Data Documentation ◆  CurrentPackIndex unsigned OutputBuffer::CurrentPackIndex = std::numeric_limits< unsigned >::max() If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing. Definition at line 104 of file Utility.h . 
◆  CurrentPackMax unsigned OutputBuffer::CurrentPackMax = std::numeric_limits< unsigned >::max() Definition at line 105 of file Utility.h . ◆  InsideTemplate bool OutputBuffer::InsideTemplate = false True if we're currently printing a template argument. Definition at line 113 of file Utility.h . ◆  ParenDepth unsigned OutputBuffer::ParenDepth = 0 The depth of '(' and ')' inside the currently printed template arguments. Definition at line 110 of file Utility.h . ◆  [struct] struct { ... } OutputBuffer::TemplateTracker Referenced by isInParensInTemplateArgs() , isInsideTemplateArgs() , printClose() , and printOpen() . The documentation for this class was generated from the following files: include/llvm/Demangle/ Utility.h include/llvm/Demangle/ ItaniumDemangle.h Generated for LLVM by Doxygen 1.14.0
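The interface documented above suggests a simple usage pattern: a growable character buffer with chainable operator<< overloads that a demangler appends to while printing a demangle tree, plus prepend/insert for text that must land before already-emitted output. As a rough, self-contained illustration only — this is NOT LLVM's implementation, which manages a raw char* buffer and exposes the notification hooks and template-tracking state shown above — a std::string-backed sketch of that pattern:

```cpp
#include <string>
#include <string_view>

// MiniOutputBuffer: hypothetical stand-in mimicking the append/stream
// shape of llvm::OutputBuffer's public interface for illustration.
class MiniOutputBuffer {
  std::string Buf; // backing storage (LLVM instead grows a raw char*)

public:
  // Chainable appends, mirroring the operator<< overload set above.
  MiniOutputBuffer &operator<<(std::string_view R) {
    Buf.append(R);
    return *this;
  }
  MiniOutputBuffer &operator<<(char C) {
    Buf.push_back(C);
    return *this;
  }
  MiniOutputBuffer &operator<<(long long N) {
    Buf.append(std::to_string(N));
    return *this;
  }

  // Write before the start of the current output (e.g., a qualifier
  // discovered after its qualified name was already printed).
  MiniOutputBuffer &prepend(std::string_view R) {
    Buf.insert(0, R);
    return *this;
  }

  size_t getCurrentPosition() const { return Buf.size(); }
  char back() const { return Buf.empty() ? '\0' : Buf.back(); }
  bool empty() const { return Buf.empty(); }
  operator std::string_view() const { return Buf; }
};
```

The chaining style is what lets demangler printing code read as a single expression, e.g. `OB << "operator" << '<' << '<';`, with `prepend` covering the cases where output order does not match discovery order.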
2026-01-13T09:30:39
https://wiki.php.net/rfc/bcrypt_cost_2023?do=index
PHP: Sitemap Login Register You are here: start › rfc › bcrypt_cost_2023 rfc:bcrypt_cost_2023 Sitemap This is a sitemap over all available pages ordered by namespaces . adopt-code-of-conduct doc gsoc ideas indication_of_interest internals issuetracker licenses notrfc p pear pecl php-gtk playground pplusplus qa release rfc analysis apxs-loadmodule closures counterargument datetime_and_daylight_saving_time default_expression dvcs fpm functional-elements peclversioning php8 property-hooks propertygetsetsyntax-as-implemented releaseprocess remove_zend_api session-oo shortsyntaxforarrays socketactivation spl-improvements splclassloader strict_operators string-size_t true_async voting weakreferences 2d_matrix_operations 64bit-integer-type abolish-narrow-margins abolish-short-votes abstract_final_class abstract_syntax_tree abstract_trait_method_validation access_scope_from_magic_accessors add_bcdivmod_to_bcmath add_str_begin_and_end_functions add_str_starts_with_and_ends_with_functions add_validate_functions_to_filter add_values_method_to_backed_enum add-cms-support add-sha256-function adding_bcround_bcfloor_bcceil_to_bcmath additional_soft_reservations_for_php7 additional-context-in-pcntl-signal-handler additional-splat-usage adopt_pie_deprecate_pecl adopt-code-of-conduct adts aliases_by_reflection allow_casting_closures_into_single-method_interface_implementations allow_int_args_to_bcmath_function allow_multiple_simultaneous_syslog_connections allow_null allow_url_include allow-abstract-function-override allow-closures-to-declare-interfaces-they-implement allow-constant-override-consistently allow-constant-override allow-void-variance alpanumeric_decrement alternative_callback_syntax alternative-closure-use-syntax altmbstring always_enable_json annotations_v2 annotations-in-docblock annotations anonymous_catch anonymous_classes_v2 anonymous_classes any_all_on_iterable_straw_poll_namespace any_all_on_iterable_straw_poll any_all_on_iterable any_and_on_iterable 
apache_tail_request apprise_on_invalid_arithmetic_operands apxs-loadmodule arbitrary_expression_interpolation arbitrary_static_variable_initializers arbitrary_string_interpolation argon2_password_hash_enhancements argon2_password_hash argument_unpacking arithmetic_operator_type_checks array_change_keys array_column_results_grouping array_column array_count_handlers array_delete array_find array_first_last array_group array_key_first_last_index array_key_first_last array_part array_reindex array_unpacking_string_keys array-sort-return-array array-to-string arrayiterator-improvements arrayof arrow_function_preference arrow_functions_v2 arrow_functions assert-string-eval-cleanup assignment-overloading ast_based_parsing_compilation_process asymmetric-visibility-v2 asymmetric-visibility async_signals attribute_amendments attribute-target-reflector attributes_v2 attributes-on-constants attributes auto-capture-closure auto-capture-lambda auto-implement_stringable_for_string_backed_enums autoboxing autodefine autofunc autoload_classmap autoload_include automatic_csrf_protection automatic_get_set_methods automatic_property_initialization autovivification_false backslashnamespaces backwards_compatibility bare_name_array_dereference bare_name_array_literal bare_name_arrays base_convert_improvements base-convert basic_scalar_types batch_use_declarations bcrypt_cost_2023 better_benchmarks better_type_names_for_int64 bigint binary_string_comparison binary_string_deprecation binnotation4ints blake3_support blake3 boxingandunboxing build-openssl-by-default builtincrypt builtinwebserver cachediterable_straw_poll cachediterable callable-interfaces callable-types callable callableconstructors calls_in_constant_expressions_poll calls_in_constant_expressions calltimebyref case_insensitive_constant_deprecation case-sensitivity catchable-call-to-member-of-non-object chaining_comparison change_required_votes_to_two_thirds change_the_edge_case_of_round 
change-terminology-to-allowlist-and-blocklist change-terminology-to-excludelist changes_to_include_and_require check_and_set check-operator checkdnsrr-default-type chips clamp_v2 clamp class_and_interface_name_types class_casting_to_scalar class_const_visibility class_name_literal_on_object class_name_scalars class_properties_initialization class-like_primitive_types class-naming-acronyms class-naming clear-process cli_process_title cli_server_http2 cli-strict clone_with_v2 clone_with closure_apply closure_self_reference closurefromcallable closures_in_const_expr closures code_free_constructor code_optimizations coercive_sth collections combined-comparison-operator compact-object-property-assignment compact comparable comparator_interface complete_callstatc_magic comprehensions concatenation_precedence conditional_break_continue_return consistent_callables consistent_function_names consistent_type_errors consistent-names consolidate-coding-standard-policy-document const_scalar_expressions const_scalar_expressions2 const_scalar_exprs constant_redefinition constants_in_traits constdereference constructor_promotion constructor_return_type constructor-promotion container-offset-behaviour context_sensitive_lexer context-managers continue_ob continue_on_switch_deprecation convert_numeric_keys_in_object_array_casts cookie_max-age core-autoloading core-function-exceptions correctly_name_the_rounding_mode_and_make_it_an_enum counting_non_countables covariant-returns-and-contravariant-parameters create-split-alias-to-explode crypt_function_salt csprng_exceptions csrandombytes curl_http2_push curl_oop_v2 curl_setopt_strict_types curl_share_persistence_improvement curl_share_persistence curl_user_agent curl-file-upload curl-oop curl-url-api curl-wrappers-removal-rfc currying custom_object_serialization customfactories cyclic-replace data_encoding_api data-classes dataclass date_improvements date.timezone_warning_removal datetime_and_daylight_saving_time datetime_tostring 
datetime-exceptions datetime-tostring dbc dbc2 debug_backtrace_depth debug-info debugging_pdo_prepared_statement_emulation_v2 debugging_pdo_prepared_statement_emulation debugoptions declare_vars decode_html dedicated_stream_bucket default_ctor default_encoding default_expression default-session-strict-mode define-negative-execution-time delayedtargetvalidation_attribute deprecate_curly_braces_array_access deprecate_dollar_brace_string_interpolation deprecate_dynamic_properties deprecate_functions_with_overloaded_signatures deprecate_ini_set_get_aliases deprecate_mb_ereg_replace_eval_option deprecate_mcrypt_rand deprecate_null_to_scalar_internal_arg deprecate_partially_supported_callables deprecate_pdo_null deprecate_pear_recommend_composer deprecate_php_short_tags_v2 deprecate_php_short_tags deprecate_ticks deprecate-and-remove-ext-interbase deprecate-and-remove-ext-wddx deprecate-and-remove-intl_idna_variant_2003 deprecate-backtick-operator-v2 deprecate-backtick-operator deprecate-bareword-strings deprecate-boolean-string-coercion deprecate-function-bool-type-juggling deprecate-fuzzy-and-null-casts deprecate-fuzzy-casts deprecate-get-post-sessions deprecate-implicitly-nullable-types deprecate-inconsistent-cast-keywords deprecate-ini-functions deprecate-json_encode-nonserializable deprecate-pear-include-composer deprecate-png-jpeg-2wbmp deprecate-uniqid deprecate-unuseful-crypt-constants deprecated_attribute deprecated_traits deprecated-modifier deprecations_php_7_1 deprecations_php_7_2 deprecations_php_7_3 deprecations_php_7_4 deprecations_php_8_0 deprecations_php_8_1 deprecations_php_8_2 deprecations_php_8_3 deprecations_php_8_4 deprecations_php_8_5 deprecations_php_8_6 deque_straw_poll deque destructuring_coalesce direct-execution-opcode directory-opaque-object disallow-multiple-constructor-calls distrust-sha1-certificates dnf_types docblockparser dom_additions_84 dom_living_standard_api domdocument_html5_parser doxygen driver-specific-pdo-param-types 
drop_32bit_support drop_sql.safe_mode drop-datetimeinterface dtrace dvcs dvcsmigration dynamic_class_constant_fetch e-user-deprecated-warning easy_userland_csprng empty_function empty_isset_exprs encapsulation engine_exceptions_for_php7 engine_exceptions engine_warnings enhanced_error_handling enum_allow_static_properties enum_v2 enum enumerations_and_adts enumerations enumset eol-oniguruma error_backtraces_v2 error_backtraces error_handler_callback_parameters_passed_by_reference error_reporting_e_notice error-formatting-for-developers error-optimizations errors_as_exceptions escaper escaping_operator exception_bt_provide_object exception_ignore_args_default_value exit-as-function expectations experimental explicit_octal_notation explicit_send_by_ref extdep extended-string-types-for-pdo extension_exceptions extension_prepend_files extensions_load_order extensionsiberia fallback-to-root-scope-deprecation fast_zpp fcallfcall fcc_in_const_expr fetch_property_in_const_expressions ffi-non-static-deprecated ffi fiber fibers file-descriptor-function filter_throw_on_failure final_anonymous_classes final_by_default_anonymous_classes final_class_const final_promotion final_properties finally first_class_callable_syntax fix_list_behavior_inconsistency fix_up_bcmath_number_class flexible_heredoc_nowdoc_indentation flexible_heredoc_nowdoc_syntaxes forbid_dynamic_scope_introspection forbid_null_this_in_methods foreach_unwrap_ref foreach_void foreach-non-scalar-keys foreachlist fpm_change_hat fpm free-json-parser friend-classes fsync_function func_get_args_by_name function_alias function_autoloading_v2 function_autoloading function_autoloading2 function_autoloading4 function_referencing function-composition functional-elements functional-interfaces functionarraydereferencing functiongetentropy gc_fn_pointer gd_image_export_import_pixels generator-delegation generator-return-expressions generators generic-arrays generics get_class_disallow_null_parameter get_debug_type 
get_declared_enums get-error-exception-handler get-random github_issues github-pr glob_streamwrapper_support global_function_parser_directive global_login gmp_number gmp-final gmp-floating-point google_groups grapheme_add_locale_for_case_insensitive grapheme_levenshtein grapheme_str_split grisu3-strtod group_use_declarations guard_statement hash_pbkdf2 hash-context.as-resource hash-functions-empty-key-warning hash.context.oop hashkey heredoc-scanner-loosening heredoc-with-double-quotes hook_improvements horizontalreuse howto http-interface http-last-response-headers ifsetor image2wbmp imagettf_deprecation immutability implement_sqlite_openblob_in_pdo implement-strrstr-for-consistency implicit_move_optimisation implicit-float-int-deprecate improve_callbacks_dom_and_xsl improve_hash_hkdf_parameter improve_hash_hkdf_pramater improve_mysqli improve_predictable_prng_random improve_unserialize_error_handling improve-openssl-random-pseudo-bytes improved_error_callback_mechanism improved-parser-error-message improved-tls-constants improved-tls-defaults in_operator include_cleanup incompat_ctx inconsistent-behaviors increment_decrement_fixes indirect-method-call-by-array-var inheritance_private_methods instance_counter instance-method-call instanceof_improvements intdiv integer_semantics integer-rounding interface-default-methods internal_constructor_behaviour internal_function_return_types internal_method_return_types internal_serialize_api intersection_types intl_ubidi intl.char intl.charset-detector intl.timezone.get-windows-id intldatetimepatterngenerator introduce_design_by_contract introduce-type-affinity invalid_strings_in_arithmetic invokable invoke_destructors_during_bailout is_json is_list is_literal is_not_instanceof is_trusted is_valid_utf8 is-countable is-representable-as-float-int isreadable-iswriteable isset_ternary isset-set-operator iterable_to_array-and-iterable_count iterable-stdclass iterable iteration-tools iterator_chaining iterator_xyz_accept_array 
jenkins jit_config_defaults jit-ir jit json_encode_decode_errors json_encode_indentation json_numeric_as_string json_preserve_fractional_part json_schema_validation json_throw_on_error json_validate jsonable jsond jsonserializable karma-ml keywords_as_identifiers kill_real kill-csv-escaping language-constructs-syntax-changes lazy-objects ldap_controls ldap_exop ldap_modify_batch lemon lexical-anon libsodium linecontrol linking_in_stream_wrappers list_assoc_unique list_default_value list_keys list_reference_assignment list-syntax-trailing-commas literal_string load-ext-by-name local_variable_types locale_independent_float_to_string locked-classes logicalshiftoperator loop_default loop_else loop_or lsb_parentself_forwarding lsp_errors magic-methods-signature magicquotes_finale magicquotes make_ctor_ret_void make_opcache_required make_round_behave_correctly_as_float make-reflection-setaccessible-no-op managinglisttraffic marking_overriden_methods marking_return_value_as_important match_blocks match_expression_v2 match_expression max_execution_wall_time mb_levenshtein mb_str_pad mb_str_split mb_trim_change_characters mb_trim mb_ucfirst mcrypt-viking-funeral minor-version-compatibility misc_variable_functions mixed_type_v2 mixed_vs_untyped_properties mixed-typehint mixin modern_compression moduleapi-inspection multibyte_char_handling multiple-catch my_rfc mysql_deprecation mysqli_bind_in_execute mysqli_default_errmode mysqli_execute_parameters mysqli_execute_query mysqli_fetch_column mysqli_quote_string mysqli_support_for_libmysql mysqlnd_localhost_override named_parameter_alias_attribute named_params namedparameters nameof namespace_prefix_visibility namespace_scoped_declares namespace_visibility namespace-importing-with-from namespace-visibility namespacecurlies namespaced_names_as_token namespaceissues namespaceref namespaceresolution namespaces_encapsulation namespaces_in_bundled_extensions namespaces-for-internal-classes namespaces-in-core namespaces 
namespaceseparator native_regular_expressions native-tls negative_array_index negative-index-support negative-string-offsets nested_classes never_for_parameter_types never-parameters-v2 new_in_initializers new_rounding_modes_to_round_function new_without_parentheses new-curl-error-functions new-output-api newinis nikita_popov no_serialize_attribute non_coercing_array_keys_in_strict_mode non_nullable_property_checks non-capturing_catches nonbreakabletraits nophptags noreturn_type normalize_inc_dec normalize-array-auto-increment-on-copy-on-write not_serializable_attribute not_serializable notice-for-non-valid-array-container null_coalesce_equal_operator null_coercion_consistency null-false-standalone-types null-propagation null-standalone-type nullable_intersection_types nullable_return_types nullable_returns nullable_typehints nullable_types nullable-casting nullable-not-nullable-cast-operator nullsafe_calls nullsafe_operator num_available_processors number_format_negative_zero number_format_separator numeric_literal_separator object_cast_magic object_cast_to_types object_keys_in_arrays object_scope_prng object-comparison object-initializer object-model-improvements object-typehint objectarrayliterals objects-can-be-falsifiable objkey octal.overload-checking ommit-double-slash-in-user-stream-wrapper-uri on_demand_name_mangling opcache.no_cache open_release_manifest openssl_aead openssl.bignum operator_functions operator_overloading_gmp operator_overrides_lite operator-overloading opt_in_dom_spec_compliance optimizerplus optin_block_scoping optional_constructor_body optional-interfaces optional-t-function override_properties p-plus-plus pack_unpack_64bit_formats pack-unpack-endianness-signed-integers-support parameter_type_casting_hints parameter-no-type-variance parse_request_body_in_json parse_str_alternative parser-extension-api partial_function_application_v2 partial_function_application partially-supported-callables-expand-deprecation-notices 
pass_scope_to_magic_accessors password_hash_spec password_hash password_registry pattern-matching pcre2-migration pdo_default_errmode pdo_disconnect pdo_driver_specific_parsers pdo_driver_specific_subclasses pdo_escape_placeholders pdo_float_type pdo-mysql-get-warning-count pdonotices pdov1 pecl_http peclversioning performanceimprovements permanent_hash_ext phar_stop_autoloading_metadata phase_out_serializable php_engine_constant php_ini_bcmath_default php_license_update php_namespace_policy php_native_interface php_technical_committee php-array-api php-namespace-in-core php6-rethink php6 php7_57_roadmap php7_foreach php7timeline php8_assertions php8 php53eol php56timeline php57 php71-crypto phpdbg phpnet-analytics phpng phpng64 phpp phpvcs pickle pipe-operator-v2 pipe-operator-v3 pipe-operator platform_requirement_declares policy-release-process-update policy-repository poll_api poll_switch_expression pow-operator precise_float_value precise_session_management preg_extract preg_replace_callback_array preload prevent_disruptions_of_conversations println private-classes-and-functions process_object_name promote_php_foundation proper-range-semantics property_accessors property_type_hints property_write_visibility property-capture property-hooks propertygetsetsyntax-alternative-typehinting-syntax propertygetsetsyntax-as-implemented propertygetsetsyntax-implementation-details propertygetsetsyntax-v1.1 propertygetsetsyntax-v1.2 propertygetsetsyntax protectedlookup protocol_type_hinting prototype_checks prototypecasting pure-intersection-types raising_zero_to_power_of_negative_number random_ext random_extension_improvement random_migration random-function-exceptions randomizer_additions range_checks_for_64_bit raw-identifiers readable_var_representation readline_interactive_shell_result_function_straw_poll readline_interactive_shell_result_function readonly_amendments readonly_and_immutable_properties readonly_classes readonly_hooks readonly_properties_v2 
readonly_properties reclassify_e_strict records redact_parameters_in_back_traces reference_reflection reflection_doccomment_annotations reflectionparameter-getclassname reflectionparameter.typehint reflectiontypeimprovements release_cycle_update release-md5-deprecation releaseprocess releaseprocessalternatives removal_of_dead_sapis_and_exts removal_of_dead_sapis removal-of-deprecated-features remove_alternative_php_tags remove_deprecated_functionality_in_php7 remove_disable_classes remove_hex_support_in_numeric_strings remove_object_auto_vivification remove_php4_constructors remove_preg_replace_eval_modifier remove_re2c_generated_files remove_utf_8_decode_encode remove_utf8_decode_and_utf8_encode remove_zend_api rename-double-colon-token renamed_parameters replace_parse_url request_response request-tempnam reserve_even_more_types_in_php_7 reserve_keywords_in_php_8 reserve_more_types_in_php_7 reserve_primitives resolve_symlinks resource_to_object_conversion resource_typehint restrict_globals_usage retry-keyword return_break_continue_expressions return_types returntypehint returntypehint2 returntypehinting review-discussion-period revisit-trailing-comma-function-args rfc_discussion_and_vote rfc_vote_abstain rfc.third-party-editing rfc.voting-threshold rfc1867-non-post ripples rng_extension rng_fixes rounding runtimecache safe_cast same-site-cookie same-site-parameter saner-array-sum-product saner-inc-dec-operators saner-numeric-strings scalar_extensions scalar_type_hinting_with_cast scalar_type_hints_v_0_1 scalar_type_hints_v5 scalar_type_hints scalar-pseudo-type script_only_include sealed_classes second_arg_to_preg_callback secure_serialization secure_unserialize secure-html-escape secure-session-options-by-default security-classification sendrecvmsg session_regenerate_id session_upload_progress session-create-id session-gc session-id-without-hashing session-lock-ini session-oo session-read_only-lazy_write session-use-strict-mode session.user.return-value 
shell_exec_result_code_param short_closures short_list_syntax short_ternary_equal_operator short-and-inner-classes short-closures short-functions short-match short-syntax-for-anonymous-function short-syntax-for-anonymous-functions short-syntax-for-anonymus-functions shortags shorter_attribute_syntax_change shorter_attribute_syntax shortsyntaxforarrays shortsyntaxforfunctions simple-annotations simplified_named_params single-expression-functions site_voting_poll size_t_and_int64_next size_t_and_int64 skipparams sleep_function_float_support sleep_without_return_array slim_post_data small_features soap_get_location socket_getaddrinfo socketactivation sodium.argon.hash soft-deprecate-sleep-wakeup sort_strict sorting_enum source_files_without_opening_tag sourcemaps spl-improvements spl-namespace splclassloader splweaktypehintingwithautoboxing spread_operator_for_array sql_injection_protection sqlite3_exceptions stable_sorting stack-frame-class stackable_error_handler static_class_constructor static_class static_constructor static_return_type static_variable_inheritance static-aviz static-classes stochastic_rounding_mode str_contains str_icontains stream_errors streamline-phar-api streammetadata streams-is-cacheable streamwrapper-factory strict_argcount strict_operators strict_return_types strict_sessions stricter_implicit_boolean_coercions string_to_number_comparison string-bitwise-shifts stringable-enums stringable strncmpnegativelen strtolower-ascii structs-v2 structs structural-typing-for-closures structured_object_notation support_object_type_in_bcmath suppressed_exceptions svnexternals switch_expression switch-expression-and-statement-improvement switch-expression switch.default.multiple sync syntax-to-capture-variables-when-declaring-anonymous-classes tagged_unions taint template tempnam-suffix-v2 tempnam-suffix ternary_associativity the_naming_convention_for_internal_functions_arguments third-party-code this_return_type this_var throw_error_in_extensions 
throw_expression throwable_string_param_max_len throwable-code-generalization throwable-interface throwable tidyexception-for-tidy timing_attack timing_safe_encoding tls_session_resumption_api tls-peer-verification tls to-array token_as_object token-get-always-tokens too_few_args tostring_exceptions trailing_comma_in_closure_use_list trailing_comma_in_parameter_list trailing_whitespace_numerics trailing-comma-function-args trailing-comma-function-calls trailing-commas-function-calls traits-with-interfaces traits traitsmodifications travis_ci treat_enum_instances_as_values trim_form_feed true_async_engine_api true_async_scope true_async true-nested-function-support true-type tsrmls-fetch-reduction typecast_array_desctructuring typechecking typecheckingparseronly typecheckingstrictandweak typecheckingstrictonly typecheckingweak typed_class_constants typed_constants typed_properties_v2 typed-aliases typed-properties-v2 typed-properties typedef typehint_array_desctructuring typehint typesafe-callable uconverter umaintained_extensions unary_null_coalescing_operator unbundle_imap_pspell_oci8 unbundle_recode unbundle_xmlprc unbunle-unmaintained-extensions-php8 undefined_property_error_promotion undefined_variable_error_promotion undeterministic_exceptions unicode_escape unicode_text_processing unified-crypto-source uniform_variable_syntax union_types_v2 union_types uniqid unserialize_warn_on_trailing_data unset_bool uri_followup url_dots url_parsing_api url-opcode-cache use_function use_global_elements use-from use-php_mt_rand use-static-function useas user_defined_operator_overloads user_defined_session_serializer userspace_operator_overloading ustring uuid var_deprecation var_info var_type var-export-array-syntax variable_syntax_tweaks variadic_empty variadics vector void_return_type void-as-null voting_who voting voting2017 voting2019 warn-resource-to-string warnings-php-8-5 wddx-deprecate-class-instance-deserialization weak_maps weakreferences weakrefs 
web-and-doc-use-not-endorsement who_can_vote working_groups working_with_substrings write_once_properties xml_option_parse_huge xmlreader_writer_streams yescrypt zend-vm-pause-api zendsignals zpp_fail_on_overflow zpp_improv zpp-conversion-rules summits systems sytems todo user usergroups vcs web wiki wiki.php.net canyouvote conferences corementorship cve doc email_etiquette_for_people_new_to_php_internals extensions-unmaintained gitstats_02_19 gitstats_09_17 gsoc ideas indication_of_interest internals issuetracker licenses pear pecl php-7.1-ideas php-gtk phpng-int phpng-upgrading phpng platforms playground qa redefine_constants_exception_strawpoll rfc-index rfc security_fixes security start summits svnmigration systems temporary_location_for_draft_documentation todo usergroups vcs voting web xfail_poll zts-improvement rfc/bcrypt_cost_2023.txt · Last modified: 2025/04/03 13:08 by 127.0.0.1
2026-01-13T09:30:39
https://vi-vn.facebook.com/r.php?next=https%3A%2F%2Fwww.facebook.com%2Fshare_channel%2F%3Ftype%3Dreshare%26link%3Dhttps%253A%252F%252Fdev.to%252Ftatyanabayramova%252Fglaucoma-awareness-month-363o%26app_id%3D966242223397117%26source_surface%3Dexternal_reshare%26display%26hashtag&amp%3Bamp%3Bamp%3Bamp%3Blocale=id_ID&amp%3Bamp%3Bamp%3Bamp%3Bdisplay=page&amp%3Bamp%3Bamp%3Bamp%3Bentry_point=login
Facebook Email or phone Password Forgot your account? Sign up You're Temporarily Blocked It looks like you were misusing this feature by going too fast. You've been temporarily blocked from using it.
2026-01-13T09:30:39
https://github.com/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb
python_for_visres/Part3/Part3_Scientific_Python.ipynb at master · gestaltrevision/python_for_visres · GitHub
gestaltrevision / python_for_visres Public · Fork 40 · Star 57 · Code · Issues 0 · Pull requests 0 · Actions · Projects 0 · Wiki · Security · Insights
2026-01-13T09:30:39
https://aws.amazon.com/de/sqs/#aws-page-content-main
Fully Managed Message Queues – Amazon Simple Queue Service – Amazon Web Services Amazon SQS Overview Features Pricing Getting Started Resources Products › Application Integration › Amazon Simple Queue Service With the AWS Free Tier, get 1 million requests free Amazon Simple Queue Service Fully managed message queues for microservices, distributed systems, and serverless applications Get started for free Amazon SQS use cases Learn how first-in-first-out (FIFO) queues help ensure that the messages you send to systems are published in the correct order. Introduction to Amazon SQS FIFO queues (2:04) Benefits of Amazon SQS Low overhead Eliminate overhead with no upfront costs and no software to manage or infrastructure to maintain. Reliability at scale Reliably deliver large volumes of data at any throughput, without losing messages or requiring other services to be available. Security Securely send sensitive data between applications and centrally manage your keys with AWS Key Management. Cost-effective scalability Scale elastically and cost-efficiently based on usage, so you don't have to worry about capacity planning and provisioning. How it works With Amazon Simple Queue Service (SQS), you can send, store, and receive messages between any number of software components, without losing messages or requiring other services to be available.
Use cases Increase application reliability and scalability Amazon SQS gives customers a simple and reliable way to decouple and connect components (microservices) using queues. Decouple microservices and process event-driven applications Separate frontend and backend systems, for example in a banking application: customers get an immediate response, while bill payments are processed in the background. Ensure work is completed cost-effectively and on time Place work in a single queue, where multiple workers in an Auto Scaling group scale up or down based on workload and latency requirements. Preserve message ordering with deduplication Process messages at scale while preserving message order, so that you can deduplicate messages. Getting started with Amazon SQS Sign in to the Amazon SQS console · Create an Amazon SQS queue · Explore Amazon SQS features
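The decoupling pattern these use cases describe — a frontend that enqueues work and returns immediately, while background workers drain the queue at their own pace — can be sketched with Python's standard-library `queue` module standing in for SQS. This is an illustration of the pattern only, not the SQS API (real applications talk to SQS through an SDK, e.g. `send_message`/`receive_message` in boto3); the payment names are made up.

```python
import queue
import threading

# A standard-library FIFO queue stands in for an SQS queue in this sketch.
work_queue = queue.Queue()
results = []

def worker():
    """Background worker: drains the queue until it sees the sentinel."""
    while True:
        msg = work_queue.get()
        if msg is None:            # sentinel value: shut the worker down
            work_queue.task_done()
            break
        results.append(f"processed {msg}")
        work_queue.task_done()

t = threading.Thread(target=worker)
t.start()

# The "frontend" returns as soon as each message is queued; it does not
# wait for the payment to be processed.
for payment in ["bill-1", "bill-2", "bill-3"]:
    work_queue.put(payment)

work_queue.put(None)   # tell the worker to stop
work_queue.join()      # block until every queued message was handled
t.join()
print(results)         # messages come out in FIFO order
```

Because a single worker consumes from one FIFO queue, the messages are processed in the order they were sent — the same ordering guarantee the FIFO-queue use case above describes; scaling out to multiple workers trades that strict ordering for throughput, which is the trade-off between SQS standard and FIFO queues.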
2026-01-13T09:30:39
https://github.com/php/php-src/pull/105
Create hash_pbkdf2 function addition by ircmaxell · Pull Request #105 · php/php-src · GitHub
php / php-src Public · Fork 8k · Star 39.8k
Create hash_pbkdf2 function addition #105
Merged: php-pulls merged 8 commits into php:master from ircmaxell:hash_pbkdf2 on Jul 10, 2012
Conversation 10 · Commits 8 · Checks 0 · Files changed
Learn more about bidirectional Unicode characters Show hidden characters Copy link Contributor ircmaxell commented Jun 12, 2012 This pull request adds a new function to the hash package: hash_pbkdf2() , providing the ability to natively hash content using the PKCS5 approved PBKDF2 algorithm. See Wikipedia and RSA for more information about the algorithm. This patch refactors the internal implementation of hash_hmac() to allow code reuse between it and the new hash_pbkdf2() function. No internal APIs were changed, and the only API addition is the PHP function hash_pbkdf2() . A few static inline functions were either added, or extracted from the inside of hash_hmac and its implementation. These refactorings should have no public impact, since they are static to the extension. --> Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . --> All reactions Create hash_pbkdf2 function addition 6387498 nikic reviewed Jun 12, 2012 View reviewed changes NEWS Outdated Copy link Member nikic Jun 12, 2012 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . --> Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment The F looks strange :) Also I'm missing your name there :) --> Sorry, something went wrong. Uh oh! There was an error while loading. Please reload this page . --> All reactions Update NEWS to fix typo, add name 550253f nikic reviewed Jun 12, 2012 View reviewed changes ext/hash/hash.c Outdated Copy link Member nikic Jun 12, 2012 There was a problem hiding this comment. Choose a reason for hiding this comment The reason will be displayed to describe this comment to others. Learn more . --> Choose a reason Spam Abuse Off Topic Outdated Duplicate Resolved Hide comment Are the two unsigned char * cast necessary here? Look to me like they are already declared to be of that type. 
Contributor ircmaxell commented on Jun 12, 2012:

@scottmac As far as that's concerned, I'm not sure how that would look. The APIs of PBKDF2 and scrypt are very different. One takes a hash algorithm, a key (password), a salt, an iteration count, and a length. The other takes a key (password), a salt, and three integer parameters: N, p, r. So, short of adding a generic "options" array (which I wouldn't care for), why not instead add a second function, hash_scrypt()?

Member nikic reviewed (ext/hash/hash.c, outdated) on Jun 12, 2012:
This and the previous error message Have All Words Capitalized. PHP doesn't usually make use of that scheme :)

Commits, Jun 12, 2012:
- refactor away un-necessary casts in hashing routines (4918acc)
- Update error messages to be more inline with PHP standards (df3d351)

Contributor ircmaxell commented on Jun 12, 2012:

@nikic Thanks! I've updated all of the issues you've identified. The reason for the casts was that before the refactor they were needed, but I never changed the argument after I extracted the method. You are correct that the second memset is not needed; I've removed that as well. I'll push the changes once make test passes, to ensure I didn't bork anything. Thanks!
Commit: Remove un-needed memset, and replacing stray spaces (43eb8dc)

Member nikic reviewed (ext/hash/hash.c, outdated) on Jun 12, 2012:
nit: "potentiall" should probably be "potentially"

Commits, Jun 12, 2012:
- Fix tests to use proper casing (2f1cd2c)
- More cleanup of documentation and comments, as well as code formatting (03536e8)
- Merge remote branch 'upstream/master' into hash_pbkdf2 (731c6fd), bringing in 101 upstream commits, among them: Fixed Bug #62500 (Segfault in DateInterval class when extended); Fixed bug #62499 (curl_setopt($ch, CURLOPT_COOKIEFILE, "") returns false); Fixed bug #62507 (['REQUEST_TIME'] under mod_php5 returns milliseconds instead of seconds); changes to appease MSVC (doesn't like unary minus of unsigned ints); fixes for potential integer overflow in nl2br and bin2hex; and fixes for signed integer overflow (part of the bug #52550 fix).
php-pulls merged commit 731c6fd into php:master on Jul 10, 2012.

dunglas mentioned this pull request on Sep 17, 2023: Segmentation fault when building Symfony cache on Alpine (#12234, closed).

Reviewers: none. Assignees: none. Labels: none. Milestone: none. 4 participants.
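For readers who want to experiment with the algorithm this PR adds: PBKDF2 takes the same parameters everywhere (hash algorithm, password, salt, iteration count, output length), and Python's standard library exposes it as hashlib.pbkdf2_hmac, so the semantics of hash_pbkdf2() can be sketched without building PHP. The parameter values below are illustrative only:

```python
import hashlib

# Same parameter list as PHP's hash_pbkdf2(algo, password, salt, iterations, length):
derived = hashlib.pbkdf2_hmac(
    "sha256",     # underlying hash algorithm
    b"password",  # the key/password being stretched
    b"salt",      # per-user random salt
    1000,         # iteration count (the work factor)
    dklen=20,     # derived key length in bytes
)
print(derived.hex())  # 40 hex characters: 20 bytes
```

The iteration count is the knob that makes brute-force attacks expensive; production values should be far higher than the illustrative 1000 used here.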
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/world/
TIME for Kids | World | Topic | G5-6
World Technology Robot Rumble August 14, 2025 Robots compete in a boxing match at the World Robot Expo, in Beijing, China. The expo took place from August 8 to 12. It showcased the latest in robotics, inclu… Audio Technology Australia Logs Off December 5, 2024 Australia will bar kids under 16 from using social media, according to a law that passed the Australian Parliament on November 29. Under the law, social media platforms such as TikTok and Facebook will have to pay fines of up… Audio History Pictures of 2023 December 7, 2023 This year, we planned a moon mission.
Sports teams celebrated victories, and a king was crowned. In many ways, 2023 looked like years past. But look closer and you’ll see that familiar events can be extraordinary. NASA’s Artemis crew is… Audio Spanish Environment New Ocean Treaty March 9, 2023 On March 4, United Nations (U.N.) member countries agreed on a treaty that will protect marine life in the high seas. These are ocean waters outside of all national boundaries. The treaty will protect 30% of the world’s oceans–nearly half… Audio History A Message From the Past January 19, 2023 On January 17, archaeologists in Norway said they had found the world’s oldest rune stone. The flat rock is etched with runes. These are letters from an ancient Scandinavian language. They could have been carved 2,000 years ago. “This may… Audio Environment Good Growing January 19, 2023 Biologist Paula Costa is standing in a bare field of red dirt in the São Paulo State, in Brazil. Five hundred years ago, this land was part of a rainforest called the Mata Atlântica. Today, nearly all of the forest… Audio Spanish Environment Prizewinners December 9, 2022 The second annual Earthshot Prize Awards were given out on December 2, in Boston, Massachusetts. The Earthshot charity was founded by Prince William, of the United Kingdom (U.K.). It awards $1.2 million to each of five projects that address an… Audio History Pictures of 2022 December 9, 2022 What a year 2022 has been! It was a year of firsts: NASA’s Artemis rocket launched a new era in space travel. And the Webb telescope helped us see farther into the universe than ever before. But 2022 also had… Audio Spanish Arts Two in 1 October 20, 2022 A visitor at Frieze London, an art fair in England, poses with Giant Pumpkin No. 1 on October 13. The work is by British artist Anthea Hamilton. Audio History A Big Celebration June 8, 2022 This weekend, Britain’s Queen Elizabeth II celebrated her Platinum Jubilee. That means she has been queen for 70 years. 
She and her 4-year-old great-grandson, Prince Louis of Cambridge, watched a noisy flyby of jets. They formed a 70 in the… Audio
© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://llvm.org/doxygen/dir_1e2e17d5f5a91829a3b72d88a3911a9a.html
LLVM 22.0.0git: include/llvm/ABI Directory Reference
Directory dependency graph for ABI: (SVG graph not shown)
Files:
Types.h — This file defines the type system for the LLVMABI library, which mirrors ABI-relevant aspects of frontend types.
Generated for LLVM by Doxygen 1.14.0
2026-01-13T09:30:39
http://eigenhombre.com/tdd-rdd-and-ddd.html
TDD, RDD and DDD
John Jacobsen

This is the third post in a series about my current Clojure workflow. Having discussed Emacs setup for Clojure, I now present a sort of idealized workflow, in which I supplement traditional TDD with literate programming and REPL experimentation. First, some questions:

(a) How do you preserve the ability to make improvements without fear of breaking things?
(b) What process best facilitates careful thinking about design and implementation?
(c) How do you communicate your intentions to future maintainers (including future versions of yourself)?
(d) How do you experiment with potential approaches and solve low-level tactical problems as quickly as possible?
(e) Since “simplicity is a prerequisite for reliability” (Dijkstra), how do you arrive at simple designs and implementations?

The answer to (a) is, of course, by having good tests; and the best way I know of to maintain good tests is by writing test code along with, and slightly ahead of, the production code (test-driven development, a.k.a. TDD). However, my experience with TDD is that it doesn’t always help much with the other points on the list (though it helps a bit with (b), (c) and (e)). In particular, Rich Hickey points out that TDD is not a substitute for thinking about the problem at hand. As an aid for thinking, I find writing to be invaluable, so a minimal sort of literate programming has become a part of my workflow, at least for hard problems.

The Workflow

Now for the workflow proper. Given the following tools:

- Emacs + Cider REPL
- Unit tests running continuously
- Marginalia running continuously, via conttest

then my workflow, in its Platonic Form, is:

1. Is the path forward clear enough to write the next failing test? If not, go to step 2. If it is, go to step 3.
2. Think and write (see below) about the problem. Go to 1.
3. Write the next failing test. This test, when made to pass, should represent the smallest “natural” increase of functionality.
4. Is it clear how to make the test pass? If not, go to step 5. If it is, write the simplest “production” code which makes all tests pass. Go to 6.
5. Think, write, and conduct REPL experiments. Go to 4.
6. Is the code as clean, clear, and simple as possible? If so, go to step 7. If not, refactor, continuously making sure tests continue to pass with every change. Go to 6.
7. Review the Marginalia docs. Is the intent of the code clear? If so, go to step 1. If not, write some more. Go to 7.

“Writing” in each case above refers to updating comments and docstrings, as described in a subsequent post on Literate Programming. Here are the above steps as a flow chart: (flow-chart image not shown)

The workflow presented above is a somewhat idealized version of what I actually manage to pull off during any given coding session. It is essentially the red-green-refactor of traditional test-driven development, with the explicit addition of REPL experimentation (“REPL-driven development,” or RDD) and continuous writing of documentation (“documentation-driven development,” or DDD) as a way of steering the effort and clarifying the approach. The utility of the REPL needs no elaboration to Clojure enthusiasts, and I won’t belabor the point here. Furthermore, a lot has been written about test-first development and its advantages or drawbacks. At the moment, the practice seems to be particularly controversial in the Rails community. I don’t want to go too deep into the pros and cons of TDD, other than to say once again that the practice has saved my bacon so many times that I try to minimize the amount of code I write that doesn’t begin life as a response to a failing test. What I want to emphasize here is how writing and the use of the REPL complement TDD. These three ingredients cover all the bases (a)-(e), above. While I’ve been combining unit tests and the REPL for some time, the emphasis on writing is new to me, and I am excited about it.
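The "running continuously" tooling in the tool list above (conttest re-runs a shell command whenever files in the working tree change) boils down to a short polling loop. A rough Python sketch of that idea, using my own names rather than conttest's actual implementation:

```python
import os
import subprocess
import time

def snapshot(root="."):
    """Map each file under root to its last-modification time."""
    mtimes = {}
    for dirpath, _, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            try:
                mtimes[path] = os.stat(path).st_mtime
            except OSError:
                pass  # file vanished between walk and stat; skip it
    return mtimes

def watch(cmd, root=".", poll_seconds=0.5):
    """Re-run cmd whenever anything under root changes (a conttest-like loop)."""
    last = None
    while True:
        current = snapshot(root)
        if current != last:
            subprocess.call(cmd, shell=True)
            last = current
        time.sleep(poll_seconds)
```

Something like watch('lein test') then gives the continuously running unit tests of step 6, and (assuming the lein-marginalia plugin) watch('lein marg') regenerates the docs reviewed in step 7.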
Much more than coding by itself, I find that writing things down and building small narratives of code and prose together forces me to do the thinking I need in order to write the best code I can.

Always Beginning Again

While I don’t always follow each of the above steps to the letter, the harder the problem, the more closely I will tend to follow this plan, with one further modification: I am willing to wipe the slate clean and begin again if new understanding shows that the current path is unworkable, or leads to unneeded complexity. The next few posts attack specifics about testing and writing, presenting what I personally have found most effective (so far), and elaborating on helpful aspects of each.

about | all posts © 2016 John Jacobsen. Created with unmark. CSS by Tufte-CSS.
2026-01-13T09:30:39
http://eigenhombre.com/macro-writing-macros.html
Macro-writing Macros
John Jacobsen

... in which we explore the power of macros, and macro-writing macros, to DRY out repetitive code.

I’ve been writing Clojure full time for nearly two years now. I have a pretty good feel for the language, its virtues and its faults. Mostly, I appreciate its virtues (though I still wish the REPL started faster). For me, one of the language’s attractions has always been that it’s a Lisp — a “homoiconic” language, i.e., one defined in terms of its own data structures. Homoiconicity has one primary virtue, which is that it makes metaprogramming more powerful and straightforward than it is in non-homoiconic languages (arguably at some cost to readability). In Lisp, this metaprogramming is accomplished with macros, which are functions that transform your code during a separate stage of compilation. In other words, you write little programs to change your programs before they execute. In effect, you extend the compiler itself. I run a Clojure study group at work and find that it can be hard to explain the utility (or appeal) of this to newcomers to Lisp. This is partly because macros do things you can’t easily do in other languages, and because the things you want to do tend to relate to abstractions latent in a particular codebase. While playing around with 3D rendering in Quil, I recently came across a use case that reminded me of the following quote by Paul Graham:

    The shape of a program should reflect only the problem it needs to solve. Any other regularity in the code is a sign, to me at least, that I’m using abstractions that aren’t powerful enough — often that I’m generating by hand the expansions of some macro that I need to write.

    — Paul Graham, Revenge of the Nerds, http://www.paulgraham.com/icad.html
In Quil, there are multiple situations in which one needs to create a temporary context to carry out a series of operations, restoring the original state afterwards:

- Save current style with push-style; change style and draw stuff; restore previous style with pop-style.
- Start shape with begin-shape; draw vertices; end-shape to end.
- Save current position/rotation with push-matrix; translate/rotate and draw stuff; restore old position/rotation with pop-matrix.

Here’s an example:

    (push-matrix)
    (try
      (push-style)
      (try
        (fill 255)
        (no-stroke)
        (translate [10 10 10])
        (begin-shape)
        (try
          (vertex x1 y1 0)
          (vertex x2 y2 0)
          (vertex x2 y2 h)
          (vertex x1 y1 h)
          (vertex x1 y1 0)
          (finally
            (end-shape)))
        (finally
          (pop-style)))
      (finally
        (pop-matrix)))

The (try ... (finally ...)) constructions may not be strictly needed for a Quil drawing, but it’s a good habit to guarantee that stateful context changes are undone, even if problems occur. In a complex Quil drawing the idioms for saving style, translation state, and denoting shapes appear often enough that one hungers for a more compact way of representing each. Here’s one way to do it:

    (defmacro with-style [& body]
      `(do
         (push-style)
         (try
           ~@body
           (finally
             (pop-style)))))

    (defmacro with-matrix [& body]
      `(do
         (push-matrix)
         (try
           ~@body
           (finally
             (pop-matrix)))))

    (defmacro with-shape [& body]
      `(do
         (begin-shape)
         (try
           ~@body
           (finally
             (end-shape)))))

The original code then becomes more compact and easier to read:

    (with-matrix
      (with-style
        (fill 255)
        (no-stroke)
        (translate [10 10 10])
        (with-shape
          (vertex x1 y1 0)
          (vertex x2 y2 0)
          (vertex x2 y2 h)
          (vertex x1 y1 h)
          (vertex x1 y1 0))))

In this example code, the contexts with-matrix, etc. appear so often that the resulting savings in lines of code and mental overhead for the reader is substantial.
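For comparison, the Python construct this setup/teardown pattern evokes is the context manager, and even a defcontext-like factory has a direct analogue as a plain function that manufactures context managers. A sketch, where the log list stands in for Quil's mutable drawing state (an assumption for illustration only):

```python
from contextlib import contextmanager

log = []  # stands in for Quil's drawing state

def defcontext(setup, teardown):
    """Build a context manager from setup/teardown thunks, like defcontext."""
    @contextmanager
    def ctx():
        setup()
        try:
            yield  # the body of the `with` block runs here
        finally:
            teardown()  # guaranteed, even if the body throws
    return ctx

style = defcontext(lambda: log.append("push-style"),
                   lambda: log.append("pop-style"))

with style():
    log.append("draw")
# log is now ["push-style", "draw", "pop-style"]
```

No separate compilation stage is needed here because Python's `with` already provides the control abstraction; the Lisp version earns its keep by letting you mint a new binding form (with-style, with-shape, ...) rather than a value.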
However, the astute reader will realize that the macro definitions themselves are pretty repetitive — in fact, they look almost identical except for the setup and teardown details (this kind of “context manager” pattern is common enough that Python has its own language construct for it). I generally reach for macros when I have a pattern that occurs with obvious repetition that’s not easy to abstract out using just pure functions. Control abstractions such as loops or exception handling are common examples. (I find this situation occurs especially frequently when writing test code.) In any case, the solution for our repetitive macros could be something like:

    (defmacro defcontext
      [nom setup teardown]
      `(defmacro ~(symbol (str "with-" nom))
         [~'& body#]
         `(do
            ~'~setup
            (try
              ~@body#
              (finally
                ~'~teardown)))))

Yikes! I have to admit I had to write a lot of macros, and also refer to this helpful page for reference, before I could write (and grok) this macro. With defcontext in hand, our repetitive macro code just becomes:

    (defcontext style (push-style) (pop-style))
    (defcontext shape (begin-shape) (end-shape))
    (defcontext matrix (push-matrix) (pop-matrix))

These are exactly equivalent to the three context macros (with-*) defined above. With a little effort, it’s actually not too hard to construct such a nested macro. It’s largely a matter of writing out the code you want to generate, and then writing the code that generates it, testing with macroexpand-1 at the REPL as you go. This page by A.
Malloy has a lot of helpful remarks, including this cautionary note: “Think twice before trying to nest macros: it’s usually the wrong answer.” In this case, I actually think it’s the right answer, because the pattern of a context with setup and teardown is so common that I know I’ll reuse this macro for many other things — we have effectively added one of my favorite Python features to Clojure in just a few lines of code. (To be even more like Python’s context managers, defcontext would want to enable the user to bind some local state resulting from the setup phase of execution, the “with x() as y:” idiom; examples include file descriptors or database connections. This is left as an exercise for the reader.)

There’s a saying in the Clojure community: data > functions > macros. I’m a big believer in this. Clojure’s powerful built-in abstractions for wrangling data in all its forms make it the language I prefer above all others these days. But occasionally that means wrangling the data that is the code itself, thereby reaping the benefits in power, brevity and expressiveness.

Image generated by the Quil code used for this example; original code on GitHub is here.

about | all posts © 2016 John Jacobsen. Created with unmark. CSS by Tufte-CSS.
2026-01-13T09:30:39
https://llvm.org/doxygen/structNodeArrayNode-members.html
LLVM: Member List LLVM  22.0.0git NodeArrayNode Member List This is the complete list of members for NodeArrayNode , including all inherited members. Array NodeArrayNode ArrayCache Node protected Cache enum name Node dump () const Node FunctionCache Node protected getArrayCache () const Node inline getBaseName () const Node inline virtual getFunctionCache () const Node inline getKind () const Node inline getPrecedence () const Node inline getRHSComponentCache () const Node inline getSyntaxNode (OutputBuffer &) const Node inline virtual hasArray (OutputBuffer &OB) const Node inline hasArraySlow (OutputBuffer &) const Node inline virtual hasFunction (OutputBuffer &OB) const Node inline hasFunctionSlow (OutputBuffer &) const Node inline virtual hasRHSComponent (OutputBuffer &OB) const Node inline hasRHSComponentSlow (OutputBuffer &) const Node inline virtual Kind enum name Node match (Fn F) const NodeArrayNode inline Node (Kind K_, Prec Precedence_=Prec::Primary, Cache RHSComponentCache_=Cache::No, Cache ArrayCache_=Cache::No, Cache FunctionCache_=Cache::No) Node inline Node (Kind K_, Cache RHSComponentCache_, Cache ArrayCache_=Cache::No, Cache FunctionCache_=Cache::No) Node inline NodeArrayNode (NodeArray Array_) NodeArrayNode inline Prec enum name Node print (OutputBuffer &OB) const Node inline printAsOperand (OutputBuffer &OB, Prec P=Prec::Default, bool StrictlyWorse=false) const Node inline printInitListAsType (OutputBuffer &, const NodeArray &) const Node inline virtual printLeft (OutputBuffer &OB) const override NodeArrayNode inline virtual RHSComponentCache Node protected visit (Fn F) const Node ~Node ()=default Node virtual Generated on for LLVM by  1.14.0
2026-01-13T09:30:39
http://eigenhombre.com/lazy-physics.html
Lazy Physics
John Jacobsen

... in which we explore lazy sequences and common functional idioms in Clojure via the example of looking for (nearly) coincident clusters of times in a series.

A fundamental technical problem in experimental particle physics is how to distinguish the signatures of particles from instrumental noise. Imagine a tree full of hundreds of sparrows, each nesting on a branch, each chirping away occasionally. Suddenly, for a brief moment, they all start chirping vigorously (maybe a hawk flew past). A clustering of chirps in time is the signal that something has happened! The analogous situation occurs in instruments consisting of many similar detector elements, each generating some amount of random noise that, on its own, is indistinguishable from any evidence left by particles, but which, taken together, signals that, again, something has happened — a muon, an electron, a neutrino has left a sudden spume of electronic evidence in your instrument, waiting to be read out and distinguished from the endless noise. This process of separating the noise from the signal is known in physics as triggering and is typically done through some combination of spatial or time clustering; in many cases, time is the simplest to handle and the first “line of defense” against being overrun by too much data. (It is often impractical to consume all the data generated by all the elements — data reduction is the name of the game at most stages of these experiments.) This data is typically generated continuously ad infinitum, and must therefore be processed differently than, say, a single file on disk. Such infinite sequences of data are an excellent fit for the functional pattern known as laziness, in which, rather than chewing up all your RAM and/or hard disk space, data is consumed and transformed only as needed / as available.
This kind of processing is baked into Clojure at many levels and throughout its library of core functions, dozens of which can be combined (“composed”) to serve an endless variety of data transformations. (This style of data wrangling is also available in Python via generators and functional libraries such as Toolz .) Prompted by a recent question on the topic from a physicist and former colleague, I got to thinking about the classic problem of triggering, and realized that the time series trigger provides a nice showcase for Clojure’s core library and for processing lazy sequences. The rest of this post will describe a simple trigger, essentially what particle astrophysicists I know call a “simple majority trigger”; or a “simple multiplicity trigger” (depending on whom you talk to). Now for some Clojure code. (A small amount of familiarity with Clojure’s simple syntax is recommended for maximum understanding of what follows.) We will build up our understanding through a series of successively more complex code snippets. The exposition follows closely what one might do in the Clojure REPL, building up successively more complete examples. In each case, we use take to limit what would otherwise be infinite sequences of data (so that our examples can terminate without keeping us waiting forever...). First we create a sorted, infinite series of ever-increasing times (in, say, nsec): (def times (iterate #(+ % (rand-int 1000)) 0)) ;; Caution: infinite sequence... (take 30 times) ;;=> (0 955 1559 2063 2735 2858 3542 4067 4366 5246 5430 6168 7127 7932 8268 8929 9426 9918 10436 10850 11680 12367 12569 13343 14155 14420 15062 15171 15663 16355) times is an infinite (but “unrealized”) series, constructed by iterating the anonymous function #(+ % (rand-int 1000)) which adds a random integer from 0 to 999 to its argument (starting with zero). The fact that it is infinite does not prevent us from defining it or (gingerly) interrogating it via take . 
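As noted above, the same lazy, infinite construction is available in Python with generators. A minimal analogue of Clojure's iterate (my own helper, not from any library) reproduces the times sequence:

```python
import itertools
import random

def iterate(f, x):
    """Lazily yield x, f(x), f(f(x)), ... (Clojure's iterate as a generator)."""
    while True:
        yield x
        x = f(x)

# Infinite, increasing hit times with uniform random gaps in [0, 1000) ns:
rng = random.Random(0)
times = iterate(lambda t: t + rng.randrange(1000), 0)
print(list(itertools.islice(times, 5)))  # islice plays the role of take
```

As in the Clojure version, defining the infinite sequence costs nothing; only islice (take) forces any elements to be computed.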
To model a Poisson process — one in which any given event time is independent of the future or past times — one would normally choose an exponential rather than a uniformly flat distribution of time differences, but this is not important for our discussion, so, in the interest of simplicity, we’ll go with what we have here. Now, the way we’ll look for excesses is to look for groupings of hits (say, eight of them) whose first and last hit times are within 1 microsecond (1000 nsec) of each other. To start, there is a handy function called partition which groups a series in blocks of fixed length: (take 10 (partition 8 times)) ;;=> ((0 955 1559 2063 2735 2858 3542 4067) (4366 5246 5430 6168 7127 7932 8268 8929) (9426 9918 10436 10850 11680 12367 12569 13343) (14155 14420 15062 15171 15663 16355 16700 16947) (17919 17949 18575 18607 18849 19597 20410 20680) (20737 21289 21315 21323 21426 21637 22422 23000) (23477 24351 24426 25106 25861 26568 27511 28332) (29071 29831 29957 30761 31073 31914 32591 33187) (33878 34739 34842 35674 36444 36960 36983 37400) (37587 38012 38969 39131 39317 40135 40587 40759)) We’ll rewrite this using Clojure’s thread-last macro, which is a very helpful tool for rewriting nested expressions as a more readable pipeline of successive function applications: (->> times (partition 8) (take 10)) ;;=> ((0 955 1559 2063 2735 2858 3542 4067) (4366 5246 5430 6168 7127 7932 8268 8929) ...same as above...) However, this isn’t quite what we want, because it won’t find clusters of times close together who don’t happen to begin on our partition boundaries. 
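The remedy is overlapping (sliding) windows. In Python, a deque-based generator (names are mine, not from any library) produces them lazily, without ever realizing the whole sequence:

```python
from collections import deque
from itertools import islice

def sliding(iterable, n):
    """Yield overlapping length-n windows, step 1, like (partition n 1 coll)."""
    it = iter(iterable)
    window = deque(islice(it, n), maxlen=n)
    if len(window) == n:
        yield tuple(window)
    for x in it:
        window.append(x)  # the bounded deque drops the oldest element
        yield tuple(window)

print(list(sliding(range(6), 4)))  # three overlapping windows of four
```

Each successive window shares all but one element with its predecessor, which is exactly the behavior the trigger needs.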
To fix this, we use the optional step argument to partition : (->> times (partition 8 1) (take 10)) ;;=> ((0 955 1559 2063 2735 2858 3542 4067) (955 1559 2063 2735 2858 3542 4067 4366) (1559 2063 2735 2858 3542 4067 4366 5246) (2063 2735 2858 3542 4067 4366 5246 5430) (2735 2858 3542 4067 4366 5246 5430 6168) (2858 3542 4067 4366 5246 5430 6168 7127) (3542 4067 4366 5246 5430 6168 7127 7932) (4067 4366 5246 5430 6168 7127 7932 8268) (4366 5246 5430 6168 7127 7932 8268 8929) (5246 5430 6168 7127 7932 8268 8929 9426)) This is getting closer to what we want—if you look carefully, you’ll see that each row consists of the previous one shifted by one element. The next step is to grab (via map ) the first and last times of each group, using juxt to apply both first and last to each subsequence… (->> times (partition 8 1) (map (juxt last first)) (take 10)) ;;=> ([4067 0] [4366 955] [5246 1559] [5430 2063] [6168 2735] [7127 2858] [7932 3542] [8268 4067] [8929 4366] [9426 5246]) … and turn these into time differences: (->> times (partition 8 1) (map (comp (partial apply -) (juxt last first))) (take 10)) ;;=> (4067 3411 3687 3367 3433 4269 4390 4201 4563 4180) Note that so far these time differences are all > 1000. comp , above, turns a collection of multiple functions into a new function which is the composition of these functions, applied successively one after the other (right-to-left). partial turns a function of multiple arguments into a function of fewer arguments, by binding one or more of the arguments in a new function. For example, ((partial + 2) 3) ;;=> 5 ((comp (partial apply -) (juxt last first)) [3 10]) ;;=> 7 Recall that we only want events whose times are close to each other; say, whose duration is under a maximum limit of 1000 nsec. 
In general, to select only the elements of a sequence which satisfy a filter function, we use filter : (->> times (partition 8 1) (map (comp (partial apply -) (juxt last first))) (filter (partial > 1000)) (take 10)) ;;=> (960 942 827 763 597 682 997 836 986 966) ( (partial > 1000) is a function of one argument which returns true if that argument is strictly less than 1000.) Great! We now have total “durations”; for subsequences of 8 times, where the total durations are less than 1000 nsec. But this is not actually that helpful. It would be better if we could get both the total durations and the actual subsequences satisfying the requirement (the analog of this in a real physics experiment would be returning the actual hit data falling inside the trigger window). To do this, juxt once again comes to the rescue, by allowing us to juxt -apose the original data alongside the total duration to show both together… (->> times (partition 8 1) (map (juxt identity (comp (partial apply -) (juxt last first)))) (take 10)) ;;=> ([(0 309 410 562 979 1423 2180 3159) 3159] [(309 410 562 979 1423 2180 3159 3585) 3276] [(410 562 979 1423 2180 3159 3585 4325) 3915] [(562 979 1423 2180 3159 3585 4325 4573) 4011] [(979 1423 2180 3159 3585 4325 4573 5074) 4095] [(1423 2180 3159 3585 4325 4573 5074 5942) 4519] [(2180 3159 3585 4325 4573 5074 5942 6599) 4419] [(3159 3585 4325 4573 5074 5942 6599 7458) 4299] [(3585 4325 4573 5074 5942 6599 7458 8128) 4543] [(4325 4573 5074 5942 6599 7458 8128 8439) 4114]) ... 
and adapt our filter slightly to apply our filter only to the time rather than the original data: (->> times (partition 8 1) (map (juxt identity (comp (partial apply -) (juxt last first)))) (filter (comp (partial > 1000) second)) (take 3)) ;;=> ([(1577315 1577322 1577514 1577570 1577793 1577817 1577870 1578151) 836] [(3119967 3120203 3120416 3120469 3120471 3120620 3120715 3120937) 970] [(6752453 6752483 6752522 6752918 6752966 6753008 6753026 6753262) 809]) Finally, to turn this into a function for later use, use defn and remove take : (defn smt-8 [times] (->> times (partition 8 1) (map (juxt identity (comp (partial apply -) (juxt last first)))) (filter (comp (partial > 1000) second)))) smt-8 consumes one, potentially infinite sequence and outputs another, “smaller” (but also potentially infinite) lazy sequence of time-clusters-plus-durations, in the form shown above. Some contemplation will suggest many variants; for example, one in which some number of hits outside the trigger “window” are also included in the output. This is left as an exercise for the advanced reader. A “real” physics trigger would have to deal with many other details: each hit, in addition to its time, would likely have an amplitude, a sensor ID, and other data associated with it. Also, the data may not be perfectly sorted, some sensors may drop out of the data stream, etc. But in some sense this prototypical time clustering algorithm is one of the fundamental building blocks of experimental high energy physics and astrophysics and was used (in some variant) in every experiment I worked on over a 25+ year period. The representation above is certainly one of the most succinct, and shows off the power and elegance of the language, its core library, and lazy sequences. (It is also reasonably fast for such a simple algorithm; smt-8 consumes input times at a rate of about 250 kHz. 
This is not, however, fast enough for an instrument like IceCube, whose 5160 sensors each count at a rate of roughly 300 Hz, for a total rate of about 1.5 MHz. A future post may look at ways to get better performance.)

about | all posts

© 2016 John Jacobsen. Created with unmark. CSS by Tufte-CSS.
2026-01-13T09:30:39
http://eigenhombre.com/communicating-with-humans.html
Communicating With Humans

John Jacobsen

If nobody but me likes it, let it die. — Knuth

This is the fifth post in a series about my Clojure workflow.

When you encounter a new codebase, what best allows you to quickly understand it so that you can make effective changes to it?

I switched jobs about six months ago. There was intense information transfer both while leaving my old projects behind, and while getting up to speed with new ones. I printed out a lot of code and read it front-to-back, quickly at first, and then carefully. I found this a surprisingly effective way to review and learn, compared to my usual way of navigating code on disk and in an editor solely on an as-needed basis.

If this (admittedly old-school) way of understanding a program works well, how much better might it work if there were enough prose interspersed amongst the code to explain anything non-obvious, and if the order of the text were arranged in such a way as to aid understanding?

What is the target audience of computer programs, anyways? It is clearly the machines, which have to carry out our insanely specific instructions... but, equally clearly, it is also the humans who have to read, understand, maintain, fix, and extend those programs. It astonishes me now how little attention is paid to this basic fact.

In addition to communicating, we also have to think carefully about our work. While not every programming problem is so difficult as to merit a year’s worth of contemplation, any software system of significant size requires continual care, attention, and occasional hard thinking in order to keep complexity under control. The best way I know to think clearly about a problem is to write about it – the harder the problem, the more careful and comprehensive the required writing. Writing aids thinking, because it is slower than thought... because you can replay thoughts over and over, iterate upon and refine them.
Because writing is explaining, and because explaining something is the best way I know to learn and understand it.

Literate Programming (LP) was invented by Donald Knuth in the 1980s as a way to address some of these concerns. LP has hardcore enthusiasts scattered about, but apparently not much traction in the mainstream. As I have gotten more experience working with complex codebases, and more engaged with the craft of programming, I have become increasingly interested in LP as a way to write good programs. Knuth takes it further, considering the possibility that programs are, or could be, works of literature.

Knuth’s innovation was both in realizing these possibilities and in implementing the first system for LP, called WEB. WEB takes a document containing a mix of prose and code and both typesets it in a readable (beautiful, even) form for humans, and also orders and assembles the program for a compiler to consume. Descendants and variants of WEB can be found in use today. My favorite for Clojure is currently Marginalia, originally by Michael Fogus and currently maintained by Gary Deer.

Purists of LP will object that systems like Marginalia, which do not support reordering and reassembly of source code, are not “true” Literate Programming tools; and, in fact, there is a caveat on the Marginalia docs to that effect... but what Marginalia provides is good enough for me:

- Placement of comments and docstrings adjacent to the code in question;
- Beautiful formatting of same;
- Support for Markdown/HTML and attachment of JavaScript and/or CSS files; therefore, for images, mathematics (via MathJax) and graphing (see next blog post).

The result of these capabilities is a lightweight tool which lets me take an existing Clojure project and, with very little extra effort, generate a Web-based or printed/PDF artifact which I can sit down with, learn from, and enjoy contemplating.
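As a concrete sketch of how one might wire Marginalia into a Leiningen project: the plugin coordinates and version below are assumptions to check against Clojars, but the shape is typical:

```clojure
;; project.clj: add the lein-marginalia plugin (project name and
;; version numbers here are placeholders; check Clojars for the
;; current lein-marginalia release)
(defproject myproject "0.1.0-SNAPSHOT"
  :dependencies [[org.clojure/clojure "1.11.1"]]
  :plugins [[lein-marginalia "0.9.1"]])

;; Then, from the shell:
;;   lein marg
;; generates the annotated HTML output from your sources.
```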
Marginalia in Action: The Notebook Pattern

I often start writing by making simple statements or questions:

- I want to be able to do \(X\)....
- I don’t understand \(Y\)....
- If we had feature \(P\), then \(Q\) would be easy....
- How long would it take to compute \(Z\)?

Sentences like these are like snippets of code in the REPL: things to evaluate and experiment with. Often these statements are attached to bits of code — experimental expressions, and their evaluated results. They are the building blocks of further ideas, programs, and chains of thought.

In my next post, I’ll talk about using Marginalia to make small notebooks where I collect written thoughts, code, expressions, even graphs and plots while working on a problem. This workflow involves some Marginalia hacks you may not see elsewhere.

Meanwhile, here are some quotes about LP:

“Instead of writing code containing documentation, the literate programmer writes documentation containing code.... The effect of this simple shift of emphasis can be so profound as to change one’s whole approach to programming.” —Ross Williams, FunnelWeb Tutorial Manual, p. 4.

“Knuth’s insight is to focus on the program as a message from its author to its readers.” —Jon Bentley, “Programming Pearls,” Communications of the ACM, 1986.

“... Literate programming is certainly the most important thing that came out of the TeX project. Not only has it enabled me to write and maintain programs faster and more reliably than ever before, and been one of my greatest sources of joy since the 1980s—it has actually been indispensable at times. Some of my major programs, such as the MMIX meta-simulator, could not have been written with any other methodology that I’ve ever heard of. The complexity was simply too daunting for my limited brain to handle; without literate programming, the whole enterprise would have flopped miserably.” —Donald Knuth, interview, 2008.
2026-01-13T09:30:39
https://llvm.org/doxygen/DemangleConfig_8h.html#a28c8717f547a900fe4e642a5fb23a2d6
LLVM: include/llvm/Demangle/DemangleConfig.h File Reference (LLVM 22.0.0git)

#include "llvm/Config/llvm-config.h"
#include <cassert>

Macros

#define __has_feature(x)
#define __has_cpp_attribute(x)
#define __has_attribute(x)
#define __has_builtin(x)
#define DEMANGLE_GNUC_PREREQ(maj, min, patch)
#define DEMANGLE_ATTRIBUTE_USED
#define DEMANGLE_UNREACHABLE
#define DEMANGLE_ATTRIBUTE_NOINLINE
#define DEMANGLE_DUMP_METHOD   DEMANGLE_ATTRIBUTE_NOINLINE DEMANGLE_ATTRIBUTE_USED
#define DEMANGLE_FALLTHROUGH
#define DEMANGLE_ASSERT(__expr, __msg)
#define DEMANGLE_NAMESPACE_BEGIN   namespace llvm { namespace itanium_demangle {
#define DEMANGLE_NAMESPACE_END   } }
#define DEMANGLE_ABI

DEMANGLE_ABI is the export/visibility macro used to mark symbols declared in llvm/Demangle as exported when built as a shared library.

Macro Definition Documentation

__has_attribute(x): Value: 0. Definition at line 30 of file DemangleConfig.h.

__has_builtin(x): Value: 0. Definition at line 34 of file DemangleConfig.h.

__has_cpp_attribute(x): Value: 0. Definition at line 26 of file DemangleConfig.h.

__has_feature(x): Value: 0. Definition at line 22 of file DemangleConfig.h.

DEMANGLE_ABI: the export/visibility macro used to mark symbols declared in llvm/Demangle as exported when built as a shared library. Definition at line 115 of file DemangleConfig.h. Referenced by llvm::ms_demangle::Node::output(), parse_discriminator(), and llvm::ms_demangle::Demangler::~Demangler().

DEMANGLE_ASSERT(__expr, __msg): Value: assert((__expr) && (__msg)). Definition at line 94 of file DemangleConfig.h.
Referenced by OutputBuffer::back(), PODSmallVector<Node*, 8>::back(), ExplicitObjectParameter::ExplicitObjectParameter(), SpecialSubstitution::getBaseName(), AbstractManglingParser<Derived, Alloc>::OperatorInfo::getSymbol(), OutputBuffer::insert(), PODSmallVector<Node*, 8>::operator[](), AbstractManglingParser<Derived, Alloc>::parseTemplateParam(), AbstractManglingParser<Derived, Alloc>::parseUnresolvedName(), PODSmallVector<Node*, 8>::pop_back(), AbstractManglingParser<Derived, Alloc>::popTrailingNodeArray(), PODSmallVector<Node*, 8>::shrinkToSize(), Node::visit(), and AbstractManglingParser<Derived, Alloc>::ScopedTemplateParamList::~ScopedTemplateParamList().

DEMANGLE_ATTRIBUTE_NOINLINE: Definition at line 69 of file DemangleConfig.h.

DEMANGLE_ATTRIBUTE_USED: Definition at line 53 of file DemangleConfig.h.

DEMANGLE_DUMP_METHOD: Value: DEMANGLE_ATTRIBUTE_NOINLINE DEMANGLE_ATTRIBUTE_USED. Definition at line 73 of file DemangleConfig.h. Referenced by Node::dump().

DEMANGLE_FALLTHROUGH: Definition at line 85 of file DemangleConfig.h. Referenced by AbstractManglingParser<Derived, Alloc>::parseType().

DEMANGLE_GNUC_PREREQ(maj, min, patch): Value: 0. Definition at line 46 of file DemangleConfig.h.

DEMANGLE_NAMESPACE_BEGIN: Value: namespace llvm { namespace itanium_demangle {. Definition at line 97 of file DemangleConfig.h.

DEMANGLE_NAMESPACE_END: Value: } }. Definition at line 98 of file DemangleConfig.h.

DEMANGLE_UNREACHABLE: Definition at line 61 of file DemangleConfig.h. Referenced by demanglePointerCVQualifiers(), ExpandedSpecialSubstitution::getBaseName(), and AbstractManglingParser<Derived, Alloc>::parseExpr().

Generated for LLVM by Doxygen 1.14.0
2026-01-13T09:30:39
https://llvm.org/doxygen/structNestedName.html#a93d28c80cdb4cee6ab5f8324b8950127
LLVM: NestedName Struct Reference (LLVM 22.0.0git)

#include "llvm/Demangle/ItaniumDemangle.h"

NestedName inherits from Node.

Public Member Functions:
  NestedName(Node *Qual_, Node *Name_)
  template<typename Fn> void match(Fn F) const
  std::string_view getBaseName() const override
  void printLeft(OutputBuffer &OB) const override

Public Member Functions inherited from Node:
  Node(Kind K_, Prec Precedence_=Prec::Primary, Cache RHSComponentCache_=Cache::No, Cache ArrayCache_=Cache::No, Cache FunctionCache_=Cache::No)
  Node(Kind K_, Cache RHSComponentCache_, Cache ArrayCache_=Cache::No, Cache FunctionCache_=Cache::No)
  template<typename Fn> void visit(Fn F) const: Visit the most-derived object corresponding to this object.
  bool hasRHSComponent(OutputBuffer &OB) const
  bool hasArray(OutputBuffer &OB) const
  bool hasFunction(OutputBuffer &OB) const
  Kind getKind() const
  Prec getPrecedence() const
  Cache getRHSComponentCache() const
  Cache getArrayCache() const
  Cache getFunctionCache() const
  virtual bool hasRHSComponentSlow(OutputBuffer &) const
  virtual bool hasArraySlow(OutputBuffer &) const
  virtual bool hasFunctionSlow(OutputBuffer &) const
  virtual const Node *getSyntaxNode(OutputBuffer &) const
  void printAsOperand(OutputBuffer &OB, Prec P=Prec::Default, bool StrictlyWorse=false) const
  void print(OutputBuffer &OB) const
  virtual bool printInitListAsType(OutputBuffer &, const NodeArray &) const
  virtual ~Node()=default
  DEMANGLE_DUMP_METHOD void dump() const

Public Attributes:
  Node *Qual
  Node *Name

Public Types inherited from Node:
  enum Kind : uint8_t
  enum class Cache : uint8_t { Yes, No, Unknown }: Three-way bool to track a cached value.
  enum class Prec : uint8_t { Primary, Postfix, Unary, Cast, PtrMem, Multiplicative, Additive, Shift, Spaceship, Relational, Equality, And, Xor, Ior, AndIf, OrIf, Conditional, Assign, Comma, Default }: Operator precedence for expression nodes.

Protected Attributes inherited from Node:
  Cache RHSComponentCache : 2: Tracks if this node has a component on its right side, in which case we need to call printRight.
  Cache ArrayCache : 2: Tracks if this node is a (possibly qualified) array type.
  Cache FunctionCache : 2: Tracks if this node is a (possibly qualified) function type.

Detailed Description

Definition at line 1077 of file ItaniumDemangle.h.

NestedName(Node *Qual_, Node *Name_) [inline]: Definition at line 1081 of file ItaniumDemangle.h. References Name, Node::Node(), and Qual.

getBaseName() const [inline, override, virtual]: Reimplemented from Node. Definition at line 1086 of file ItaniumDemangle.h. References Name.

match(Fn F) const [inline]: Definition at line 1084 of file ItaniumDemangle.h. References F, Name, and Qual.

printLeft(OutputBuffer &OB) const [inline, override, virtual]: Implements Node. Definition at line 1088 of file ItaniumDemangle.h. References Name, Node::OutputBuffer, and Qual.

Member Data:
  Node *NestedName::Name: Definition at line 1079 of file ItaniumDemangle.h. Referenced by getBaseName(), match(), NestedName(), and printLeft().
  Node *NestedName::Qual: Definition at line 1078 of file ItaniumDemangle.h. Referenced by match(), NestedName(), and printLeft().

The documentation for this struct was generated from include/llvm/Demangle/ItaniumDemangle.h. Generated for LLVM by Doxygen 1.14.0
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/transportation/
TIME for Kids | Transportation | Topic | G5-6

Commuting in the Clouds (World, January 1, 2026): Paris, France, unveiled a three-mile cable car route on December 13. It’s the first cable car route in the region, and the longest urban route in Europe. Commuters in the suburbs of Paris can now float over rush-hour traffic. The…

Taking Flight (United States, October 9, 2025): The chaos of a busy airport. Loud noises on a plane. For some people—particularly those with intellectual and developmental disabilities—air travel can be overwhelming. The Arc is an organization that supports people with disabilities.
It has found a way…

Chemical Cleanup (Environment, March 3, 2023): Cleanup of toxic waste continues in East Palestine, Ohio, after a fiery freight-train derailment last month. Tons of solid and liquid waste have been moved by truck to treatment sites. On February 3, 38 freight cars went off the tracks.…

Gas-Powered Car Ban (Environment, September 2, 2022): California is phasing out gas-powered vehicles. On August 25, officials said all new cars sold in the state will have to be electric or hydrogen-powered by 2035. Transportation accounts for about half of California’s greenhouse-gas emissions, which contribute to climate…

Going Electric (World, May 16, 2022): New Zealand’s government has a new plan. It was announced on May 16. The goal is to reduce greenhouse-gas emissions. The government will pay New Zealanders to switch from gas-powered cars to hybrid or electric cars. This is part of…

Pain at the Pump (United States, March 18, 2022): Gas prices continue to rise. On March 13, the average cost of a gallon of regular gas in the United States was $4.43. That’s $1.54 higher than it was a year ago. “It’s very expensive, high for people who are…

An Electric Future (Technology, April 9, 2021): In January, one of the world’s major automakers, General Motors (GM), announced that it would stop selling gas-powered cars by 2035. The company says it will make more battery-powered vehicles. In March, Swedish automaker Volvo stepped up the timeline. It…

Vaccine Trackers (Health, January 15, 2021): As the delivery truck snaked its way over Northern California’s highways, analysts watched every aspect of its journey. They could see the stops the driver made. They knew the weather outside. Most important, they knew the condition of the precious…

Car Takes Flight (Technology, September 3, 2020): Flying cars are no longer a dream of the future. SkyDrive Inc., in Japan, announced on August 28 that a pilot had successfully flown one of its cars.
There are more than 100 flying-car projects in the world, according to…

Bike Boom (Business, June 18, 2020): It was March when much of the world began to lock down to stop the spread of the coronavirus. The pandemic forced gyms to close. It made people fearful of public transit. And it left families cooped up at home…

Contact us | Privacy policy | California privacy | Terms of Service | Subscribe
© 2026 TIME USA, LLC. All Rights Reserved. Powered by WordPress.com VIP
2026-01-13T09:30:39
https://llvm.org/doxygen/classPODSmallVector.html#a80e7febb9abfe2e7a22d2b1823f9c526
LLVM: PODSmallVector<T, N> Class Template Reference (LLVM 22.0.0git)

#include "llvm/Demangle/ItaniumDemangle.h"

template<class T, size_t N> class PODSmallVector<T, N>: Definition at line 41 of file ItaniumDemangle.h.

Public Member Functions:
  PODSmallVector()
  PODSmallVector(const PODSmallVector &)=delete
  PODSmallVector &operator=(const PODSmallVector &)=delete
  PODSmallVector(PODSmallVector &&Other)
  PODSmallVector &operator=(PODSmallVector &&Other)
  void push_back(const T &Elem)
  void pop_back()
  void shrinkToSize(size_t Index)
  T *begin()
  T *end()
  bool empty() const
  size_t size() const
  T &back()
  T &operator[](size_t Index)
  void clear()
  ~PODSmallVector()

Member documentation (all defined in ItaniumDemangle.h):
  PODSmallVector() [inline]: line 77.
  PODSmallVector(const PODSmallVector &) [delete].
  PODSmallVector(PODSmallVector &&Other) [inline]: line 82.
  ~PODSmallVector() [inline]: line 156.
  back() [inline]: line 146.
  begin() [inline]: line 141.
Referenced by PODSmallVector<Node*, 8>::operator[]().
  clear() [inline]: line 154.
  empty() const [inline]: line 144.
  end() [inline]: line 142.
  operator=(const PODSmallVector &) [delete].
  operator=(PODSmallVector &&Other) [inline]: line 96.
  operator[](size_t Index) [inline]: line 150.
  pop_back() [inline]: line 131.
  push_back(const T &Elem) [inline]: line 124. Referenced by AbstractManglingParser<Derived, Alloc>::parseTemplateParamDecl().
  shrinkToSize(size_t Index) [inline]: line 136.
  size() const [inline]: line 145. Referenced by PODSmallVector<Node*, 8>::operator[](), PODSmallVector<Node*, 8>::push_back(), and PODSmallVector<Node*, 8>::shrinkToSize().

The documentation for this class was generated from include/llvm/Demangle/ItaniumDemangle.h. Generated for LLVM by Doxygen 1.14.0
2026-01-13T09:30:39
https://www.timeforkids.com/k1/topics/places/
TIME for Kids | Places | Topic | K-1

World’s Greatest Places (World, December 19, 2024): Every year, TIME makes a list. It shows the world’s greatest places. TIME for Kids picks some favorites. Where would you go first? Sweet Tooth: Manam Chocolate is a chocolate factory. It is in India. Chocolate is made from cacao…

Around School (Community, September 6, 2024): Time for school! Students go there every weekday. It is a community they are part of. Here are four spaces at school. Library: The school library has books. Students can take them home. Or they can read them there.
The…

In Your Community (Community, August 30, 2024): A community is a place where people live, work, and play. There are homes. There are businesses and schools. Take a walk or a ride around your community. What do you see? Stores: There are places to shop in…

Where Do You Live? (Community, August 23, 2024): A community is a group of people. The group might live, work, or play together. A community can also be a place where people live. There are different types of places. Read about three. What is your community like? City…

United States National Parks (United States, January 19, 2024): National parks are natural areas that are protected by the government. Take a look at some of the national parks in the United States of America. Yosemite National Park: This park is in California. There are giant sequoia trees.…

World's Greatest Places (World, October 12, 2023): Want to explore the world? Each year, TIME magazine makes a list. It picks some of the World’s Greatest Places. TIME for Kids picks our favorites. Get ready for an adventure! Island Living: Dominica is an island. It is…

School of the Future (United States, August 24, 2023): It is back-to-school time! Ehrman Crest Elementary and Middle School opened last year. The building was designed for learning and fun. The building is shaped like a Y. The elementary school is on the left. The right side is…

Spaces at School (Community, August 24, 2023): Your classroom is where you spend the most time at school. Here are four other spaces at school. What are these spaces like at your school? Library: You can borrow books from the school library (above). The librarian might read…

Check the Library! (Community, March 23, 2022): Libraries let you borrow books. Many neighborhoods have their own library. Many schools have a library. Take a look at what is at the library. Rows of Books: Libraries have all kinds of books.
There are sections for fiction and…

Tiny Libraries (Community, March 23, 2022): Not every library is in a big building. Some libraries are very small. A Little Free Library can be the size of a birdhouse! Some little libraries are even built in the trunks of old trees. A Little Free Library lets…
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/young-game-changers/
TIME for Kids | Young Game Changers | Topic | G5-6

Girl Power (World, August 14, 2025): TIME and the LEGO Group joined up to celebrate the accomplishments of extraordinary girls who are making a big impact in their fields, from science and engineering to arts and athletics. Each barrier-breaking standout is paving the way for others…

Meet Our Kid Reporters (Community, August 13, 2025): We’ve got new faces joining the TIME for Kids crew! Read about the 2025–2026 team of TFK Kid Reporters. And keep an eye out for their articles in the magazine and online this school year.
Asha Curley, 11, Scottsdale, Arizona…

Are You Ready to Report? (Community, May 12, 2025): Do you have what it takes to write and report for TIME for Kids? Apply now for a chance to contribute to our magazines and website. TFK editors will choose up to 10 talented students to be TFK Kid Reporters…

Kid Reporters at Work (December 19, 2024): Our TFK Kid Reporter squad has been crushing it this school year. They’ve been up to some seriously cool stuff: reviewing hit movies, diving into award-winning books, chatting with government leaders, grilling science experts, and reporting on stories that readers…

The Kid Report: Kindness Counts (Community, September 26, 2024): Whenever I’m out and about, I always thank people for the job they’re doing. At a sporting event, at the mall, or while doing errands, I give workers a fist bump. And I make sure to use their name tags…

Incredible Kids (United States, August 15, 2024): TIME’s Kid of the Year honor recognizes young people who are making a positive impact. In addition to this year’s winner, Heman Bekele, five honorees were selected. This was done with the help of TIME and TIME for Kids editors,…

Meet Our Kid Reporters (Community, August 14, 2024): We’ve got some new faces joining the TIME for Kids crew! Read about the 2024–2025 team of TFK Kid Reporters. Look for their articles in the magazine and online this school year. Meyer Ballas, 12, Los Angeles, California. Meyer plays…

Kindness Catch-Up (Community, April 19, 2024): TIME’s 2021 Kid of the Year, Orion Jean, has continued his mission to spread kindness. TFK Kid Reporter Harper Carroll spoke with him about his latest projects. Thirteen-year-old Orion Jean is a kindness ambassador. The TIME 2021 Kid of the…

Want to Get the Scoop? (Community, March 13, 2024): It’s back! The TFK Kid Reporter Contest is now open. Think you have what it takes? Apply for a chance to report for our magazines and website.
TFK editors will choose up to 10 talented students to be TFK Kid… Audio United States 8 Questions for Sariel Sandoval & Claire Vlases January 16, 2024 In December, TFK Kid Reporter Ninis Twumasi presented the TIME Earth Award at an event in New York City. It was given to the 16 plaintiffs who sued Montana for violating their right to a clean environment. Before the event,… Audio © 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://llvm.org/doxygen/classPODSmallVector.html#a8e28eb74c8be25f5a32e9e79db5377e6
LLVM: PODSmallVector< T, N > Class Template Reference (LLVM 22.0.0git)

#include "llvm/Demangle/ItaniumDemangle.h"

Detailed Description
template<class T, size_t N> class PODSmallVector< T, N >
Definition at line 41 of file ItaniumDemangle.h.

Public Member Functions (inline; defined in ItaniumDemangle.h at the lines shown)
- PODSmallVector() (line 77)
- PODSmallVector(const PODSmallVector &) = delete
- PODSmallVector &operator=(const PODSmallVector &) = delete
- PODSmallVector(PODSmallVector &&Other) (line 82)
- PODSmallVector &operator=(PODSmallVector &&Other) (line 96)
- void push_back(const T &Elem) (line 124); referenced by AbstractManglingParser< Derived, Alloc >::parseTemplateParamDecl()
- void pop_back() (line 131)
- void shrinkToSize(size_t Index) (line 136)
- T *begin() (line 141); referenced by PODSmallVector< Node *, 8 >::operator[]()
- T *end() (line 142)
- bool empty() const (line 144)
- size_t size() const (line 145); referenced by PODSmallVector< Node *, 8 >::operator[](), PODSmallVector< Node *, 8 >::push_back(), and PODSmallVector< Node *, 8 >::shrinkToSize()
- T &back() (line 146)
- T &operator[](size_t Index) (line 150)
- void clear() (line 154)
- ~PODSmallVector() (line 156)

The documentation for this class was generated from include/llvm/Demangle/ItaniumDemangle.h.
Generated for LLVM by Doxygen 1.14.0.
2026-01-13T09:30:39
https://aws.amazon.com/blogs/networking-and-content-delivery/implementing-consistent-dns-query-logging-with-amazon-route-53-profiles/
Implementing consistent DNS Query Logging with Amazon Route 53 Profiles | Networking & Content Delivery

by Aanchal Agrawal and Anushree Shetty on 05 JAN 2026 in Amazon Route 53, AWS Transit Gateway, Intermediate (200), Networking & Content Delivery, Resource Access Manager (RAM), Security, Identity, & Compliance

Managing DNS query logging across multiple Amazon Virtual Private Clouds (VPCs) has long been a significant challenge for enterprise teams. The traditional approach required manually configuring DNS query logging for each VPC individually, creating a cascade of operational problems: inconsistent implementation across environments, compliance gaps from missed or misconfigured VPCs, and substantial operational burden from repetitive setup tasks. Teams often lacked comprehensive visibility into DNS activity across their entire AWS footprint, making troubleshooting difficult when issues spanned multiple VPCs.

We’re excited to announce a solution that addresses these pain points head-on. Amazon Route 53 Resolver Query Logging now integrates seamlessly with Amazon Route 53 Profiles, offering enterprise teams a centralized approach to DNS query management. You can use Route 53 Resolver Query Logging to log DNS queries that originate in your Amazon VPCs. With query logging enabled, you can observe which domain names have been queried, the AWS resources from which the queries originated, and the responses that were received. This intermediate-level post highlights the integration of Route 53 Profiles with Route 53 Resolver Query Logging. 
You can use Route 53 Profiles to simplify the management of DNS Query Logging: configure logging once at the Profile level, and it propagates automatically to all associated VPCs, removing manual per-VPC configuration while providing consistent logging policies across expanding AWS infrastructures. This centralization significantly reduces operational complexity and management overhead, streamlines compliance verification, and prevents configuration drift across large-scale VPC deployments. The integration uses AWS Resource Access Manager (AWS RAM) to facilitate secure sharing of these configurations across organizational boundaries, so that even the most complex multi-account architectures maintain comprehensive DNS visibility.

This technical guide is designed for administrators, cloud architects, and security professionals who manage multi-account AWS environments with complex DNS configurations. You’ll discover how to dramatically reduce management overhead while strengthening security visibility and governance across your infrastructure. To get the most from this post, we recommend foundational knowledge of key AWS networking services, including Amazon VPC, Amazon Route 53 Resolver, Amazon Route 53 Profiles, and AWS RAM, along with basic DNS principles.

What are Route 53 Profiles?

Route 53 Profiles enables consistent DNS management: you establish standardized DNS configurations, called Profiles, that encapsulate comprehensive DNS settings. These Profiles maintain uniformity across your DNS infrastructure by incorporating private hosted zones and their configurations, Route 53 Resolver rules (both forwarding and system rules), DNS Firewall rule groups, and interface VPC endpoints. The Profile also directly manages certain VPC-level DNS configurations, such as reverse DNS lookup configuration for Resolver rules, DNS Firewall failure mode, and DNSSEC validation. 
You can define DNS settings once and apply them consistently across multiple VPCs and AWS accounts, streamlining management, providing uniformity and consistency, and enhancing scalability as your AWS environment grows. This centralized approach streamlines DNS administration by automatically propagating updates to all associated VPCs. AWS RAM facilitates Profile sharing for cross-account management within the same AWS Region.

Route 53 Resolver Query Logging

Route 53 Resolver Query Logging logs the DNS queries processed by Route 53: queries that originate from your VPC resources (such as Amazon Elastic Compute Cloud (Amazon EC2) instances, containers, or AWS Lambda functions) and traffic processed by Route 53 Resolver endpoints. The logs capture information for queries that:

- Resolve local VPC DNS names
- Resolve to Route 53 private hosted zones
- Are forwarded to on-premises DNS servers through Route 53 Resolver endpoints
- Are resolved over the public internet

By default, all VPCs use the Route 53 Resolver to resolve DNS queries, and this feature captures a record of those requests and their responses. Each log entry includes the VPC ID, query timestamp, domain name requested (Query Name), type of DNS record sought (Query Type), DNS response code (such as NOERROR or NXDOMAIN), and the specific source IP and resource ID that initiated the query. When logs are enabled, they publish to a central destination for analysis and retention, such as Amazon Simple Storage Service (Amazon S3), Amazon CloudWatch Logs, or Amazon Kinesis Data Firehose, with the requirement that these destinations reside in the same Region as the query logging configuration.

Route 53 Resolver Query Logging delivers essential visibility into your network’s DNS activity. It functions as a critical security tool for detecting malicious activity such as malware communication or data exfiltration via anomalous DNS queries. 
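To make the log-entry fields above concrete, here is a minimal sketch that summarizes one query log entry. The entry is synthetic, and the JSON field names (`vpc_id`, `query_timestamp`, `query_name`, `query_type`, `rcode`, `srcaddr`, `srcids`) follow the Route 53 Resolver query log format as I understand it; compare against a real line from your CloudWatch Logs or S3 destination before relying on them.

```python
import json

# One synthetic log entry shaped like the fields the post lists (VPC ID,
# query timestamp, query name and type, response code, source IP and the
# source resource ID). All identifiers below are made up.
sample_entry = json.dumps({
    "version": "1.100000",
    "account_id": "111122223333",
    "region": "us-east-1",
    "vpc_id": "vpc-0123456789abcdef0",
    "query_timestamp": "2026-01-05T12:00:00Z",
    "query_name": "example.com.",
    "query_type": "A",
    "rcode": "NOERROR",
    "srcaddr": "10.0.0.11",
    "srcids": {"instance": "i-0abcd1234example"},
})

def summarize(entry_json: str) -> str:
    """Reduce one query log entry to the fields most useful for triage."""
    e = json.loads(entry_json)
    src = e.get("srcids", {}).get("instance", "?")
    return (f"{e['query_timestamp']} {e['vpc_id']} {src} "
            f"{e['query_type']} {e['query_name']} -> {e['rcode']}")

print(summarize(sample_entry))
```

A one-line summary like this is often enough to spot anomalous query patterns (for example, bursts of NXDOMAIN responses from a single instance) before reaching for Athena or CloudWatch Logs Insights.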
For compliance and audit purposes, it provides a detailed record of all name resolution activity. It also simplifies troubleshooting, giving you the visibility to quickly diagnose DNS failures by revealing the source, the domain requested, and the response received.

Challenges with consistent Route 53 Resolver Query Logging

Maintaining consistent DNS query logging with Route 53 Resolver involves creating query logging configurations in an AWS account and sharing those configurations with multiple accounts using AWS RAM. Each account can then associate its VPCs with the shared logging configuration, so that logs are collected in a centralized location such as CloudWatch Logs or an S3 bucket. However, this approach has challenges, including hard limits on the number of VPCs that can be associated per account and per AWS Region (typically 100), and the fact that only the owning account can modify or delete the shared configurations. If the shared logging configuration is deleted or unshared, then DNS query logging stops for all associated VPCs, which can complicate management. Furthermore, implementing a unified logging solution that consolidates logs across multiple VPCs and accounts introduces significant complexity and increases the potential for configuration errors. Similarly, designing separate centralized logging systems for different environments (such as development, testing, and production) necessitates careful architecture and maintenance to avoid reliability issues.

Integration with Route 53 Profiles

You can use this new feature, the Route 53 Resolver Query Logging integration with Route 53 Profiles, to implement DNS query logging across multiple VPCs through a single Profile configuration. This removes the need to configure logging separately for each VPC. 
Key benefits of this integration:

- Consistent configuration: Previously, DNS Query Logging required individual manual configuration for each Amazon Virtual Private Cloud (Amazon VPC), resulting in considerable administrative burden as environments expanded. Route 53 Profiles transforms this experience through centralized management: you configure Query Logging once at the Profile level, and the settings propagate automatically to all associated VPCs. This reduces operational complexity and provides consistent logging implementation across your growing AWS infrastructure.
- Operational efficiency: Network administrators define query logging configurations once and apply them consistently across their infrastructure, significantly reducing management overhead.
- Scale management: Enterprises managing large VPC fleets implement consistent logging policies through centralized Profiles rather than managing individual configurations.
- Simplified compliance: Security teams ensure all VPCs adhere to logging requirements by associating them with properly configured Profiles, making compliance verification clearer.
- Reduced configuration drift: Organizations can centralize logging configurations in Profiles to minimize the risk of inconsistent settings across their environment.

The integration works seamlessly with existing log destinations, supporting CloudWatch Logs, Amazon S3, and Amazon Kinesis Data Firehose. When a VPC is associated with a Profile containing query logging configurations, DNS queries from that VPC are automatically logged to the specified destinations.

Centralizing and associating Route 53 Resolver Query Logging across accounts

Prior to this launch, centralizing DNS query logs was a more cumbersome process to manage. In this section we examine the following two figures. 
Both figures share several common elements:

- An AWS Region encompassing all of the resources
- A Production account with a Production VPC
- A Development (Dev) account with a Dev VPC
- A Shared Services account with a Shared Services VPC
- A pre-configured AWS Transit Gateway in the Shared Services account, with attachments to the Shared Services VPC, Production VPC, and Dev VPC
- Route 53 Resolver Query Logging enabled in the Shared Services account
- AWS RAM for resource sharing

Associating Route 53 Resolver Query Logging across accounts without Route 53 Profiles

First we examine Figure 1, which shows how Route 53 Resolver Query Logging was shared across different AWS accounts.

Figure 1: Traditional approach: sharing Route 53 Resolver Query Logging with other accounts without using Route 53 Profiles

Based on Figure 1, these are the steps that were followed:

1. Enable Route 53 Resolver Query Logging in the Shared Services account.
2. Share the Query Logging configuration with the other two accounts (Production and Dev) through AWS RAM (Steps 2–4 in the figure).
3. Once shared, manually associate the query logging configuration with each VPC.

Associating Route 53 Resolver Query Logging across accounts with Route 53 Profiles

With Route 53 Profiles, as shown in Figure 2, the process is streamlined:

Figure 2: Sharing Amazon Route 53 Resolver Query Logging via an Amazon Route 53 Profile

Based on Figure 2, the steps are as follows:

1. Enable Route 53 Resolver Query Logging in the Shared Services account.
2. Create a Route 53 Profile in the Shared Services account.
3. Associate Route 53 Resolver Query Logging with the Route 53 Profile.
4. Share the Route 53 Profile with the Production and Dev accounts through AWS RAM.
5. Associate the Production and Dev VPCs with the Profile. 
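The Profile-based steps above can be sketched as SDK calls. This is a sketch only: the operation names and parameters (`create_resolver_query_log_config`, `create_profile`, `associate_resource_to_profile`, `create_resource_share`, `associate_profile`) are my reading of the route53resolver, route53profiles, and ram boto3 APIs, so verify them against the SDK documentation, and note that real boto3 responses nest these fields under keys such as `ResolverQueryLogConfig`; the clients are injected so the flow can be exercised without AWS.

```python
def enable_profile_logging(resolver, profiles, ram, destination_arn, vpc_ids):
    # Step 1: create the query logging configuration in the owning account.
    log_cfg = resolver.create_resolver_query_log_config(
        Name="org-dns-logs", DestinationArn=destination_arn)
    # Step 2: create the Profile.
    profile = profiles.create_profile(Name="shared-dns-profile")
    # Step 3: attach the logging configuration to the Profile.
    profiles.associate_resource_to_profile(
        ProfileId=profile["Id"], ResourceArn=log_cfg["Arn"], Name="query-logging")
    # Step 4: share the Profile with consumer accounts through AWS RAM.
    ram.create_resource_share(
        name="dns-profile-share", resourceArns=[profile["Arn"]])
    # Step 5: each consumer account associates its VPCs with the Profile.
    for vpc_id in vpc_ids:
        profiles.associate_profile(
            ProfileId=profile["Id"], ResourceId=vpc_id, Name=f"assoc-{vpc_id}")
    return profile

class _Stub:
    """Records calls and returns a canned response, standing in for boto3 clients."""
    def __init__(self):
        self.calls = []
    def __getattr__(self, op):
        def record(**kwargs):
            self.calls.append((op, kwargs))
            return {"Id": "rp-example", "Arn": f"arn:example:{op}"}
        return record

resolver, profiles, ram = _Stub(), _Stub(), _Stub()
enable_profile_logging(resolver, profiles, ram,
                       "arn:aws:s3:::example-dns-logs", ["vpc-prod", "vpc-dev"])
print([op for op, _ in profiles.calls])
```

The key design point the sketch captures is that only step 5 repeats per VPC; steps 1 through 4 are performed once in the Shared Services account, which is exactly what removes the per-VPC configuration burden.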
The VPCs automatically gain access to Route 53 Resolver Query Logging through their association with the Route 53 Profile (you can find the steps to associate resources in the Route 53 Profiles association documentation). Before this feature launched, enabling Query Logging required manual configuration for each Amazon VPC individually, which created significant operational overhead as infrastructure grew. Route 53 Profiles streamlines this process by attaching Query Logging to a Profile; the logging configuration is then automatically applied to all VPCs associated with that Profile, streamlining management at scale.

Dual association scenario

If a VPC has both a direct Route 53 Resolver Query Logging association and a Route 53 Profile-based association, then logs are generated and stored in two separate locations, which may result in duplicate logging. To prevent redundant logging entries, implement a staged transition when adopting Profile-based query logging: first, create and associate your new logging configuration with the appropriate Profiles; then validate that it functions properly; and finally remove any pre-existing query logging configurations by stopping the logging from the VPCs and deleting the configurations that are directly associated with individual VPCs.

- Direct VPC association logs maintain the existing format: (vpc-id_instance-id)
- Profile-based association logs use the new format: (profile-id_vpc-id_instance-id)

Key considerations for centralizing Route 53 Resolver Query Logging with Route 53 Profiles

- Sharing resources with Route 53 Profiles works only within the same Region.
- The account with which the resources have been shared can’t modify or delete the configuration.
- If the configuration is deleted or unshared, then consolidated logging stops for all of the associated VPCs. 
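The two source-ID formats above distinguish how a VPC got its logging: direct associations emit "vpc-id_instance-id" while Profile-based associations emit "profile-id_vpc-id_instance-id". A minimal classifier for auditing a staged transition might look as follows; it assumes the individual IDs contain no underscores, and the example identifiers are made up.

```python
def association_kind(srcid: str) -> str:
    """Classify a log source ID by the number of underscore-joined parts."""
    parts = srcid.split("_")
    if len(parts) == 3:
        return "profile"   # profile-id_vpc-id_instance-id
    if len(parts) == 2:
        return "direct"    # vpc-id_instance-id
    return "unknown"

print(association_kind("vpc-0aa_i-0bb"))         # direct association
print(association_kind("rp-1cc_vpc-0aa_i-0bb"))  # Profile-based association
```

Running a check like this over a sample of log lines makes it easy to confirm that direct associations have stopped emitting before you delete them.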
- Cross-account resource sharing through AWS RAM requires that both the resource owner and the consuming account have appropriate AWS Identity and Access Management (IAM) permissions to create and manage the resource share. Without these permissions, access is restricted and resources cannot be shared or managed effectively. You can read more about the permissions in the AWS RAM documentation.
- Consolidated logging enhances data governance by enabling consistent access controls and minimizing human access, with automated systems handling read operations. Implement monitoring to alert on any write or admin access to the log storage.
- Route 53 Profiles and Route 53 Query Logging support both IPv4 and IPv6, providing full compatibility with modern network environments. Organizations can use this dual-protocol support to manage and monitor DNS queries across both address formats, gaining enhanced visibility and control over network traffic regardless of the IP version in use.

Availability and pricing

Route 53 Profiles is available in all AWS Regions except Asia Pacific (New Zealand) and Asia Pacific (Taipei). For Route 53 Resolver Query Logging, the primary charges aren’t for the logging feature itself but for the downstream services used for log storage and analysis. Check CloudWatch Pricing, Amazon S3 Pricing, Amazon Data Firehose Pricing, and Amazon Athena Pricing for individual pricing. Apart from these costs, Route 53 Profiles charges also apply. AWS designed the pricing model for scalability and value, featuring a transparent, hourly, pay-as-you-go structure based on your Profile-VPC associations.

Conclusion

Integrating DNS query logging with Amazon Route 53 Profiles offers five key advantages. 
Route 53 Profiles transforms Query Logging by replacing manual per-VPC configurations with a centralized management approach in which settings automatically propagate to all associated VPCs. This integration significantly reduces operational overhead for network teams, who can now define consistent logging policies once and apply them across their entire infrastructure regardless of scale. The solution also enables cross-account sharing of DNS configurations through AWS RAM, facilitating multi-account governance while streamlining compliance verification. Furthermore, organizations can remove the need for multiple manual configurations, minimizing configuration-drift risk and maintaining uniform settings across their growing AWS environment.

This blog post showed how to set up DNS query logging using Route 53 Profiles and offered guidance for organizations with traditional architectures. We examined the difficulties associated with conventional solutions and walked through the implementation process and recommended practices for incorporating DNS query logging with Route 53 Profiles.

For additional information, check out these resources:

- Route 53 Profiles
- Amazon Route 53 Resolver Query Logging
- AWS Resource Access Manager

About the authors

Aanchal Agrawal
Aanchal holds the position of Senior Technical Account Manager at AWS, where she specializes in Networking and Edge Security. Throughout her time at AWS, she has concentrated on aiding customers in effective cloud adoption. Leveraging her expertise in networking and edge security, she assists clients in constructing efficient and optimized cloud architectures.

Anushree Shetty
Anushree works as a Senior Technical Account Manager at AWS. She specializes in Perimeter Protection and Edge services. She guides organizations through seamless AWS Edge migrations, crafting tailored cloud solutions that address specific business requirements and security needs. 
She consistently helps customers maximize the benefits of their cloud adoption, enhancing both their security posture and operational efficiency.
2026-01-13T09:30:39
https://llvm.org/doxygen/DemangleConfig_8h.html#af989845e24678c452b9222afdac95e7f
LLVM: include/llvm/Demangle/DemangleConfig.h File Reference
LLVM 22.0.0git — include/llvm/Demangle

DemangleConfig.h File Reference
#include "llvm/Config/llvm-config.h"
#include <cassert>
Go to the source code of this file.

Macros
#define __has_feature(x)
#define __has_cpp_attribute(x)
#define __has_attribute(x)
#define __has_builtin(x)
#define DEMANGLE_GNUC_PREREQ(maj, min, patch)
#define DEMANGLE_ATTRIBUTE_USED
#define DEMANGLE_UNREACHABLE
#define DEMANGLE_ATTRIBUTE_NOINLINE
#define DEMANGLE_DUMP_METHOD   DEMANGLE_ATTRIBUTE_NOINLINE DEMANGLE_ATTRIBUTE_USED
#define DEMANGLE_FALLTHROUGH
#define DEMANGLE_ASSERT(__expr, __msg)
#define DEMANGLE_NAMESPACE_BEGIN   namespace llvm { namespace itanium_demangle {
#define DEMANGLE_NAMESPACE_END   } }
#define DEMANGLE_ABI — the export/visibility macro used to mark symbols declared in llvm/Demangle as exported when built as a shared library.

Macro Definition Documentation

◆ __has_attribute
#define __has_attribute(x)   Value: 0
Definition at line 30 of file DemangleConfig.h.

◆ __has_builtin
#define __has_builtin(x)   Value: 0
Definition at line 34 of file DemangleConfig.h.

◆ __has_cpp_attribute
#define __has_cpp_attribute(x)   Value: 0
Definition at line 26 of file DemangleConfig.h.

◆ __has_feature
#define __has_feature(x)   Value: 0
Definition at line 22 of file DemangleConfig.h.

◆ DEMANGLE_ABI
#define DEMANGLE_ABI
DEMANGLE_ABI is the export/visibility macro used to mark symbols declared in llvm/Demangle as exported when built as a shared library.
Definition at line 115 of file DemangleConfig.h.
Referenced by llvm::ms_demangle::Node::output(), parse_discriminator(), and llvm::ms_demangle::Demangler::~Demangler().
◆ DEMANGLE_ASSERT
#define DEMANGLE_ASSERT(__expr, __msg)   Value: assert((__expr) && (__msg))
Definition at line 94 of file DemangleConfig.h.
Referenced by OutputBuffer::back(), PODSmallVector<Node *, 8>::back(), ExplicitObjectParameter::ExplicitObjectParameter(), SpecialSubstitution::getBaseName(), AbstractManglingParser<Derived, Alloc>::OperatorInfo::getSymbol(), OutputBuffer::insert(), PODSmallVector<Node *, 8>::operator[](), AbstractManglingParser<Derived, Alloc>::parseTemplateParam(), AbstractManglingParser<Derived, Alloc>::parseUnresolvedName(), PODSmallVector<Node *, 8>::pop_back(), AbstractManglingParser<Derived, Alloc>::popTrailingNodeArray(), PODSmallVector<Node *, 8>::shrinkToSize(), Node::visit(), and AbstractManglingParser<Derived, Alloc>::ScopedTemplateParamList::~ScopedTemplateParamList().

◆ DEMANGLE_ATTRIBUTE_NOINLINE
#define DEMANGLE_ATTRIBUTE_NOINLINE
Definition at line 69 of file DemangleConfig.h.

◆ DEMANGLE_ATTRIBUTE_USED
#define DEMANGLE_ATTRIBUTE_USED
Definition at line 53 of file DemangleConfig.h.

◆ DEMANGLE_DUMP_METHOD
#define DEMANGLE_DUMP_METHOD   DEMANGLE_ATTRIBUTE_NOINLINE DEMANGLE_ATTRIBUTE_USED
Definition at line 73 of file DemangleConfig.h.
Referenced by Node::dump().

◆ DEMANGLE_FALLTHROUGH
#define DEMANGLE_FALLTHROUGH
Definition at line 85 of file DemangleConfig.h.
Referenced by AbstractManglingParser<Derived, Alloc>::parseType().

◆ DEMANGLE_GNUC_PREREQ
#define DEMANGLE_GNUC_PREREQ(maj, min, patch)   Value: 0
Definition at line 46 of file DemangleConfig.h.
◆ DEMANGLE_NAMESPACE_BEGIN
#define DEMANGLE_NAMESPACE_BEGIN   namespace llvm { namespace itanium_demangle {
Definition at line 97 of file DemangleConfig.h.

◆ DEMANGLE_NAMESPACE_END
#define DEMANGLE_NAMESPACE_END   } }
Definition at line 98 of file DemangleConfig.h.

◆ DEMANGLE_UNREACHABLE
#define DEMANGLE_UNREACHABLE
Definition at line 61 of file DemangleConfig.h.
Referenced by demanglePointerCVQualifiers(), ExpandedSpecialSubstitution::getBaseName(), and AbstractManglingParser<Derived, Alloc>::parseExpr().
https://www.timeforkids.com/k1/topics/debate/
TIME for Kids | Debate | Topic | K-1

Debate — Opinion
Should Summer Reading Be Required? (May 21, 2021)
Reading is fun! But should it be required? Many kids are assigned books to read over the summer. Some people say summer reading keeps kids’ minds sharp. Others say kids need a break. TIME for Kids readers weigh in. Yes…

© 2026 TIME USA, LLC. All Rights Reserved.
https://www.timeforkids.com/k1/topics/business/
TIME for Kids | Business | Topic | K-1

Business — United States
Down to Business (December 20, 2023)
How do you feel about cookies? They make people happy. That is why we buy them. Girl Scouts are pros at selling cookies. See how they run a successful cookie business. They spread the word. These Girl Scouts are…

Business
Spend or Save? (December 1, 2021)
You can decide what to do with your money. Will you spend it? Or will you save it for later? The choice is yours. Read on to learn more. You can spend money. Use money to buy what you…
https://nbviewer.jupyter.org/github/gestaltrevision/python_for_visres/blob/master/Part3/Part3_Scientific_Python.ipynb#Why-we-need-Numpy--
Scientific Python: Transitioning from MATLAB to Python

Part of the introductory series to using Python for Vision Research brought to you by the GestaltReVision group (KU Leuven, Belgium).

This notebook is meant as an introduction to Python's essential scientific packages: Numpy, PIL, Matplotlib, and SciPy. There is more Python learning material available on our lab's wiki.

Author: Maarten Demeyer
Year: 2014
Copyright: Public Domain as in CC0

Contents
- A Quick Recap: Data types, Lists, Functions, Objects
- Numpy: Why we need Numpy; The ndarray data type; shape and dtype; Indexing and slicing; Filling and manipulating arrays; A few useful functions; A small exercise; A bit harder: The Gabor; Boolean indexing; Vectorizing a simulation
- PIL: the Python Imaging Library: Loading and showing images; Resizing, rotating, cropping and converting; Advanced; Saving; Exercise
- Matplotlib: Quick plots; Saving to a file; Visualizing arrays; Multi-panel figures; Exercise: Function plots; Finer figure control; Exercise: Add regression lines
- Scipy: Statistics; Fast Fourier Transform

A Quick Recap

Data types

Depending on what kind of values you want to store, Python variables can be of different data types. For instance:

In [ ]:
my_int = 5
print my_int, type(my_int)
my_float = 5.0
print my_float, type(my_float)
my_boolean = False
print my_boolean, type(my_boolean)
my_string = 'hello'
print my_string, type(my_string)

Lists

One useful data type is the list, which stores an ordered, mutable sequence of any data type, even mixed:

In [ ]:
my_list = [my_int, my_float, my_boolean, my_string]
print type(my_list)
for element in my_list:
    print type(element)

To retrieve or change specific elements in a list, indices and slicing can be used. Indexing starts at zero. Slices do not include the last element.
In [ ]:
print my_list[1]
my_list[1] = 3.0
my_sublist = my_list[1:3]
print my_sublist
print type(my_sublist)

Functions

Re-usable pieces of code can be put into functions. Many pre-defined functions are available in Python packages. Functions can have both required and optional input arguments. When the function has no output argument, it returns None.

In [ ]:
# Function with a required and an optional argument
def regress(x, c=0, b=1):
    return (x * b) + c

print regress(5)         # Only required argument
print regress(5, 10, 3)  # Use argument order
print regress(5, b=3)    # Specify the name to skip an optional argument

In [ ]:
# Function without return argument
def divisible(a, b):
    if a % b:
        print str(a) + " is not divisible by " + str(b)
    else:
        print str(a) + " is divisible by " + str(b)

divisible(9, 3)
res = divisible(9, 2)
print res

In [ ]:
# Function with multiple return arguments
def add_diff(a, b):
    return a + b, a - b

# Assigned as a tuple
res = add_diff(5, 3)
print res

# Directly unpacked to two variables
a, d = add_diff(5, 3)
print a
print d

Objects

Every variable in Python is actually an object. Objects bundle member variables with tightly connected member functions that (typically) use these member variables. Lists are a good example of this.

In [ ]:
my_list = [1, False, 'boo']
my_list.append('extra element')
my_list.remove(False)
print my_list

The member variables in this case just contain the information on the elements in the list. They are 'hidden' and not intended to be used directly - you manipulate the list through its member functions. The functions above are in-place methods, changing the original list directly, and returning None. This is not always the case. Some member functions, for instance in strings, do not modify the original object, but return a second, modified object instead.

In [ ]:
return_arg = my_list.
append('another one')
print return_arg
print my_list

In [ ]:
my_string = 'kumbaya, milord'
return_arg = my_string.replace('lord', 'lard')
print return_arg
print my_string

Do you remember why list functions are in-place, while string functions are not?

Numpy

Why we need Numpy

While lists are great, they are not very suitable for scientific computing. Consider this example:

In [ ]:
subj_length = [180.0, 165.0, 190.0, 172.0, 156.0]
subj_weight = [75.0, 60.0, 83.0, 85.0, 62.0]
subj_bmi = []
# EXERCISE 1: Try to compute the BMI of each subject, as well as the average BMI across subjects
# BMI = weight/(length/100)**2

Clearly, this is clumsy. MATLAB users would expect something like this to work:

In [ ]:
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = mean(subj_bmi)

But it doesn't. / and ** are not defined for lists; nor does the mean() function exist. + and * are defined, but they mean something else. Do you remember what they do?

The ndarray data type

Enter Numpy, and its ndarray data type, allowing these elementwise computations on ordered sequences, and implementing a host of mathematical functions operating on them. Lists are converted to Numpy arrays through calling the np.array() constructor function, which takes a list and creates a new array object filled with the list's values.

In [ ]:
import numpy as np

# Create a numpy array from a list
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])
print type(subj_length), type(subj_weight)

# EXERCISE 2: Try to complete the program now!
# Hint: np.mean() computes the mean of a numpy array
# Note that unlike MATLAB, Python does not need the '.' before elementwise operators

Numpy is a very large package that we can't possibly cover completely. But we will cover enough to get you started.
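One possible solution to the BMI exercises above, shown here as a sketch in Python 3 syntax (the notebook's cells use Python 2 print statements); the variable names follow the cells above:

```python
import numpy as np

# Same example data as in the cells above.
subj_length = np.array([180.0, 165.0, 190.0, 172.0, 156.0])
subj_weight = np.array([75.0, 60.0, 83.0, 85.0, 62.0])

# Elementwise arithmetic works directly on ndarrays,
# so the MATLAB-style expression now runs as-is.
subj_bmi = subj_weight / (subj_length / 100) ** 2
mean_bmi = np.mean(subj_bmi)

print(subj_bmi)
print(mean_bmi)
```

Note that no loop is needed: the division and exponentiation are applied to every subject at once.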
shape and dtype

The most basic characteristics of a Numpy array are its shape and the data type of its elements, or dtype. For those of you who have worked in MATLAB before, this should be familiar.

In [ ]:
# Multi-dimensional lists are just nested lists
# This is clumsy to work with
my_nested_list = [[1, 2, 3], [4, 5, 6]]
print my_nested_list
print len(my_nested_list)
print my_nested_list[0]
print len(my_nested_list[0])

In [ ]:
# Numpy arrays handle multidimensionality better
arr = np.array(my_nested_list)
print arr        # nicer printing
print arr.shape  # direct access to all dimension sizes
print arr.size   # direct access to the total number of elements
print arr.ndim   # direct access to the number of dimensions

The member variables shape and size contain the dimension lengths and the total number of elements, respectively, while ndim contains the number of dimensions. The shape is represented by a tuple, where the last dimension is the inner dimension representing the columns of a 2-D matrix. The first dimension is the top-level, outer dimension and represents the rows here. We could also make 3-D (or even higher-level) arrays:

In [ ]:
arr3d = np.array([[[1, 2, 3], [4, 5, 6]], [[7, 8, 9], [10, 11, 12]]])
print arr3d
print arr3d.shape
print arr3d.size
print arr3d.ndim

Now the last or inner dimension becomes the layer dimension. The inner lists of the constructor represent the values at that (row, column) coordinate of the various layers. Rows and columns remain the first two dimensions. Note how what we have here now is three layers of two-by-two matrices, not two layers of two-by-three matrices. In other words, the shape tuple lists dimensions from the outermost (first index) to the innermost (last index).

The second basic property of an array is its dtype. Contrary to list elements, numpy array elements are (typically) all of the same type.

In [ ]:
# The type of a numpy array is always... numpy.ndarray
arr = np.
array([[1, 2, 3], [4, 5, 6]])
print type(arr)

# So, let's do a computation
print arr / 2
# Apparently we're doing our computations on integer elements!
# How do we find out?
print arr.dtype

In [ ]:
# And how do we fix this?
arr = arr.astype('float')  # Note: this is not an in-place function!
print arr.dtype
print arr / 2

In [ ]:
# Alternatively, we could have defined our dtype better from the start
arr = np.array([[1, 2, 3], [4, 5, 6]], dtype='float')
print arr.dtype
arr = np.array([[1., 2., 3.], [4., 5., 6.]])
print arr.dtype

To summarize, any numpy array is of the data type numpy.ndarray, but the data type of its elements can be set separately as its dtype member variable. It's a good idea to explicitly define the dtype when you create the array.

Indexing and slicing

The same indexing and slicing operations used on lists can also be used on Numpy arrays. It is possible to perform computations on slices directly. But pay attention - Numpy arrays must have an identical shape if you want to combine them. There are some exceptions though, the most common being scalar operands.

In [ ]:
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Indexing and slicing
print arr[0, 0]  # or: arr[0][0]
print arr[:-1, 0]

In [ ]:
# Elementwise computations on slices
# Remember, the LAST dimension is the INNER dimension
print arr[:, 0] * arr[:, 1]
print arr[0, :] * arr[1, :]
# Note that you could never slice across rows like this in a nested list!

In [ ]:
# This doesn't work
# print arr[1:, 0] * arr[:, 1]
# And here's why:
print arr[1:, 0].shape, arr[:, 1].shape

In [ ]:
# This however does work. You can always use scalars as the other operand.
print arr[:, 0] * arr[2, 2]
# Or, similarly:
print arr[:, 0] * 9.

As an exercise, can you create a 2x3 array containing the column-wise and the row-wise means of the original matrix, respectively? Without using a for-loop.
In [ ]:
# EXERCISE 3: Create a 2x3 array containing the column-wise and the row-wise means of the original matrix
# Do not use a for-loop, and also do not use the np.mean() function for now.
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

This works, but it is still a bit clumsy. We will learn more efficient methods below.

Filling and manipulating arrays

Creating arrays needn't always be done by hand. The following functions are particularly common. Again, they are analogous to what you do in MATLAB.

In [ ]:
# 1-D array, filled with zeros
arr = np.zeros(3)
print arr

# Multidimensional array of a given shape, filled with ones
# This automatically allows you to fill arrays with /any/ value
arr = np.ones((3, 2)) * 5
print arr

# Sequence from 1 up to AND NOT including 16, in steps of 3
# Note that using a float input makes the dtype a float as well
# (unlike range(), np.arange() also accepts float arguments)
arr = np.arange(1., 16., 3)
print arr

# Sequence from 1 up to AND including 16, in 3 steps
# This always returns an array with dtype float
arr = np.linspace(1, 16, 3)
print arr

In [ ]:
# Array of random numbers between 0 and 1, of a given shape
# Note that the inputs here are separate integers, not a tuple
arr = np.random.rand(5, 2)
print arr

# Array of random integers from 0 up to AND NOT including 10, of a given shape
# Here the shape is defined as a tuple again
arr = np.random.randint(0, 10, (5, 2))
print arr

Once we have an array, we may wish to replicate it to create a larger array. Here the concept of an axis becomes important, i.e., along which of the dimensions of the array are you working? axis=0 corresponds to the first dimension of the shape tuple, axis=-1 always corresponds to the last dimension (inner dimension; columns in case of 2D, layers in case of 3D).

In [ ]:
arr0 = np.
array([[1, 2], [3, 4]])
print arr0

# 'repeat' replicates elements along a given axis
# Each element is replicated directly after itself
arr = np.repeat(arr0, 3, axis=-1)
print arr

# We may even specify the number of times each element should be repeated
# The length of the tuple should correspond to the dimension length
arr = np.repeat(arr0, (2, 4), axis=0)
print arr

In [ ]:
print arr0

# 'tile' replicates the array as a whole
# Use a tuple to specify the number of tilings along each dimension
arr = np.tile(arr0, (2, 4))
print arr

In [ ]:
# 'meshgrid' is commonly used to create X and Y coordinate arrays from two vectors,
# where each array contains the X or Y coordinates corresponding to a given pixel in an image
x = np.arange(10)
y = np.arange(5)
print x, y
arrx, arry = np.meshgrid(x, y)
print arrx
print arry

Concatenating arrays allows you to make several arrays into one.

In [ ]:
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# 'concatenate' requires an axis to perform its operation on
# The original arrays should be put in a tuple
arr = np.concatenate((arr0, arr1), axis=0)
print arr  # as new rows
arr = np.concatenate((arr0, arr1), axis=1)
print arr  # as new columns

In [ ]:
# Suppose we want to create a 3-D matrix from them,
# we have to create them as being three-dimensional
# (what happens if you don't?)
arr0 = np.array([[[1], [2]], [[3], [4]]])
arr1 = np.array([[[5], [6]], [[7], [8]]])
print arr0.shape, arr1.shape
arr = np.concatenate((arr0, arr1), axis=2)
print arr

In [ ]:
# hstack, vstack, and dstack are short-hand functions
# which will automatically create these 'missing' dimensions
arr0 = np.array([[1, 2], [3, 4]])
arr1 = np.array([[5, 6], [7, 8]])

# vstack() concatenates rows
arr = np.vstack((arr0, arr1))
print arr

# hstack() concatenates columns
arr = np.
hstack((arr0, arr1))
print arr

# dstack() concatenates 2D arrays into 3D arrays
arr = np.dstack((arr0, arr1))
print arr

In [ ]:
# Their counterparts are the hsplit, vsplit, dsplit functions
# They take a second argument: how do you want to split
arr = np.random.rand(4, 4)
print arr
print '--'

# Splitting into equal parts
arr0, arr1 = np.hsplit(arr, 2)
print arr0
print arr1
print '--'

# Or, specify exact split points
arr0, arr1, arr2 = np.hsplit(arr, (1, 2))
print arr0
print arr1
print arr2

Finally, we can easily reshape and transpose arrays.

In [ ]:
arr0 = np.arange(10)
print arr0
print '--'

# 'reshape' does exactly what you would expect
# Make sure though that the total number of elements remains the same
arr = np.reshape(arr0, (5, 2))
print arr

# You can also leave one dimension blank by using -1 as a value
# Numpy will then compute for you how long this dimension should be
arr = np.reshape(arr0, (-1, 5))
print arr
print '--'

# 'transpose' allows you to switch around dimensions
# A tuple specifies the new order of dimensions
arr = np.transpose(arr, (1, 0))
print arr

# For simply transposing rows and columns, there is the short-hand form .T
arr = arr.T
print arr
print '--'

# 'flatten' creates a 1D array out of everything
arr = arr.flatten()
print arr

Time for an exercise! Can you write your own 'meshgrid3d' function, which returns the resulting 2D arrays as two layers of a 3D matrix, instead of two separate 2D arrays?

In [ ]:
# EXERCISE 4: Create your own meshgrid3d function
# Like np.meshgrid(), it should take two vectors and replicate them; one into columns, the other into rows
# Unlike np.meshgrid(), it should return them as a single 3D array rather than 2D arrays
# ...do not use the np.meshgrid() function
def meshgrid3d(xvec, yvec):
    pass  # fill in!

xvec = np.arange(10)
yvec = np.
arange(5)
xy = meshgrid3d(xvec, yvec)
print xy
print xy[:, :, 0]  # = first output of np.meshgrid()
print xy[:, :, 1]  # = second output of np.meshgrid()

A few useful functions

We can now handle arrays in any way we like, but we still don't know any operations to perform on them, other than the basic arithmetic operations. Luckily numpy implements a large collection of common computations. This is a very short review of some useful functions.

In [ ]:
arr = np.random.rand(5)
print arr

# Sorting and shuffling
res = arr.sort()
print arr  # in-place!!!
print res
res = np.random.shuffle(arr)
print arr  # in-place!!!
print res

In [ ]:
# Min, max, mean, standard deviation
arr = np.random.rand(5)
print arr
mn = np.min(arr)
mx = np.max(arr)
print mn, mx
mu = np.mean(arr)
sigma = np.std(arr)
print mu, sigma

In [ ]:
# Some functions allow you to specify an axis to work along, in case of multidimensional arrays
arr2d = np.random.rand(3, 5)
print arr2d
print np.mean(arr2d, axis=0)
print np.mean(arr2d, axis=1)

In [ ]:
# Trigonometric functions
# Note: Numpy works with radian units, not degrees
arr = np.random.rand(5)
print arr
sn = np.sin(arr * 2 * np.pi)
cs = np.cos(arr * 2 * np.pi)
print sn
print cs

In [ ]:
# Exponents and logarithms
arr = np.random.rand(5)
print arr
xp = np.exp(arr)
print xp
print np.log(xp)

In [ ]:
# Rounding
arr = np.random.rand(5)
print arr
print arr * 5
print np.round(arr * 5)
print np.floor(arr * 5)
print np.ceil(arr * 5)

A complete list of all numpy functions can be found at the Numpy website. Or, a google search for 'numpy tangent', 'numpy median' or similar will usually get you there as well.

A small exercise

Remember how you were asked to create a 2x3 array containing the column-wise and the row-wise means of a matrix above? We now have the knowledge to do this far shorter.
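One compact possibility, shown here as a hedged sketch in Python 3 syntax (np.vstack to stack the two mean vectors, np.mean with an axis argument); the array matches the earlier exercise cells:

```python
import numpy as np

arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# Column-wise means (axis=0) and row-wise means (axis=1),
# stacked as the two rows of a single 2x3 array.
means = np.vstack((np.mean(arr, axis=0), np.mean(arr, axis=1)))
print(means)
```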
Use a concatenation function and a statistical function to obtain the same thing!

In [ ]:
# EXERCISE 5: Make a better version of Exercise 3 with what you've just learned
arr = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]], dtype='float')

# What we had:
print np.array([(arr[:, 0] + arr[:, 1] + arr[:, 2]) / 3, (arr[0, :] + arr[1, :] + arr[2, :]) / 3])

# Now the new version:

A bit harder: The Gabor

A Gabor patch is the product of a sinusoidal grating and a Gaussian. If we ignore orientation and just create a vertically oriented Gabor, the grating luminance (bounded between -1 and 1) is created by:

$grating = \sin(xf)$

where $x$ is the $x$ coordinate of a pixel, and $f$ is the frequency of the sine wave (how many peaks per $2\pi$ coordinate units). A simple 2D Gaussian luminance profile (bounded between 0 and 1) with its peak at coordinate $(0,0)$ and a variance of $1$ is given by:

$gaussian = e^{-(x^2+y^2)/2}$

where $x$ and $y$ are again the $x$ and $y$ coordinates of a pixel. The Gabor luminance (bounded between -1 and 1) for any pixel then equals:

$gabor = grating \times gaussian$

To visualize this, these are the grating, the Gaussian, and the Gabor, respectively (at maximal contrast): [figure omitted]

Now you try to create a 100x100 pixel image of a Gabor. Use $x$ and $y$ coordinate values ranging from $-\pi$ to $\pi$, and a frequency of 10 for a good-looking result.

In [ ]:
# EXERCISE 6: Create a Gabor patch of 100 by 100 pixels
import numpy as np
import matplotlib.pyplot as plt

# Step 1: Define the 1D coordinate values
# Tip: use 100 equally spaced values between -np.pi and np.pi

# Step 2: Create the 2D x and y coordinate arrays
# Tip: use np.meshgrid()

# Step 3: Create the grating
# Tip: Use a frequency of 10

# Step 4: Create the Gaussian
# Tip: use np.exp() to compute a power of e

# Step 5: Create the Gabor

# Visualize your result
# (we will discuss how this works later)
plt.figure(figsize=(15, 5))
plt.subplot(131)
plt.
imshow(grating, cmap='gray')
plt.subplot(132)
plt.imshow(gaussian, cmap='gray')
plt.subplot(133)
plt.imshow(gabor, cmap='gray')
plt.show()

Boolean indexing

The dtype of a Numpy array can also be boolean, that is, True or False. It is then particularly convenient that, given an array of the same shape, these boolean arrays can be used to index other arrays.

In [ ]:
# Check whether each element of a 2x2 array is greater than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr > 0.5
print res
print '--'

# Analogously, check it against each element of a second 2x2 array
arr2 = np.random.rand(2, 2)
print arr2
res = arr > arr2
print res

In [ ]:
# We can use these boolean arrays as indices into other arrays!
# Add 0.5 to any element smaller than 0.5
arr = np.random.rand(2, 2)
print arr
res = arr < 0.5
print res
arr[res] = arr[res] + 0.5
print arr

# Or, shorter:
arr[arr < 0.5] = arr[arr < 0.5] + 0.5
# Or, even shorter:
arr[arr < 0.5] += 0.5

While it is possible to do multiplication and addition on boolean values (this will convert them to ones and zeros), the proper way of doing elementwise boolean logic is to use boolean operators: and, or, xor, not.

In [ ]:
arr = np.array([[1, 2, 3], [4, 5, 6]])

# The short-hand forms for elementwise boolean operators are: & | ~ ^
# Use parentheses around such expressions
res = (arr < 4) & (arr > 1)
print res
print '--'
res = (arr < 2) | (arr == 5)
print res
print '--'
res = (arr > 3) & ~(arr == 6)
print res
print '--'
res = (arr > 3) ^ (arr < 5)
print res

In [ ]:
# To convert boolean indices to normal integer indices, use the 'nonzero' function
print res
print np.nonzero(res)
print '--'

# Separate row and column indices
print np.nonzero(res)[0]
print np.nonzero(res)[1]
print '--'

# Or stack and transpose them to get index pairs
pairs = np.vstack(np.
nonzero(res)).T
print pairs

Vectorizing a simulation

Numpy is excellent at making programs that involve iterative operations more efficient. This then requires you to re-imagine the problem as an array of values, rather than values that change with each loop iteration. For instance, imagine the following situation: You throw a die continuously until you either encounter the sequence '123' or '111'. Which one can be expected to occur sooner? This could be proven mathematically, but in practice it is often faster to do a simulation instead of working out an analytical solution. We could just use two nested for-loops:

In [ ]:
import numpy as np

# We will keep track of the sum of first occurrence positions,
# as well as the number of positions entered into this sum.
# This way we can compute the mean.
sum111 = 0.
n111 = 0.
sum123 = 0.
n123 = 0.

for sim in range(5000):
    # Keep track of how far along we are in finding a given pattern
    d111 = 0
    d123 = 0
    for throw in range(2000):
        # Throw a die
        die = np.random.randint(1, 7)

        # 111 case
        if d111 == 3:
            pass
        elif die == 1 and d111 == 0:
            d111 = 1
        elif die == 1 and d111 == 1:
            d111 = 2
        elif die == 1 and d111 == 2:
            d111 = 3
            sum111 = sum111 + throw
            n111 = n111 + 1
        else:
            d111 = 0

        # 123 case
        if d123 == 3:
            pass
        elif die == 1:
            d123 = 1
        elif die == 2 and d123 == 1:
            d123 = 2
        elif die == 3 and d123 == 2:
            d123 = 3
            sum123 = sum123 + throw
            n123 = n123 + 1
        else:
            d123 = 0

        # Don't continue if both have been found
        if d111 == 3 and d123 == 3:
            break

# Compute the averages
avg111 = sum111 / n111
avg123 = sum123 / n123
print avg111, avg123
# ...can you spot the crucial difference between both patterns?

However, this is inefficient and makes the code unwieldy. Vectorized solutions are usually preferred. Try to run these 5000 simulations using Numpy, without any loops, and see whether the result is the same.
Use a maximal die-roll sequence length of 2000, and just assume that both '123' and '111' will occur before the end of any sequence. You will have to make use of 2D arrays and boolean logic. A quick solution to find the first occurrence in a boolean array is to use argmax - use the online Numpy documentation to find out how to use it. Vectorizing problems is a crucial skill in scientific computing! In [ ]: # EXERCISE 7: Vectorize the above program # You get these lines for free... import numpy as np throws = np . random . randint ( 1 , 7 ,( 5000 , 2000 )) one = ( throws == 1 ) two = ( throws == 2 ) three = ( throws == 3 ) # Find out where all the 111 and 123 sequences occur find111 = find123 = # Then at what index they /first/ occur in each sequence first111 = first123 = # Compute the average first occurrence location for both situations avg111 = avg123 = # Print the result print avg111 , avg123 In this particular example, the nested for-loop solution does have the advantage that it can 'break' out of the die throwing sequence when first occurrences of both patterns have been found, whereas Numpy will always generate complete sequences of 2000 rolls. Remove the break statement in the first solution to see what the speed difference would have been if both programs were truly doing the same thing! PIL: the Python Imaging Library ¶ As vision scientists, we find images a natural stimulus to work with. The Python Imaging Library will help us handle images, similar to the Image Processing toolbox in MATLAB. Note that PIL itself has nowadays been superseded by Pillow , for which excellent documentation can be found here . The module to import is, however, still called 'PIL'. In practice, we will mostly use its Image module. In [ ]: from PIL import Image Loading and showing images ¶ The image we will use for this example code should be in the same directory as this file.
But really, any color image will do, as long as you put it in the same directory as this notebook, and change the filename string in the code to correspond with the actual image filename. In [ ]: # Opening an image is simple enough: # Construct an Image object with the filename as an argument im = Image . open ( 'python.jpg' ) # It is now represented as an object of the 'JpegImageFile' type print im # There are some useful member variables we can inspect here print im . format # format in which the file was saved print im . size # pixel dimensions print im . mode # luminance/color model used # We can even display it # NOTE this is not perfect; meant for debugging im . show () If the im.show() call does not work well on your system, use this function instead to show images in a separate window. Note, you must always close the window before you can continue using the notebook. ( Tkinter is a package to write graphical user interfaces in Python, we will not discuss it here) In [ ]: # Alternative quick-show method from Tkinter import Tk , Button from PIL import ImageTk def alt_show ( im ): win = Tk () tkimg = ImageTk . PhotoImage ( im ) Button ( image = tkimg ) . pack () win . mainloop () alt_show ( im ) Once we have opened the image in PIL, we can convert it to a Numpy object. In [ ]: # We can convert PIL images to an ndarray! arr = np . array ( im ) print arr . dtype # uint8 = unsigned 8-bit integer (values 0-255 only) print arr . shape # Why do we have three layers? # Let's make it a float-type for doing computations arr = arr . astype ( 'float' ) print arr . dtype # This opens up unlimited possibilities for image processing! # For instance, let's make this a grayscale image, and add white noise max_noise = 50 arr = np . mean ( arr , - 1 ) noise = ( np . random . rand ( arr . shape [ 0 ], arr . 
shape [ 1 ]) - 0.5 ) * 2 arr = arr + noise * max_noise # Make sure we don't exceed the 0-255 limits of a uint8 arr [ arr < 0 ] = 0 arr [ arr > 255 ] = 255 The conversion back to PIL is easy as well. In [ ]: # When going back to PIL, it's a good idea to explicitly # specify the right dtype and the mode. # Because automatic conversions might mess things up arr = arr . astype ( 'uint8' ) imn = Image . fromarray ( arr , mode = 'L' ) print imn . format print imn . size print imn . mode # L = greyscale imn . show () # or use alt_show() from above if show() doesn't work well for you # Note that /any/ 2D or 2Dx3 numpy array filled with values between 0 and 255 # can be converted to an image object in this way Resizing, rotating, cropping and converting ¶ The main operations of the PIL Image module you will probably use are its resizing and conversion capabilities. In [ ]: im = Image . open ( 'python.jpg' ) # Make the image smaller ims = im . resize (( 800 , 600 )) ims . show () # Or you could even make it larger # The resample argument allows you to specify the method used iml = im . resize (( 1280 , 1024 ), resample = Image . BILINEAR ) iml . show () In [ ]: # Rotation is similar (unit=degrees) imr = im . rotate ( 10 , resample = Image . BILINEAR , expand = False ) imr . show () # If we want to lose the black corners, we can crop (unit=pixels) imr = imr . crop (( 100 , 100 , 924 , 668 )) imr . show () In [ ]: # 'convert' allows conversion between different color models # The most important here is between 'L' (luminance) and 'RGB' (color) imbw = im . convert ( 'L' ) imbw . show () print imbw . mode imrgb = imbw . convert ( 'RGB' ) imrgb . show () print imrgb .
mode # Note that the grayscale conversion of PIL is more sophisticated # than simply averaging the three layers in Numpy (it is a weighted average) # Also note that the color information is effectively lost after converting to L Advanced ¶ The ImageFilter module implements several types of filters to execute on any image. You can also define your own. In [ ]: from PIL import Image , ImageFilter im = Image . open ( 'python.jpg' ) imbw = im . convert ( 'L' ) # Contour detection filter imf = imbw . filter ( ImageFilter . CONTOUR ) imf . show () # Blurring filter imf = imbw . filter ( ImageFilter . GaussianBlur ( radius = 3 )) imf . show () Similarly, you can import the ImageDraw module to draw shapes and text onto an image. In [ ]: from PIL import Image , ImageDraw im = Image . open ( 'python.jpg' ) # You need to attach a drawing object to the image first imd = ImageDraw . Draw ( im ) # Then you work on this object imd . rectangle ([ 10 , 10 , 100 , 100 ], fill = ( 255 , 0 , 0 )) imd . line ([( 200 , 200 ),( 200 , 600 )], width = 10 , fill = ( 0 , 0 , 255 )) imd . text ([ 500 , 500 ], 'Python' , fill = ( 0 , 255 , 0 )) # The results are automatically applied to the Image object im . show () Saving ¶ Finally, you can of course save these image objects back to a file on the disk. In [ ]: # PIL will figure out the file type by the extension im . save ( 'python.bmp' ) # There are also further options, like compression quality (0-100) im . save ( 'python_bad.jpg' , quality = 5 ) Exercise ¶ We mentioned that the conversion to grayscale in PIL is not just a simple averaging of the RGB layers. Can you visualize as an image what the difference in result looks like, when comparing a simple averaging to a PIL grayscale conversion? Pixels that are less luminant in the plain averaging method should be displayed in red, with a luminance depending on the size of the difference. Pixels that are more luminant when averaging in Numpy should similarly be displayed in green. 
Hint: you will have to make use of Boolean indexing. As an extra, try to maximize the contrast in your image, so that all values from 0-255 are used. As a second extra, save the result as PNG files of three different sizes (large, medium, small), at respectively the full image resolution, half of the image size, and a quarter of the image size. In [ ]: # EXERCISE 8: Visualize the difference between the PIL conversion to grayscale, and a simple averaging of RGB # Display pixels where the average is LESS luminant in red, and where it is MORE luminant in shades of green # The luminance of these colors should correspond to the size of the difference # # Extra 1: Maximize the overall contrast in your image # # Extra 2: Save as three PNG files, of different sizes (large, medium, small) Matplotlib ¶ While PIL is useful for processing photographic images, it falls short for creating data plots and other kinds of schematic figures. Matplotlib offers a far more advanced solution for this, specifically through its pyplot module. Quick plots ¶ Common figures such as scatter plots, histograms and bar charts can be generated and manipulated very simply. In [ ]: import numpy as np from PIL import Image import matplotlib.pyplot as plt # As data for our plots, we will use the pixel values of the image # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () In [ ]: # QUICKPLOT 1: Correlation of luminances in the image # This works if you want to be very quick: # (xb means blue crosses, .g are green dots) plt . plot ( R , B , 'xb' ) plt . plot ( R , G , '.g' ) In [ ]: # However, we will take a slightly more disciplined approach here # Note that Matplotlib wants colors expressed as 0-1 values instead of 0-255 # Create a square figure plt .
figure ( figsize = ( 5 , 5 )) # Plot both scatter clouds # marker: self-explanatory # linestyle: 'None' because we want no line # color: RGB triplet with values 0-1 plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Make the axis scales equal, and name them plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Show the result plt . show () In [ ]: # QUICKPLOT 2: Histogram of 'red' values in the image plt . hist ( R ) In [ ]: # ...and now a nicer version # Make a non-square figure plt . figure ( figsize = ( 7 , 5 )) # Make a histogram with 25 red bins # Here we simply use the abbreviation 'r' for red plt . hist ( R , bins = 25 , color = 'r' ) # Set the X axis limits and label plt . xlim ([ 0 , 255 ]) plt . xlabel ( 'Red value' , size = 16 ) # Remove the Y ticks and labels by setting them to an empty list plt . yticks ([]) # Remove the top ticks by specifying the 'top' argument plt . tick_params ( top = False ) # Add two vertical lines for the mean and the median plt . axvline ( np . mean ( R ), color = 'g' , linewidth = 3 , label = 'mean' ) plt . axvline ( np . median ( R ), color = 'b' , linewidth = 1 , linestyle = ':' , label = 'median' ) # Generate a legend based on the label= arguments plt . legend ( loc = 2 ) # Show the plot plt . show () In [ ]: # QUICKPLOT 3: Bar chart of mean+std of RGB values plt . bar ([ 0 , 1 , 2 ],[ np . mean ( R ), np . mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )]) In [ ]: # ...and now a nicer version # Make a non-square-figure plt . figure ( figsize = ( 7 , 5 )) # Plot the bars with various options # x location where bars start, y height of bars # yerr: data for error bars # width: width of the bars # color: surface color of bars # ecolor: color of error bars ('k' means black) plt . bar ([ 0 , 1 , 2 ], [ np . mean ( R ), np . 
mean ( G ), np . mean ( B )], yerr = [ np . std ( R ), np . std ( G ), np . std ( B )], width = 0.75 , color = [ 'r' , 'g' , 'b' ], ecolor = 'k' ) # Set the X-axis limits and tick labels plt . xlim (( - 0.25 , 3. )) plt . xticks ( np . array ([ 0 , 1 , 2 ]) + 0.75 / 2 , [ 'Red' , 'Green' , 'Blue' ], size = 16 ) # Remove all X-axis ticks by setting their length to 0 plt . tick_params ( length = 0 ) # Set a figure title plt . title ( 'RGB Color Channels' , size = 16 ) # Show the figure plt . show () Full documentation of all these pyplot commands and options can be found here . If you use Matplotlib, you will be consulting this page a lot! Saving to a file ¶ Saving to a file is easy enough, using the savefig() function. However, there are some caveats, depending on the exact environment you are using. You have to use it BEFORE calling plt.show() and, in case of this notebook, within the same codebox. The reason for this is that Matplotlib automatically decides for you which plot commands belong to the same figure based on these criteria. In [ ]: # So, copy-paste this line into the box above, before the plt.show() command plt . savefig ( 'bar.png' ) # There are also further formatting options possible, e.g. plt . savefig ( 'bar.svg' , dpi = 300 , bbox_inches = 'tight' , pad_inches = 1 , facecolor = ( 0.8 , 0.8 , 0.8 )) Visualizing arrays ¶ Like PIL, Matplotlib is capable of displaying the contents of 2D Numpy arrays. The primary method is imshow() . In [ ]: # A simple grayscale luminance map # cmap: colormap used to display the values plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np . mean ( arr , 2 ), cmap = 'gray' ) plt . show () # Importantly and contrary to PIL, imshow luminances are by default relative # That is, the values are always rescaled to span the full colormap (maximum contrast) # Moreover, colormaps other than grayscale can be used plt . figure ( figsize = ( 5 , 5 )) plt . imshow ( np .
mean ( arr , 2 ) + 100 , cmap = 'jet' ) # or hot, hsv, cool,... plt . show () # as you can see, adding 100 didn't make a difference here Multi-panel figures ¶ As we noted, Matplotlib is behind the scenes keeping track of what your current figure is. This is often convenient, but in some cases you want to keep explicit control of what figure you're working on. For this, we will have to make a distinction between Figure and Axes objects. In [ ]: # 'Figure' objects are returned by the plt.figure() command fig = plt . figure ( figsize = ( 7 , 5 )) print type ( fig ) # Axes objects are the /actual/ plots within the figure # Create them using the add_axes() method of the figure object # The input coordinates are relative (left, bottom, width, height) ax0 = fig . add_axes ([ 0.1 , 0.1 , 0.4 , 0.7 ], xlabel = 'The X Axis' ) ax1 = fig . add_axes ([ 0.2 , 0.2 , 0.5 , 0.2 ], axisbg = 'gray' ) ax2 = fig . add_axes ([ 0.4 , 0.5 , 0.4 , 0.4 ], projection = 'polar' ) print type ( ax0 ), type ( ax1 ), type ( ax2 ) # This allows you to execute functions like savefig() directly on the figure object # This resolves Matplotlib's confusion of what the current figure is, when using plt.savefig() fig . savefig ( 'fig.png' ) # It also allows you to add text to the figure as a whole, across the different axes objects fig . text ( 0.5 , 0.5 , 'splatter' , color = 'r' ) # The overall figure title can be set separate from the individual plot titles fig . suptitle ( 'What a mess' , size = 18 ) # show() is actually a figure method as well # It just gets 'forwarded' to what is thought to be the current figure if you use plt.show() fig . show () For a full list of the Figure methods and options, go here . In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 10 )) # As we saw, many of the axes properties can already be set at their creation ax0 = fig . add_axes ([ 0. , 0. , 0.25 , 0.25 ], xticks = ( 0.1 , 0.5 , 0.9 ), xticklabels = ( 'one' , 'thro' , 'twee' )) ax1 = fig . 
add_axes ([ 0.3 , 0. , 0.25 , 0.25 ], xscale = 'log' , ylim = ( 0 , 0.5 )) ax2 = fig . add_axes ([ 0.6 , 0. , 0.25 , 0.25 ]) # Once you have the axes object though, there are further methods available # This includes many of the top-level pyplot functions # If you use for instance plt.plot(), Matplotlib is actually 'forwarding' this # to an Axes.plot() call on the current Axes object R . sort () G . sort () B . sort () ax2 . plot ( R , color = 'r' , linestyle = '-' , marker = 'None' ) # plot directly to an Axes object of choice plt . plot ( G , color = 'g' , linestyle = '-' , marker = 'None' ) # plt.plot() just plots to the last created Axes object ax2 . plot ( B , color = 'b' , linestyle = '-' , marker = 'None' ) # Other top-level pyplot functions are simply renamed to 'set_' functions here ax1 . set_xticks ([]) plt . yticks ([]) # Show the figure fig . show () The full methods and options of Axes can be found here . Clearly, when making a multi-panel figure, we are actually creating a single Figure object with multiple Axes objects attached to it. Having to set the Axes sizes manually is annoying though. Luckily, the subplot() method can handle much of this automatically. In [ ]: # Create a new figure fig = plt . figure ( figsize = ( 15 , 5 )) # Specify the LAYOUT of the subplots (rows,columns) # as well as the CURRENT Axes you want to work on ax0 = fig . add_subplot ( 231 ) # Equivalent top-level call on the current figure # It is also possible to create several subplots at once using plt.subplots() ax1 = plt . subplot ( 232 ) # Optional arguments are similar to those of add_axes() ax2 = fig . add_subplot ( 233 , title = 'three' ) # We can use these Axes object as before ax3 = fig . add_subplot ( 234 ) ax3 . plot ( R , 'r-' ) ax3 . set_xticks ([]) ax3 . set_yticks ([]) # We skipped the fifth subplot, and create only the 6th ax5 = fig . add_subplot ( 236 , projection = 'polar' ) # We can adjust the spacings afterwards fig . 
subplots_adjust ( hspace = 0.4 ) # And even make room in the figure for a plot that doesn't fit the grid fig . subplots_adjust ( right = 0.5 ) ax6 = fig . add_axes ([ 0.55 , 0.1 , 0.3 , 0.8 ]) # Show the figure fig . show () Exercise: Function plots ¶ Create a figure with a 2:1 aspect ratio, containing two subplots, one above the other. The TOP figure should plot one full cycle of a sine wave, that is $y=sin(x)$. Use $0$ to $2\pi$ as values on the X axis. On the same scale, the BOTTOM figure should plot $y=sin(x^2)$ instead. Tweak your figure until you think it looks good. In [ ]: # EXERCISE 9: Plot y=sin(x) and y=sin(x^2) in two separate subplots, one above the other # Let x range from 0 to 2*pi Finer figure control ¶ If you are not satisfied with the output of these general plotting functions, despite all the options they offer, you can start fiddling with the details manually. First, many figure elements can be manually added through top-level or Axes functions: In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Add horizontal lines ax0 . axhline ( 0 , color = 'g' ) ax0 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax0 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( 0 , color = 'g' ) ax1 . axhline ( 0.5 , color = 'gray' , linestyle = ':' ) ax1 . axhline ( - 0.5 , color = 'gray' , linestyle = ':' ) # Add text to the plots ax0 . text ( 0.1 , - 0.9 , '$y = sin(x)$' , size = 16 ) # math mode for proper formula formatting! ax1 . text ( 0.1 , - 0.9 , '$y = sin(x^2)$' , size = 16 ) # Annotate certain points with a value for x_an in np . linspace ( 0 , 2 * np . pi , 9 ): ax0 . annotate ( str ( round ( np . sin ( x_an ), 2 )),( x_an , np . sin ( x_an ))) # Add an arrow (x,y,xlength,ylength) ax0 . arrow ( np .
pi - 0.5 , - 0.5 , 0.5 , 0.5 , head_width = 0.1 , length_includes_head = True ) Second, all basic elements like lines, polygons and the individual axis lines are customizable objects in their own right , attached to a specific Axes object. They can be retrieved, manipulated, created from scratch, and added to existing Axes objects. In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # For instance, fetch the X axis # XAxis objects have their own methods xax = ax1 . get_xaxis () print type ( xax ) # These methods allow you to fetch the even smaller building blocks # For instance, tick-lines are Line2D objects attached to the XAxis xaxt = xax . get_majorticklines () print len ( xaxt ) # Of which you can fetch AND change the properties # Here we change just one tickline into a cross print xaxt [ 6 ] . get_color () xaxt [ 6 ] . set_color ( 'g' ) xaxt [ 6 ] . set_marker ( 'x' ) xaxt [ 6 ] . set_markersize ( 10 ) In [ ]: # This uses the result of the exercise above # You have to copy-paste it into the same code-box, before the fig.show() # Another example: fetch the lines in the plot # Change the color, change the marker, and mark only every 100 points for one specific line ln = ax0 . get_lines () print ln ln [ 0 ] . set_color ( 'g' ) ln [ 0 ] . set_marker ( 'o' ) ln [ 0 ] . set_markerfacecolor ( 'b' ) ln [ 0 ] . set_markevery ( 100 ) # Finally, let's create a graphic element from scratch, that is not available as a top-level pyplot function # And then attach it to existing Axes # NOTE: we need to import something before we can create the ellipse like this. What should we import? ell = matplotlib . patches . Ellipse (( np . pi , 0 ), 1. , 1. , color = 'r' ) ax0 . add_artist ( ell ) ell . set_hatch ( '//' ) ell . set_edgecolor ( 'black' ) ell . 
set_facecolor (( 0.9 , 0.9 , 0.9 )) Exercise: Add regression lines ¶ Take the scatterplot from the first example, and manually add a regression line to both the R-G and the R-B comparisons. Try not to use the plot() function for the regression line, but manually create a Line2D object instead, and attach it to the Axes. Useful functions: np.polyfit(x,y,1) performs a linear regression, returning slope and constant plt.gca() retrieves the current Axes object matplotlib.lines.Line2D(x,y) can create a new Line2D object from x and y coordinate vectors In [ ]: # EXERCISE 10: Add regression lines import numpy as np from PIL import Image import matplotlib.pyplot as plt import matplotlib.lines as lines # Open image, convert to an array im = Image . open ( 'python.jpg' ) im = im . resize (( 400 , 300 )) arr = np . array ( im , dtype = 'float' ) # Split the RGB layers and flatten them R , G , B = np . dsplit ( arr , 3 ) R = R . flatten () G = G . flatten () B = B . flatten () # Do the plotting plt . figure ( figsize = ( 5 , 5 )) plt . plot ( R , B , marker = 'x' , linestyle = 'None' , color = ( 0 , 0 , 0.6 )) plt . plot ( R , G , marker = '.' , linestyle = 'None' , color = ( 0 , 0.35 , 0 )) # Tweak the plot plt . axis ([ 0 , 255 , 0 , 255 ]) plt . xlabel ( 'Red value' ) plt . ylabel ( 'Green/Blue value' ) # Fill in your code... # Show the result plt . show () Scipy ¶ Scipy is a large library of scientific functions, covering for instance numerical integration, linear algebra, Fourier transforms, and interpolation algorithms. If you can't find the equivalent of your favorite MATLAB function in any of the previous three packages, Scipy is a good place to look. A full list of all submodules can be found here . We will pick two useful modules from SciPy: stats and fftpack I will not give a lot of explanation here. I'll leave it up to you to navigate through the documentation, and find out how these functions work. 
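As a warm-up for that documentation dive, here is a minimal sketch connecting Scipy back to the regression exercise above: scipy.stats.linregress fits a straight line in one call, as an alternative to np.polyfit. The x and y data below are made up purely for illustration.

```python
import numpy as np
from scipy import stats

# Made-up data: a noisy linear relationship (illustrative only)
np.random.seed(0)
x = np.linspace(0, 255, 100)
y = 0.5 * x + 20 + np.random.randn(100) * 10

# linregress returns the least-squares slope and intercept,
# plus the correlation coefficient, p-value and standard error
slope, intercept, r, p, stderr = stats.linregress(x, y)
print(slope, intercept, r)

# Evaluate the fitted line over the original x range
y_fit = slope * x + intercept
```

Compared to np.polyfit(x, y, 1), which returns only the coefficients, linregress also reports the strength and significance of the fit.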
Statistics ¶ In [ ]: import numpy as np import scipy.stats as stats # Generate random numbers between 0 and 1 data = np . random . rand ( 30 ) # Do a t-test with a H0 for the mean of 0.4 t , p = stats . ttest_1samp ( data , 0.4 ) print p # Generate another sample of random numbers, with mean 0.4 data2 = np . random . rand ( 30 ) - 0.1 # Do a t-test that these have the same mean t , p = stats . ttest_ind ( data , data2 ) print p In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Simulate the size of the F statistic when comparing three conditions # Given a constant n, and an increasing true effect size. true_effect = np . linspace ( 0 , 0.5 , 500 ) n = 100 Fres = [] # Draw random normally distributed samples for each condition, and do a one-way ANOVA for eff in true_effect : c1 = stats . norm . rvs ( 0 , 1 , size = n ) c2 = stats . norm . rvs ( eff , 1 , size = n ) c3 = stats . norm . rvs ( 2 * eff , 1 , size = n ) F , p = stats . f_oneway ( c1 , c2 , c3 ) Fres . append ( F ) # Create the plot plt . figure () plt . plot ( true_effect , Fres , 'r*-' ) plt . xlabel ( 'True Effect' ) plt . ylabel ( 'F' ) plt . show () In [ ]: import numpy as np import matplotlib.pyplot as plt import scipy.stats as stats # Compute the pdf and cdf of normal distributions, with increasing sd's # Then plot them in different colors # (of course, many other distributions are also available) x = np . linspace ( - 5 , 5 , 1000 ) sds = np . linspace ( 0.25 , 2.5 , 10 ) cols = np . linspace ( 0.15 , 0.85 , 10 ) # Create the figure fig = plt . figure ( figsize = ( 10 , 5 )) ax0 = fig . add_subplot ( 121 ) ax1 = fig . add_subplot ( 122 ) # Compute the densities, and plot them for i , sd in enumerate ( sds ): y1 = stats . norm . pdf ( x , 0 , sd ) y2 = stats . norm . cdf ( x , 0 , sd ) ax0 . plot ( x , y1 , color = cols [ i ] * np . array ([ 1 , 0 , 0 ])) ax1 . plot ( x , y2 , color = cols [ i ] * np . array ([ 0 , 1 , 0 ])) # Show the figure plt . 
show () The stats module of SciPy contains more statistical distributions and further tests such as a Kruskal-Wallis test, Wilcoxon test, a Chi-Square test, a test for normality, and so forth. A full listing of functions is found here . For serious statistical models, however, you should be looking at the statsmodels package, or the rpy interfacing package, allowing R to be called from within Python. Fast Fourier Transform ¶ FFT is commonly used to process or analyze images (as well as sound). Numpy has an FFT package, numpy.fft , but SciPy has its own set of functions as well in scipy.fftpack . Both are very similar; you can use whichever package you like. I will assume that you are familiar with the basic underlying theory. That is, that any periodic function can be described as a sum of sine-waves of different frequencies, amplitudes and phases. A Fast Fourier Transform allows you to do this very quickly for equally spaced samples from the function, returning a finite set of sinusoidal components with n equal to the number of samples, ordered by frequency. Let's do this for a simple 1D function. In [ ]: import numpy as np import scipy.fftpack as fft # The original data: a step function data = np . zeros ( 200 , dtype = 'float' ) data [ 25 : 100 ] = 1 # Decompose into sinusoidal components # The result is a series of complex numbers as long as the data itself res = fft . fft ( data ) # FREQUENCY is implied by the ordering, but can be retrieved as well # It increases from 0 to the Nyquist frequency (0.5), followed by its reversed negative counterpart # Note: in case of real input data, the FFT results will be conjugate-symmetric, so the negative-frequency components mirror the positive-frequency ones
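A few quick checks, as a hedged sketch, make the structure of the FFT output concrete: the zeroth component is the sum of the samples (the DC term), for real-valued input the components come in conjugate-symmetric pairs, and the inverse transform recovers the original data.

```python
import numpy as np
import scipy.fftpack as fft

# The same step function as in the example above
data = np.zeros(200, dtype='float')
data[25:100] = 1

res = fft.fft(data)

# Component 0 is the sum of the data (the DC term)
print(res[0].real)  # 75.0

# For real-valued input, component k is the complex conjugate of component -k
print(np.allclose(res[1], np.conj(res[-1])))  # True

# The inverse transform recovers the original signal
back = fft.ifft(res)
print(np.allclose(back.real, data))  # True
```

The same checks work with numpy.fft in place of scipy.fftpack, since the two interfaces are nearly identical.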
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/podcasts/
TIME for Kids | Podcasts | Topic | G5-6 Skip to main content Search Articles by Grade level Grades K-1 Articles Grade 2 Articles Grades 3-4 Articles Grades 5-6 Articles Topics Animals Arts Ask Angela Books Business Careers Community Culture Debate Earth Science Education Election 2024 Engineering Environment Food and Nutrition Games Government History Holidays Inventions Movies and Television Music and Theater Nature News People Places Podcasts Science Service Stars Space Sports The Human Body The View Transportation Weather World Young Game Changers Your $ Financial Literacy Content Grade 4 Edition Grade 5-6 Edition For Grown-ups Resource Spotlight Also from TIME for Kids: Log In role: none user_age: none editions: The page you are about to enter is for grown-ups. Enter your birth date to continue. Month (MM) 01 02 03 04 05 06 07 08 09 10 11 12 Year (YYYY) 1926 1927 1928 1929 1930 1931 1932 1933 1934 1935 1936 1937 1938 1939 1940 1941 1942 1943 1944 1945 1946 1947 1948 1949 1950 1951 1952 1953 1954 1955 1956 1957 1958 1959 1960 1961 1962 1963 1964 1965 1966 1967 1968 1969 1970 1971 1972 1973 1974 1975 1976 1977 1978 1979 1980 1981 1982 1983 1984 1985 1986 1987 1988 1989 1990 1991 1992 1993 1994 1995 1996 1997 1998 1999 2000 2001 2002 2003 2004 2005 2006 2007 2008 2009 2010 2011 2012 2013 2014 2015 2016 2017 2018 2019 2020 2021 2022 2023 2024 2025 2026 Submit Podcasts Time Off Seeking Silver Linings January 22, 2025 The Search for the Silver Lining is a podcast based on stories by 10-year-old Isla Nemeth. It’s for listeners who love medieval legends about knights and wizards. The show’s main character is also named Isla. She’s fascinated by tales of… Audio Time Off What&#039;s That Sound? December 21, 2023 Think of a sound that’s important to you. Got it? Now imagine that sound— poof! —suddenly disappearing from the world. 
Sound Detectives, presented by actor and TV host LeVar Burton, is a podcast that turns ordinary sounds into epic mysteries.… Audio Time Off Creativity Unleashed September 20, 2023 Story Pirates is a theatrical company that invites students from around the world to submit original stories. The group travels the country bringing these stories to life onstage. Lee Overtree and some friends came up with the idea for Story… Audio Time Off Money-Minded February 15, 2023 What’s cryptocurrency? Why can’t kids have jobs? Why is our money green? These are just a few of the questions explored in Million Bazillion, a podcast that teaches kids about business and the economy. In each episode, hosts Bridget Bodnar… Audio Time Off Strange World September 14, 2022 You don’t have to go to outer space to find amazing things. Terrestrials, a new podcast by Radiolab for Kids, finds weirdness and mystery right here on Earth. The series explores wonders such as an octopus that escapes from… Audio Time Off Out of This World September 7, 2022 Do space aliens exist? This is the question at the heart of Aliens: Join the Scientists Searching Space for Extraterrestrial Life, by Joalda Morancy. This nonfiction book packs a wealth of information about the complexities of space and the possibility… Audio Time Off What&#039;s Your Story? January 11, 2022 Have a great idea for a story? The Story Seeds Podcast pairs kids who have original ideas with professional authors. Together, they brainstorm and develop the idea. Then the author writes a story, which is read on the podcast. Sandhya… Audio Time Off A Podcast of the Past September 16, 2020 Are you a history buff? Then you’ll love the podcast The Past and the Curious. It features surprising, funny, and inspiring stories about historical figures and events. 
Recent episodes have covered subjects from the nurse Florence Nightingale to the Harlem… Audio Arts Hidden Treasures January 18, 2019 Phoebe Wolinetz, 9, dreamed up a mystery about a mother-daughter detective duo. The details were vivid. The setting: a rare-plant shop in New York City. The culprit : a man with yellow eyes from a scary, faraway island. Story Pirates… Contact us Privacy policy California privacy Terms of Service Subscribe CLASSROOM INTERNATIONAL &copy; 2026 TIME USA, LLC. All Rights Reserved. Powered by WordPress.com VIP
2026-01-13T09:30:39
https://llvm.org/doxygen/classOutputBuffer.html#a7edcffe9292e5aec52b66240970b660d
LLVM: OutputBuffer Class Reference (LLVM 22.0.0git)

#include "llvm/Demangle/Utility.h"

Public Member Functions

- OutputBuffer(char *StartBuf, size_t Size)
- OutputBuffer(char *StartBuf, size_t *SizePtr)
- OutputBuffer() = default
- OutputBuffer(const OutputBuffer &) = delete
- OutputBuffer &operator=(const OutputBuffer &) = delete
- virtual ~OutputBuffer() = default
- operator std::string_view() const
- virtual void printLeft(const Node &N): Called by the demangler when printing the demangle tree.
- virtual void printRight(const Node &N)
- virtual void notifyInsertion(size_t, size_t): Called when we write to this object anywhere other than the end.
- virtual void notifyDeletion(size_t, size_t): Called when we make the CurrentPosition of this object smaller.
- bool isInParensInTemplateArgs() const: Returns true if we're currently between a '(' and ')' when printing template args.
- bool isInsideTemplateArgs() const: Returns true if we're printing template args.
- void printOpen(char Open = '(')
- void printClose(char Close = ')')
- OutputBuffer &operator+=(std::string_view R)
- OutputBuffer &operator+=(char C)
- OutputBuffer &prepend(std::string_view R)
- OutputBuffer &operator<<(std::string_view R)
- OutputBuffer &operator<<(char C)
- OutputBuffer &operator<<(long long N)
- OutputBuffer &operator<<(unsigned long long N)
- OutputBuffer &operator<<(long N)
- OutputBuffer &operator<<(unsigned long N)
- OutputBuffer &operator<<(int N)
- OutputBuffer &operator<<(unsigned int N)
- void insert(size_t Pos, const char *S, size_t N)
- size_t getCurrentPosition() const
- void setCurrentPosition(size_t NewPos)
- char back() const
- bool empty() const
- char *getBuffer()
- char *getBufferEnd()
- size_t getBufferCapacity() const

Public Attributes

- unsigned CurrentPackIndex = std::numeric_limits<unsigned>::max(): If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing.
- unsigned CurrentPackMax = std::numeric_limits<unsigned>::max()
- struct { unsigned ParenDepth = 0; bool InsideTemplate = false; } TemplateTracker: ParenDepth is the depth of '(' and ')' inside the currently printed template arguments; InsideTemplate is true if we're currently printing a template argument.

Detailed Description

Definition at line 34 of file Utility.h.

Constructor & Destructor Documentation

OutputBuffer(char *StartBuf, size_t Size) [inline]: Definition at line 75 of file Utility.h. References Size. Referenced by operator+=() (both overloads), operator<<() (all overloads), operator=(), OutputBuffer(), and prepend().

OutputBuffer(char *StartBuf, size_t *SizePtr) [inline]: Definition at line 77 of file Utility.h. References OutputBuffer().

OutputBuffer() = default

OutputBuffer(const OutputBuffer &) = delete: References OutputBuffer().

virtual ~OutputBuffer() = default

Member Function Documentation

back() const [inline]: Definition at line 213 of file Utility.h. References DEMANGLE_ASSERT.

empty() const [inline]: Definition at line 218 of file Utility.h.

getBuffer() [inline]: Definition at line 220 of file Utility.h. Referenced by llvm::dlangDemangle(), removeNullBytes(), and llvm::ThinLTOCodeGenerator::writeGeneratedObject().

getBufferCapacity() const [inline]: Definition at line 222 of file Utility.h.

getBufferEnd() [inline]: Definition at line 221 of file Utility.h.

getCurrentPosition() const [inline]: Definition at line 207 of file Utility.h. Referenced by decodePunycode(), llvm::dlangDemangle(), and removeNullBytes().

insert(size_t Pos, const char *S, size_t N) [inline]: Definition at line 194 of file Utility.h. References DEMANGLE_ASSERT, N, and notifyInsertion(). Referenced by decodePunycode().

isInParensInTemplateArgs() const [inline]: Returns true if we're currently between a '(' and ')' when printing template args. Definition at line 118 of file Utility.h. References TemplateTracker.

isInsideTemplateArgs() const [inline]: Returns true if we're printing template args. Definition at line 123 of file Utility.h. References TemplateTracker. Referenced by printClose() and printOpen().

notifyDeletion(size_t, size_t) [inline, virtual]: Called when we make the CurrentPosition of this object smaller. Definition at line 100 of file Utility.h. Referenced by setCurrentPosition().

notifyInsertion(size_t, size_t) [inline, virtual]: Called when we write to this object anywhere other than the end. Definition at line 97 of file Utility.h. Referenced by insert() and prepend().

operator std::string_view() const [inline]: Definition at line 86 of file Utility.h.

operator+=(char C) [inline]: Definition at line 145 of file Utility.h.

operator+=(std::string_view R) [inline]: Definition at line 136 of file Utility.h. References Size.

operator<<(char C) [inline]: Definition at line 168 of file Utility.h.

operator<<(int N) [inline]: Definition at line 186 of file Utility.h.

operator<<(long long N) [inline]: Definition at line 170 of file Utility.h.

operator<<(long N) [inline]: Definition at line 178 of file Utility.h.

operator<<(std::string_view R) [inline]: Definition at line 166 of file Utility.h.

operator<<(unsigned int N) [inline]: Definition at line 190 of file Utility.h.

operator<<(unsigned long long N) [inline]: Definition at line 174 of file Utility.h.

operator<<(unsigned long N) [inline]: Definition at line 182 of file Utility.h.

operator=(const OutputBuffer &) = delete

prepend(std::string_view R) [inline]: Definition at line 151 of file Utility.h. References notifyInsertion() and Size.

printClose(char Close = ')') [inline]: Definition at line 130 of file Utility.h. References isInsideTemplateArgs() and TemplateTracker.

printLeft(const Node &N) [inline, virtual]: Called by the demangler when printing the demangle tree. By default calls into Node::print{Left|Right} but can be overridden by clients to track additional state when printing the demangled name. Definition at line 6202 of file ItaniumDemangle.h. References N.

printOpen(char Open = '(') [inline]: Definition at line 125 of file Utility.h. References isInsideTemplateArgs() and TemplateTracker.

printRight(const Node &N) [inline, virtual]: Definition at line 6204 of file ItaniumDemangle.h. References N.

setCurrentPosition(size_t NewPos) [inline]: Definition at line 208 of file Utility.h. References notifyDeletion(). Referenced by llvm::dlangDemangle() and removeNullBytes().

Member Data Documentation

CurrentPackIndex: If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing. Definition at line 104 of file Utility.h.

CurrentPackMax: Definition at line 105 of file Utility.h.

InsideTemplate: True if we're currently printing a template argument. Definition at line 113 of file Utility.h.

ParenDepth: The depth of '(' and ')' inside the currently printed template arguments. Definition at line 110 of file Utility.h.

TemplateTracker: Referenced by isInParensInTemplateArgs(), isInsideTemplateArgs(), printClose(), and printOpen().

The documentation for this class was generated from the following files: include/llvm/Demangle/Utility.h and include/llvm/Demangle/ItaniumDemangle.h.

Generated for LLVM by Doxygen 1.14.0.
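The TemplateTracker bookkeeping documented above (printOpen/printClose adjusting ParenDepth only while InsideTemplate is set) can be modeled outside C++. The following is a self-contained TypeScript sketch of that state machine, not the LLVM implementation; the real class is C++ in llvm/Demangle/Utility.h, and the helper names beginTemplateArgs/endTemplateArgs/text are inventions of this sketch.

```typescript
// Self-contained analogue of OutputBuffer's TemplateTracker bookkeeping.
// Only the semantics of printOpen/printClose and the two template-argument
// queries are modeled; everything else about OutputBuffer is omitted.
class TemplateTrackerModel {
  private parenDepth = 0;         // depth of '(' … ')' inside template args
  private insideTemplate = false; // are we printing a template argument?
  private out = "";

  beginTemplateArgs(): void { this.insideTemplate = true; this.out += "<"; }
  endTemplateArgs(): void { this.insideTemplate = false; this.out += ">"; }

  // printOpen: emit the opening character and, when inside template args,
  // track the paren depth.
  printOpen(open = "("): void {
    if (this.insideTemplate) this.parenDepth++;
    this.out += open;
  }

  // printClose: emit the closing character and unwind the tracked depth.
  printClose(close = ")"): void {
    if (this.insideTemplate) this.parenDepth--;
    this.out += close;
  }

  // "Returns true if we're currently between a '(' and ')' when printing
  // template args."
  isInParensInTemplateArgs(): boolean {
    return this.insideTemplate && this.parenDepth > 0;
  }

  // "Returns true if we're printing template args."
  isInsideTemplateArgs(): boolean { return this.insideTemplate; }

  text(): string { return this.out; }
}
```

For example, while printing a name like `foo<decltype(x)>`, the query isInParensInTemplateArgs() is true only between the inner parentheses, which is the distinction the demangler uses when deciding how to format nested output.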
https://www.timeforkids.com/k1/topics/the-human-body/
TIME for Kids | The Human Body | Topic | K-1

- Super Senses (Science, September 19, 2025): People have five senses. These are sight, hearing, touch, taste, and smell. Our senses help us understand the world. Look around. What do you see? Breathe in through your nose. What do you smell? These are your senses in action!…
- Be Mindful (Health, September 19, 2025): Mindfulness is a way to use your senses. It focuses you. Pay attention to what is happening now. This can help you relax. You can do this anywhere. Try the 5-4-3-2-1 method below. Think of five things you are…
- Baby Facts (Science, January 24, 2025): Human babies are different from adults. They change a lot as they grow. 1. Adults have about 200 bones. Babies have more than 270 bones. Some join over time. 2. A baby's eyes are a different color. They may start…
- Exercise Together (Health, November 1, 2024): To be healthy, you need to move your body. It can be fun to exercise with others. Here are some great ways to stay fit. Who will exercise with you? Join a team. Add action to your day and…
- Sixty Minutes (Health, November 1, 2024): Kids should be active for at least 60 minutes each day. That is what doctors recommend. That adds up to one hour. Can you make a plan to meet this goal? Here are tips. Exercise Plan Look at your daily…
- Summer Safety (Health, April 19, 2024): Summer is almost here. It's a great time to go out and play. But take care. The weather gets hot, and bugs are biting. How can you stay healthy this summer? Follow these tips. Stay on trails. Going for…
- Signs of Fear (Science, September 8, 2023): Feeling scared is natural. It happens to all of us. Fear is a reaction. It happens in your body. It can alert you to danger. Here are some of the ways you feel fear. Your heart beats faster. Feel…
- Feeling the Heat (Health, April 7, 2023): Summer is just around the corner! In the summer, days are longer. Temperatures are higher. Your body reacts to heat and sun. Read about the effects below. Sweat When you get hot, you sweat. Sweat leaves your skin through your…
- Summer Safety (Health, April 7, 2023): There are lots of fun things to do in the summer. But too much heat or sun is unhealthy. Stay safe in the summer. Be prepared for the weather. Here are some tips. 1. Drink lots of water, even if…
- Five Senses in Fall (Science, September 22, 2022): The air has gotten cooler. Leaves are turning bright colors and falling to the ground. Fall is here! These are the ways we experience the season. Seeing (Above) Look up. Colorful leaves are above you. Now look at the…
https://www.timeforkids.com/k1/topics/holidays/
TIME for Kids | Holidays | Topic | K-1

- Every Action Counts (Environment, March 19, 2025): Earth Day is April 22. It is celebrated around the world. People pitch in for the planet. Will you help? Everyone can take part. There are simple things you can do. Your actions could have a big impact. Turn off…
- Waste Less (Environment, March 19, 2025): Waste is anything you get rid of. Too much waste is harmful to the environment. Here are three ways to create less of it. Recycle Recycling (above) turns used materials into something new. They are not wasted. Reuse Buy things…
- Let's Celebrate (World, December 19, 2024): In many places, a new year begins on January 1. Here is how four countries celebrate. Do you have any New Year's traditions? In Spain, they eat grapes. At midnight, the clock chimes 12 times. The Spanish eat one…
- Other New Years (World, December 19, 2024): The New Year holiday is celebrated at different times. Here are three that are not celebrated on January 1. Lunar New Year Many Asian countries celebrate this holiday (above). Its date is based on the moon's phases. It falls…
- Time to Celebrate (November 30, 2023): Winter is a time for bundling up. It is also a time for celebrating. There are many winter holidays. Each has its customs and traditions. Do you celebrate any of these holidays? Hanukkah Hanukkah is known as the Festival…
- New Year's Traditions (World, November 30, 2023): The New Year's holiday is celebrated around the world. See what people in different countries do to welcome the new year. Splashing Around In the Netherlands and Canada, it is a tradition to take a dip on New Year's Day…
- Halloween History (History, September 7, 2023): Do you have a Halloween tradition? Maybe you carve faces on pumpkins. Or watch scary movies with your family. Halloween customs started in Europe. They go back about 2,000 years. The holiday was a harvest celebration. Fall is harvest…
- Summer Celebrations (United States, April 1, 2023): In the United States, four national holidays happen during the summer season. The first is in May. The last is in September. How does your family observe these holidays? Memorial Day This holiday honors U.S. military members who died…
- The History of Fireworks (History, April 1, 2023): Did you know that fireworks are thousands of years old? The earliest fireworks were made about 2,000 years ago. That was in China. People roasted bamboo. Bamboo stalks are hollow. Air inside exploded! Later, people filled the stalks with…
- How to Make a Piñata (World, December 14, 2022): A piñata can be made with things you have at home. Here are the steps. Draw a shape on cardboard. You need two sides. You also need borders. This is a cactus shape. Cut the shapes with scissors. Ask an…
https://logtide.dev/docs/architecture#ai:lucide:code
Architecture | LogTide Docs

Understanding LogTide's system architecture and design decisions.

System Overview

LogTide follows a modern microservices architecture with clear separation of concerns.

Data Hierarchy

User → Organizations (1:N) → Projects (1:N) → API Keys → Logs

- Organizations: top-level isolation for companies and teams. Each user can belong to multiple organizations.
- Projects: logical grouping within an organization (e.g., "production", "staging"), with complete data isolation.
- API Keys: project-scoped keys for secure log ingestion and query, prefixed with lp_.
- Logs: time-series data stored in TimescaleDB with automatic compression and retention policies.
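The hierarchy above can be sketched as TypeScript types. Only the nesting (organization → project → API key → logs) and the lp_ key prefix come from the documentation; every field name below is an illustrative assumption, not LogTide's actual schema.

```typescript
// Illustrative model of LogTide's data hierarchy:
// User → Organizations (1:N) → Projects (1:N) → API Keys → Logs.
// Field names are assumptions for this sketch; only the nesting and the
// "lp_" key prefix are taken from the documentation.
interface Organization {
  id: string;
  name: string;
  projects: Project[]; // complete data isolation per project
}

interface Project {
  id: string;
  name: string; // e.g. "production", "staging"
  apiKeys: ApiKey[];
}

interface ApiKey {
  key: string; // project-scoped, prefixed with "lp_"
  projectId: string;
}

interface LogEntry {
  projectId: string;
  timestamp: string;
  level: string;
  message: string;
}

// API keys are project-scoped and prefixed with "lp_"; a quick shape check:
function isValidKeyFormat(key: string): boolean {
  return key.startsWith("lp_") && key.length > 3;
}
```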
Technology Stack

- Backend: Node.js 20+ runtime, Fastify framework, TypeScript 5, Kysely ORM (type-safe SQL), BullMQ + Redis queue, Zod schema validation.
- Frontend: SvelteKit 5 (Runes), TypeScript 5, TailwindCSS styling, shadcn-svelte components, ECharts, Svelte stores for state.
- Database: PostgreSQL 16 with the TimescaleDB extension; hypertables for time-series data, automatic compression, configurable retention policies.
- Infrastructure: Redis 7 cache, Nginx proxy, Docker containers, Docker Compose orchestration, pnpm-workspaces monorepo.

Core Components

Backend Server (Fastify): high-performance API server handling log ingestion, query, and management endpoints, organized into feature-based modules:
- auth/: authentication and user management
- ingestion/: log ingestion with batch support
- query/: log search and filtering
- alerts/: alert rule management
- dashboard/: statistics and aggregations

Worker Process (BullMQ): background job processor for alert evaluation, notifications, and data retention. Runs independently of the main API server.

Frontend Dashboard (SvelteKit): modern, reactive UI with real-time log streaming, search, alert management, and organization administration. Server-side rendering for optimal performance.

TimescaleDB: PostgreSQL extension optimized for time-series data. Automatic partitioning, compression, and retention policies enable efficient long-term log storage.
Data Flow

Log Ingestion Flow
1. Client sends logs via POST /api/v1/ingest with an API key.
2. Backend validates the API key and extracts the project ID.
3. Logs are validated against a Zod schema.
4. Batch insert into the TimescaleDB hypertable.
5. The alert evaluator job is triggered (BullMQ).
6. Logs are broadcast to active SSE streams.

Alert Processing Flow
1. The worker evaluates all enabled alert rules (every minute).
2. For each rule, it queries logs matching the rule's conditions.
3. If the threshold is exceeded, it creates an alert instance.
4. Notifications (email/webhook) are sent via the configured channels.
5. The alert status and last-triggered timestamp are updated.

Log Retention

LogTide supports customizable log retention policies per organization, allowing administrators to control how long logs are stored before automatic deletion.

Retention Configuration
- Range: 1 to 365 days
- Default: 90 days
- Scope: Organization-level (applies to all projects within the organization)
- Cleanup: Daily at 2:00 AM (server time)

Admin Configuration
Only system administrators can modify retention settings. This is done through the Admin Panel under Organization Details:
1. Navigate to Admin Panel → Organizations.
2. Click on the organization you want to configure.
3. Find the "Log Retention Policy" card.
4. Enter the desired retention period (1-365 days).
5. Click "Save" to apply the changes.

User Visibility
Regular users can view their organization's retention policy in read-only mode:
1. Navigate to Organization Settings.
2. View the "Log Retention Policy" card showing the current retention period.
3. Contact your administrator if you need to change the retention policy.

Cleanup Process
The retention cleanup runs as a background worker job:
- Schedule: Daily at 2:00 AM server time
- Startup: Also runs 2 minutes after the worker starts
- Process: Deletes logs older than the retention period for each organization
- Logging: All cleanup operations are logged internally for audit purposes

Important Notes
- Log deletion is permanent and cannot be undone.
- Only the logs table is affected by retention policies.
- Other data (spans, alert history, etc.) follows separate retention rules.
- TimescaleDB's global 90-day policy may still apply as a safety net.

Key Design Decisions

Why TimescaleDB? Native time-series optimizations, automatic compression, built-in retention policies, and PostgreSQL compatibility make it ideal for log storage with high ingestion rates.

Why Fastify? Excellent performance, native TypeScript support, schema validation, a rich plugin ecosystem, and lower overhead than Express make it a good fit for high-throughput log ingestion.

Why SvelteKit 5? Modern reactivity with Runes, excellent performance, built-in SSR, file-based routing, and a minimal bundle size provide a strong developer and user experience.
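The retention cleanup's core decision, keep logs newer than the organization's retention period and delete the rest, can be sketched in a few lines. LogTide performs this as a SQL DELETE against the hypertable, not in JavaScript; the function and field names below are hypothetical, and a clock is passed in so the cutoff math is testable.

```javascript
// Sketch of the retention-cleanup selection logic: keep logs newer than
// `retentionDays`, expire the rest. Names are hypothetical; the real
// system does this with a SQL DELETE, not application code.
const DAY_MS = 24 * 60 * 60 * 1000;

function partitionByRetention(logs, retentionDays, now = Date.now()) {
  const cutoff = now - retentionDays * DAY_MS;
  const keep = [];
  const expired = [];
  for (const log of logs) {
    (log.timestamp >= cutoff ? keep : expired).push(log);
  }
  return { keep, expired };
}

const now = Date.parse("2026-01-13T00:00:00Z");
const logs = [
  { id: 1, timestamp: now - 10 * DAY_MS },  // 10 days old: within a 90-day policy
  { id: 2, timestamp: now - 100 * DAY_MS }, // 100 days old: past the policy
];
console.log(partitionByRetention(logs, 90, now).expired.length); // 1
```

With the default 90-day policy, only the 100-day-old entry falls past the cutoff; tightening the policy to, say, 7 days would expire both.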
https://www.frontendinterviewhandbook.com/pl/companies/uber-front-end-interview-questions
Uber Front End Interview Questions | The Official Front End Interview Handbook 2025

We are now part of GreatFrontEnd, a front end interview preparation platform created by ex-Meta and Google engineers. Get 20% off today!

Uber Front End Interview Questions

Latest version on GreatFrontEnd: find the latest version of this page in GreatFrontEnd's Uber Front End Interview Guide.

Not much is known about Uber's front end interview process.

JavaScript coding questions

- Implement a rate limiter attribute/decoration/annotation on top of an API endpoint. Cap it at N requests per minute with a rolling window. (Source A, Source B)

User interface coding questions

- Create a button that, when clicked, adds a progress bar to the page. The progress bar then fills up in a given amount of time (think 3 to 5 seconds).
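The rolling-window rate limiter above is a classic warm-up. One possible answer sketch in plain JavaScript (the class name and the injected clock are my own choices, not Uber's expected solution): keep the timestamps of allowed requests, drop those older than the window, and reject once the remainder reaches the limit.

```javascript
// One possible sketch for the rolling-window rate limiter question.
// A clock function is injected so the window logic is testable
// without real waiting; names here are illustrative only.
class RollingRateLimiter {
  constructor(limit, windowMs, now = Date.now) {
    this.limit = limit;       // max requests per window
    this.windowMs = windowMs; // window length, e.g. 60_000 for a minute
    this.now = now;
    this.hits = [];           // timestamps of allowed requests, oldest first
  }

  allow() {
    const t = this.now();
    // Drop hits that have slid out of the rolling window.
    while (this.hits.length > 0 && this.hits[0] <= t - this.windowMs) {
      this.hits.shift();
    }
    if (this.hits.length >= this.limit) return false;
    this.hits.push(t);
    return true;
  }
}

let fake = 0;
const limiter = new RollingRateLimiter(2, 60_000, () => fake);
console.log(limiter.allow(), limiter.allow(), limiter.allow()); // true true false
fake = 61_000; // a minute later, the earlier hits have rolled out
console.log(limiter.allow()); // true
```

In an interview, the follow-up is usually how to wrap this as a decorator around an endpoint handler and what to return on rejection (typically HTTP 429).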
If you get past the first part, you will be asked to throttle how many progress bars can run at once. For example, if the limit is 3 progress bars and the user clicks the button 4 times, the fourth progress bar only starts after the very first one finishes. (Source; practice question available, paid)

- Overlapping circles app. (Source)

Insider tips from the GreatFrontEnd community

These tips were shared by GreatFrontEnd users who have completed interviews with Uber.

3rd Jun 2025: I was asked a React-based question, although I had mainly prepared for DSA- and JS-type questions based on what was asked previously. What I've learned and observed about Uber's FE process is that they can ask pretty random questions. I guess recruiters are not in sync with the interviewers, or it is up to the interviewers what they ask. I watched some YouTube videos of people describing their Uber interview experience, and every one of them was asked DSA.

7th May 2025: Just did the BPS round at Uber and, gotta say, Uber has quality problems! I thought it would be DSA, as the recruiter had mentioned, but I guess you can't trust recruiters nowadays. The question was: "Create a utility in JS to send data in batches with a timeout. As soon as the batch size is reached, send the data right away and restart the timeout. If the timeout fires before the batch is filled, send the batch as it is and restart the timer." I also asked the interviewer what to prepare for the DSA round, and he said arrays, trees, graphs, and traversals, but he also mentioned that Uber is trying to move away from DSA for frontend roles and keep interviews frontend-focused.
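The batching utility in that tip might be sketched like this. This is one reading of the requirement (the timer starts when the first item is buffered and is cleared whenever a batch is sent); `Batcher` and its method names are hypothetical, not the interviewer's expected API.

```javascript
// Sketch of the "send data in batches with a timeout" utility from the
// tip above. Class and method names are hypothetical; this is one
// reading of the requirement, with the timer cleared on every send.
class Batcher {
  constructor(batchSize, timeoutMs, send) {
    this.batchSize = batchSize;
    this.timeoutMs = timeoutMs;
    this.send = send;     // callback that receives a full or partial batch
    this.buffer = [];
    this.timer = null;
  }

  add(item) {
    this.buffer.push(item);
    if (this.buffer.length >= this.batchSize) {
      this.flush(); // batch full: send right away
    } else if (this.timer === null) {
      // start the timeout clock when the first item is buffered
      this.timer = setTimeout(() => this.flush(), this.timeoutMs);
    }
  }

  flush() {
    if (this.timer !== null) {
      clearTimeout(this.timer); // avoid a stray timer after a size-triggered send
      this.timer = null;
    }
    if (this.buffer.length > 0) {
      this.send(this.buffer);
      this.buffer = [];
    }
  }
}

const sent = [];
const b = new Batcher(3, 1000, (batch) => sent.push(batch));
b.add(1); b.add(2); b.add(3); // hits the batch size: sends [1, 2, 3] immediately
```

The timeout path (a partial batch sent when the timer fires) behaves the same as calling `flush()` manually, which also makes the utility easy to test.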
5th May 2025: I know someone who did Uber's SDE II web interview. In short, prepare everything. There were 5 rounds:

- DSA: LeetCode-style and JS-based
- Web Fundamentals: HTML, CSS, JS, APIs, the internet
- Frontend System Design
- Culture Fit
- Hiring Manager

19th Jan 2025: For Uber FE SDE2, check the mapAsyncLimit question. Since you're interviewing for their Bengaluru office, prepare behavioral questions well and do the Google Calendar system design.

6th Dec 2024: I have my Uber onsite coming up. The first 2 rounds are LeetCode-style coding and system design. About me: 7 YOE.

For more insider tips, visit GreatFrontEnd!

Last updated on 30 Nov 2025 by Danielle Ford.

Copyright © 2025 Yangshun Tay and GreatFrontEnd
https://shanghai.dacheng.com/News_2/679.html
Dacheng Assists Hanxing Management in Acquiring Control of Qianyi Intelligent - Beijing Dacheng (Shanghai) Law Offices

Published: 2025-10-30

Recently, Anhui Hanxing Energy Management Co., Ltd. ("Hanxing Management") acquired, through a combination of an agreement-based share transfer for specific matters and a voting-rights entrustment, 100% of the voting rights in Anhui Qianyi Intelligent Equipment Co., Ltd. ("Qianyi Intelligent"), a company listed on the NEEQ (New Third Board), completing its acquisition of Qianyi Intelligent.

Hanxing Management operates mainly through its subsidiary Anhui Hanxing Energy Co., Ltd. Its businesses span full-industry-chain product R&D and system construction for electrochemical energy storage, R&D, production, and sales of charging piles, construction and operation of charging stations, construction of battery-swap stations for heavy trucks, R&D and sales of core energy-storage control systems, and integrated-energy big data services. The acquisition is expected to let Hanxing Management and Qianyi Intelligent share customer resources and integrate their businesses and technologies, achieving synergies.

A legal team from Dacheng's Shanghai office, comprising partners Wang Enshun (王恩顺) and Fang Maohong (方茂宏), partners (registration pending) Xu Qing (许青) and Yang Lizhong (杨礼中), and lawyers Liu Houyang (刘厚阳) and Li Minghui (李明慧), provided Hanxing Management with professional, efficient, end-to-end legal services for the acquisition, including legal due diligence, deal-structure analysis, drafting and negotiation of the transaction agreements, and issuance of legal opinions.
https://www.timeforkids.com/k1/clever-colors-k1/
TIME for Kids | Clever Colors | K-1

Animals / Nature

Clever Colors
December 19, 2025 | TIME for Kids

Often, an animal's coloring helps it survive. Some colors say, "Don't mess with me." Others help animals trap food. Here are some examples. Take a look.

Warning Sign
This frog (above) is bright orange. Its color sends a warning to other animals: "Do not eat me. I am poisonous."

Hidden Danger
This yellow spider blends in with a flower. Bees do not see the spider. They land on the flower. Then the spider eats them.

Open Wide
Check out this lizard's cool blue tongue. The color confuses other animals. It scares them away.
More from Animals:
- Time to Eat! (December 22, 2025)
- Who Eats What? (December 22, 2025)
- Animals Talk Too (December 22, 2025)
- How Animals Vote (December 22, 2025)

© 2026 TIME USA, LLC. All Rights Reserved.
https://www.timeforkids.com/k1/time-to-eat-k1/
TIME for Kids | Time to Eat! | K-1

Animals / Nature

Time to Eat!
December 22, 2025 | TIME for Kids

Animals have favorite foods. Rabbits eat plants. They eat grass and leaves. Foxes eat other animals. Some animals eat all kinds of things. Different animals have different diets. Learn about some of them here.

Meat Eaters
Lions are carnivores. That means they eat meat. Lions hunt other animals. This one is looking for its next meal.

Grass-Fed
Herbivores eat only plants. These bison are eating grass. Other herbivores eat leaves.
Fruits or nuts are also part of a herbivore's diet.

Lots of Options
Omnivores eat both plants and animals. This bear is an omnivore. It is eating a fish. Later, it might eat berries.

Did You Know?
Herbivores are animals that eat only plants. Some people also choose to eat only plants. We call them vegetarians. Vegetarians do not eat meat.

More from Animals:
- Who Eats What? (December 22, 2025)
- Animals Talk Too (December 22, 2025)
- How Animals Vote (December 22, 2025)
- Animal Defenses (December 19, 2025)
https://www.timeforkids.com/k1/topics/election-2024/
TIME for Kids | Election 2024 | Topic | K-1
https://www.charterworks.com/newsletter/
Charter Email Newsletter

Charter Briefing: the handbook for the future of work, delivered to your inbox.

Continue reading by subscribing to Charter's newsletter about the future of work. If you're already a Charter newsletter subscriber, this will only verify your email address.

Delivered 2 days a week. Sign up for the newsletter that provides news and analysis of workplace trends and tips for managing yourself and your team. Here's what you'll get:

- Vital issues of the week
- Interviews with top researchers and practitioners
- Columns on careers and the modern workplace
- Briefings on the best in business and management books

Charter Works Inc. © 2025
https://llvm.org/doxygen/classOutputBuffer.html#a21d8b544247c0f9bebfc3d3223e9ac0c
LLVM: OutputBuffer Class Reference
LLVM 22.0.0git

OutputBuffer Class Reference
#include "llvm/Demangle/Utility.h"

Public Member Functions

- OutputBuffer(char *StartBuf, size_t Size)
- OutputBuffer(char *StartBuf, size_t *SizePtr)
- OutputBuffer() = default
- OutputBuffer(const OutputBuffer &) = delete
- OutputBuffer &operator=(const OutputBuffer &) = delete
- virtual ~OutputBuffer() = default
- operator std::string_view() const
- virtual void printLeft(const Node &N): Called by the demangler when printing the demangle tree.
- virtual void printRight(const Node &N)
- virtual void notifyInsertion(size_t, size_t): Called when we write to this object anywhere other than the end.
- virtual void notifyDeletion(size_t, size_t): Called when we make the CurrentPosition of this object smaller.
- bool isInParensInTemplateArgs() const: Returns true if we're currently between a '(' and ')' when printing template args.
- bool isInsideTemplateArgs() const: Returns true if we're printing template args.
- void printOpen(char Open = '(')
- void printClose(char Close = ')')
- OutputBuffer &operator+=(std::string_view R)
- OutputBuffer &operator+=(char C)
- OutputBuffer &prepend(std::string_view R)
- OutputBuffer &operator<<(std::string_view R)
- OutputBuffer &operator<<(char C)
- OutputBuffer &operator<<(long long N)
- OutputBuffer &operator<<(unsigned long long N)
- OutputBuffer &operator<<(long N)
- OutputBuffer &operator<<(unsigned long N)
- OutputBuffer &operator<<(int N)
- OutputBuffer &operator<<(unsigned int N)
- void insert(size_t Pos, const char *S, size_t N)
- size_t getCurrentPosition() const
- void setCurrentPosition(size_t NewPos)
- char back() const
- bool empty() const
- char *getBuffer()
- char *getBufferEnd()
- size_t getBufferCapacity() const

Public Attributes

- unsigned CurrentPackIndex = std::numeric_limits<unsigned>::max(): If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing.
- unsigned CurrentPackMax = std::numeric_limits<unsigned>::max()
- struct { unsigned ParenDepth = 0; bool InsideTemplate = false; } TemplateTracker: ParenDepth is the depth of '(' and ')' inside the currently printed template arguments; InsideTemplate is true if we're currently printing a template argument.

Detailed Description

Definition at line 34 of file Utility.h.

Member Function Documentation (all members are defined in include/llvm/Demangle/Utility.h unless noted otherwise)

- OutputBuffer(char *StartBuf, size_t Size) [inline]: Definition at line 75.
- OutputBuffer(char *StartBuf, size_t *SizePtr) [inline]: Definition at line 77.
- OutputBuffer() [default]
- OutputBuffer(const OutputBuffer &) [delete]
- ~OutputBuffer() [virtual, default]
- back() [inline]: Definition at line 213. References DEMANGLE_ASSERT.
- empty() [inline]: Definition at line 218.
- getBuffer() [inline]: Definition at line 220. Referenced by llvm::dlangDemangle(), removeNullBytes(), and llvm::ThinLTOCodeGenerator::writeGeneratedObject().
- getBufferCapacity() [inline]: Definition at line 222.
- getBufferEnd() [inline]: Definition at line 221.
- getCurrentPosition() [inline]: Definition at line 207. Referenced by decodePunycode(), llvm::dlangDemangle(), and removeNullBytes().
- insert(size_t Pos, const char *S, size_t N) [inline]: Definition at line 194. References DEMANGLE_ASSERT and notifyInsertion(). Referenced by decodePunycode().
- isInParensInTemplateArgs() [inline]: Returns true if we're currently between a '(' and ')' when printing template args. Definition at line 118. References TemplateTracker.
- isInsideTemplateArgs() [inline]: Returns true if we're printing template args. Definition at line 123. References TemplateTracker. Referenced by printClose() and printOpen().
- notifyDeletion(size_t, size_t) [inline, virtual]: Called when we make the CurrentPosition of this object smaller. Definition at line 100. Referenced by setCurrentPosition().
- notifyInsertion(size_t, size_t) [inline, virtual]: Called when we write to this object anywhere other than the end. Definition at line 97. Referenced by insert() and prepend().
- operator std::string_view() [inline]: Definition at line 86.
- operator+=(char C) [inline]: Definition at line 145.
- operator+=(std::string_view R) [inline]: Definition at line 136.
- operator<<(char C) [inline]: Definition at line 168.
- operator<<(int N) [inline]: Definition at line 186.
- operator<<(long long N) [inline]: Definition at line 170.
- operator<<(long N) [inline]: Definition at line 178.
- operator<<(std::string_view R) [inline]: Definition at line 166.
- operator<<(unsigned int N) [inline]: Definition at line 190.
- operator<<(unsigned long long N) [inline]: Definition at line 174.
- operator<<(unsigned long N) [inline]: Definition at line 182.
- prepend(std::string_view R) [inline]: Definition at line 151. References notifyInsertion().
- printClose(char Close = ')') [inline]: Definition at line 130. References isInsideTemplateArgs() and TemplateTracker.
- printLeft(const Node &N) [inline, virtual]: Called by the demangler when printing the demangle tree. By default calls into Node::print{Left|Right}, but can be overridden by clients to track additional state when printing the demangled name. Definition at line 6202 of file ItaniumDemangle.h.
- printOpen(char Open = '(') [inline]: Definition at line 125. References isInsideTemplateArgs() and TemplateTracker.
- printRight(const Node &N) [inline, virtual]: Definition at line 6204 of file ItaniumDemangle.h.
- setCurrentPosition(size_t NewPos) [inline]: Definition at line 208. References notifyDeletion(). Referenced by llvm::dlangDemangle() and removeNullBytes().

Member Data Documentation

- CurrentPackIndex = std::numeric_limits<unsigned>::max(): If a ParameterPackExpansion (or similar type) is encountered, the offset into the pack that we're currently printing. Definition at line 104.
- CurrentPackMax = std::numeric_limits<unsigned>::max(): Definition at line 105.
- TemplateTracker.InsideTemplate = false: True if we're currently printing a template argument. Definition at line 113.
- TemplateTracker.ParenDepth = 0: The depth of '(' and ')' inside the currently printed template arguments. Definition at line 110.
- TemplateTracker (struct): Referenced by isInParensInTemplateArgs(), isInsideTemplateArgs(), printClose(), and printOpen().

The documentation for this class was generated from the following files:
- include/llvm/Demangle/Utility.h
- include/llvm/Demangle/ItaniumDemangle.h

Generated for LLVM by Doxygen 1.14.0.
https://www.rabbitmq.com/contact?utm_source=rmq_release-information_tableheader&utm_medium=rmq_website&utm_campaign=tanzu
RabbitMQ: One broker to queue them all | RabbitMQ

Join a live RabbitMQ Q&A with the core engineering team and AceMQ, our featured partner.

Support

Commercial: Tanzu RabbitMQ is developed by VMware Tanzu, which provides exclusive enterprise features and commercial support. This includes 24/7 experts with defined SLAs and longer-term support for the latest versions. (Learn More | Contact VMware Tanzu)

Consulting & Training: Engage with our partners, who have localized expertise and specialize in tailoring RabbitMQ solutions to the specific needs of your organization.

Community: RabbitMQ is an open-source project with an active community of fellow users and contributors. Community support is available on a best-effort basis through GitHub, Discord, the mailing list, and IRC.

VMware Tanzu RabbitMQ

Commercial RabbitMQ includes both 24/7 support and features not available in the open source version.

Around-the-clock, around-the-globe support

Support from Core Engineers: As the owner of RabbitMQ, VMware Tanzu employs the core engineering team. You get support from the people who build and maintain RabbitMQ, ensuring expert guidance and faster resolution of critical issues.

Severity-driven SLAs: Highest-severity issues receive attention within 30 minutes, 24/7/365.

Longer Support Timelines: Extended support lifecycle with critical patches and CVE fixes for multiple versions. While open source users must stay current, enterprise customers can upgrade on their own schedule.

Contribute to the Product Roadmap: As an enterprise user, you have direct access to the RabbitMQ product team and can contribute to the roadmap of RabbitMQ.

VMware vSphere: We provide commercial support for running RabbitMQ on a variety of platforms.
In addition, VMware Tanzu RabbitMQ provides an OVA battle-tested for enterprises running RabbitMQ on vSphere.

Release | Date of Release | End of Community Support | End of Commercial Support*
4.2     | Oct 2025        | 31 Jul 2026              | 30 Oct 2028
4.1     | Apr 2025        | 31 Jan 2026              | 30 Apr 2028
4.0     | Sep 2024        | 15 Apr 2025              | 30 Sep 2027
3.13    | Feb 2024        | 18 Sep 2024              | 31 Dec 2027
3.12    | Jun 2023        | 22 Feb 2024              | 30 Jun 2025
3.11    | Sep 2022        | 2 Jun 2023               | 30 Jun 2024
3.10    | May 2022        | 28 Sep 2022              | 31 Dec 2023

*End of Commercial Support dates are indicative. Official commercial support lifecycle information can be found on the Broadcom support portal.

Legend: Latest release series, fully supported. Older release series, unsupported.

Exclusive capabilities supporting your mission-critical apps

Multi-Data Center Disaster Recovery: Efficient schema and data replication to a second data center, supporting promotion of that second site in the event of a disaster.

Enterprise Security: Advanced security, including FIPS 140-2 compliance leveraging OpenSSL 3, forward-proxy support through OAuth 2.0, and scanning of RabbitMQ and its dependencies for CVEs.

Intra-Cluster Compression: In a heavily loaded system with high traffic between RabbitMQ nodes, compression can reduce the network load by up to 96%, depending on the nature of the workload.

AMQP 1.0 over WebSocket: Browser-based applications can communicate with RabbitMQ using AMQP 1.0, making it a practical choice for web-based business applications.

Audit Logging: VMware Tanzu RabbitMQ on Kubernetes supports audit logging. Relevant audit events, for example which user deleted a queue, are collected and logged separately.

Talk to our RabbitMQ experts: contact-tanzu-data.pdl@broadcom.com | Email Us

Partner Spotlight: Announcing AceMQ, our RabbitMQ MSP Partner

AceMQ is our featured authorized partner providing end-to-end RabbitMQ solutions, including white-glove commercial support and expert services for Tanzu RabbitMQ and RabbitMQ Community Edition.
To learn more about AceMQ's RabbitMQ support, managed services, and RabbitMQ consulting & training offerings, please get in touch below: Consulting | Commercial Support

Consulting & Training Partners

VMware Tanzu's trusted partners are here to help you in your local market and provide high-touch professional services. Americas | Asia Pacific | EMEA

Americas

AceMQ
Offices in: 🇺🇸 USA. Supporting organizations worldwide.

AceMQ is a premier global RabbitMQ partner offering comprehensive support, training, and consulting services for both RabbitMQ Community Edition and RabbitMQ for Tanzu. As Broadcom's authorized licensing partner for RabbitMQ, we enable organizations to secure annual or subscription-based licenses under a flexible model tailored for long-term scalability and compliance. With deep experience in deploying and managing RabbitMQ across cloud, on-premises, hybrid, and Kubernetes environments, AceMQ delivers unmatched expertise in architecture design, high-availability clustering, security hardening, and performance optimization. Our team of RabbitMQ specialists empowers businesses across industries, including finance, healthcare, defense, logistics, and SaaS, to build reliable, scalable messaging infrastructure aligned with mission-critical demands. Whether you're seeking deployment support, upgrade planning, incident resolution, or help transitioning to licensed RabbitMQ, AceMQ ensures your systems operate with maximum resilience and efficiency. From initial assessments to long-term partnerships, we offer flexible engagement models and global coverage.
Our services include:

- RabbitMQ Licensing: Official RabbitMQ for Tanzu and Community Edition licensing partner with tailored MSA terms
- Architecture & Performance Assessments: In-depth reviews for security, scalability, and reliability
- 24x7 Global Support: SLA-driven emergency response from senior RabbitMQ experts, with a guaranteed response time of up to 15 minutes for critical issues
- Migrations & Upgrades: Seamless transitions across versions and environments, including high-availability clustering
- Training & Mentorship: Hands-on coaching, workshops, and personalized enablement for technical teams
- Optimization & Scaling: Advanced tuning for throughput, message durability, and resource efficiency

Learn More | Get In Touch

Carahsoft
Offices in: 🇺🇸 USA. Serving the public sector and other highly regulated industries in Canada and the USA.

Carahsoft Technology Corp. is The Trusted Public Sector IT Solutions Provider, supporting US Federal, State, and Local Government agencies, education institutions, and healthcare providers, as well as the Canadian public sector. Carahsoft partners with thousands of vendors, resellers, system integrators, and MSPs to proactively market, sell, and deploy a comprehensive range of IT solutions. Carahsoft can leverage these partnerships to connect your organization with the right team for RabbitMQ projects, ensuring you receive the expertise and support needed for your project's success.

- Assists with procurement and contract management for RabbitMQ
- Connects customers to RabbitMQ solution providers specializing in public-sector use cases across the United States and Canada
- Provides support to assist the government, education, and healthcare sectors with RabbitMQ design, deployments, and implementations
- Helps organizations implement and scale RabbitMQ solutions effectively within required regulatory frameworks.
Learn More | Get In Touch

TeraSky
Offices in: 🇺🇸 USA, 🇮🇱 Israel, 🇱🇹 Lithuania, 🇬🇧 United Kingdom. Serving North America, South America, and Central Europe.

As a trusted VMware partner, TeraSky specializes in delivering end-to-end RabbitMQ solutions, both on-premises and in the cloud. From architecture design and deployment to comprehensive training and long-term support, we ensure enterprises seamlessly integrate RabbitMQ into mission-critical environments. Our goal is to drive scalability, resilience, and optimal performance. We combine deep technical expertise with flawless execution to solve complex technology challenges with precision, merging enterprise-grade infrastructure with cloud-native agility for maximum impact.

TeraSky RabbitMQ Services:

- Global Presence: Worldwide support across the Americas and EMEA
- Architecture & Assessments: In-depth design, security, and performance evaluations tailored to optimize RabbitMQ deployments
- Migrations & Upgrades: Smooth transitions from legacy systems to the latest RabbitMQ versions, ensuring minimal downtime
- 24/7 Managed Services: Ongoing monitoring, expert support, and SLA-backed issue resolution to maintain peak performance
- Performance Optimization: Tuning RabbitMQ for high throughput, low latency, and scalability across diverse environments
- Training & Mentorship: Hands-on workshops and tailored enablement for development and operations teams, ensuring teams are fully equipped to leverage RabbitMQ's full potential
- Cloud-Native Integration: Full-spectrum RabbitMQ solutions across on-premises, hybrid, and cloud environments, ensuring seamless operations regardless of deployment model

Learn More | Get In Touch

Asia Pacific

FiQir Holdings Sdn Bhd
Offices in: 🇲🇾 Malaysia, 🇳🇿 New Zealand. Serving Asia Pacific.

FiQir Holdings is a trusted technology solutions provider specializing in RabbitMQ consulting, integration, and support.
With deep expertise in distributed systems and cloud-native architectures, we help organizations build scalable, high-performance messaging infrastructures that drive operational efficiency and business growth. Our team of specialists ensures seamless deployment, optimization, and ongoing support to keep your RabbitMQ environment running reliably and securely.

- RabbitMQ Consulting & Architecture: Expert guidance on designing and optimizing messaging systems for high availability and resilience
- Deployment & Integration: Seamless RabbitMQ deployments tailored to your cloud or on-premises environment
- Performance Tuning & Optimization: Enhancing system throughput, reducing latency, and ensuring optimal scalability
- Training & Mentorship: Hands-on coaching to empower your teams with best practices in RabbitMQ management
- 24/7 Support & Maintenance: Proactive monitoring, troubleshooting, and expert assistance to minimize downtime

Learn More | Get In Touch

EMEA

coders51
Offices in: 🇮🇹 Italy

At coders51, we provide end-to-end RabbitMQ solutions. As members and founding sponsors of the Erlang Ecosystem Foundation, our deep expertise in Erlang gives us a thorough understanding of RabbitMQ's internals, a technology we consider essential for distributed systems. We support companies across the complete RabbitMQ lifecycle, from queue architecture design and resilient client implementation to cluster maintenance and incident management, ensuring optimal system reliability and performance.

- Design, develop, and scale software platforms
- From legacy software to cloud-native and microservices
- RabbitMQ consultancy, support, and maintenance
- Training

Learn More | Get In Touch

Databorn
Offices in: 🇦🇪 UAE. Serving Central Asia and Africa.

Databorn is a trusted Tanzu IT consulting partner offering data-driven solutions for businesses across the banking, insurance, telco, and retail sectors in the Middle East and Africa, Eastern Europe, and Central Asia.
Our expertise extends to RabbitMQ, helping enterprises build reliable, scalable, and high-performance messaging architectures.

- Expert consulting on RabbitMQ design, deployment, and best practices
- Seamless integration and support for optimized messaging workflows
- Performance optimization to enhance throughput, reliability, and scalability

Learn More | Get In Touch

evoila
Offices in: 🇩🇪 Germany, 🇮🇹 Italy, 🇱🇺 Luxembourg, 🇦🇹 Austria, 🇨🇭 Switzerland, 🇭🇷 Croatia, 🇸🇰 Slovakia, 🇵🇱 Poland, 🇧🇦 Bosnia and Herzegovina. Serving EMEA.

evoila is a leading consultancy specializing in RabbitMQ, both open source and Tanzu. With deep expertise in architecture, deployment, and operations, we help organizations design scalable, resilient, and high-performing messaging solutions. Our team provides hands-on support for complex integrations, performance tuning, and best practices for event-driven architectures. From initial setup to 24/7 managed services, we ensure your RabbitMQ environment runs efficiently and reliably.

Our Services:

- Architecture & Assessments
- 24/7 Managed Services
- Deployments & Operations
- Migrations & Upgrades
- Performance Optimization

Learn More | Get In Touch

TeraSky
Offices in: 🇮🇱 Israel, 🇱🇹 Lithuania, 🇬🇧 United Kingdom, 🇺🇸 USA. Serving Central Europe, North America, and South America. (See the full TeraSky description and service list under Americas above.)

Learn More | Get In Touch

VLDB
Offices in: 🇬🇧 United Kingdom. Serving organizations worldwide.

At VLDB Solutions, we specialise in comprehensive managed services for Tanzu RabbitMQ, ensuring your messaging infrastructure is optimised, secure, and scalable, wherever your business operates. From initial strategy and deployment to continuous monitoring and optimisation, we manage the complexities across multiple regions, allowing your organisation to focus on growth. With 24/7 support, deep RabbitMQ expertise, and tailored solutions, we empower businesses worldwide to maintain high-performance communication systems.

What Sets VLDB Solutions Apart?

- Global Coverage & Expertise: We support businesses across multiple geographies, providing expert RabbitMQ solutions worldwide
- Performance & Scalability Optimisation: Advanced configurations to boost throughput, reduce latency, and scale messaging workloads efficiently
- Seamless Integration & Customisation: Tailored RabbitMQ deployments to fit your business needs, ensuring smooth integration with databases, cloud services, and microservices architectures
- Proactive Monitoring & Incident Prevention: Real-time analytics and predictive monitoring prevent issues before they impact operations
- Enterprise-Level Support & Security: We ensure compliance, security, and resilience for businesses in highly regulated industries

Learn More | Get In Touch

Copyright © 2005-2026 Broadcom. All Rights Reserved. The term "Broadcom" refers to Broadcom Inc. and/or its subsidiaries.
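The community-support windows in the release table earlier on this page lend themselves to a quick programmatic check. Below is a sketch with the end-of-community-support dates transcribed from that table; the helper function is my own illustration, not a RabbitMQ-provided tool:

```python
# Check whether a RabbitMQ release series is still in community support on a
# given date, using the "End of Community Support" column from the table above.
from datetime import date

END_OF_COMMUNITY_SUPPORT = {
    "4.2": date(2026, 7, 31),
    "4.1": date(2026, 1, 31),
    "4.0": date(2025, 4, 15),
    "3.13": date(2024, 9, 18),
    "3.12": date(2024, 2, 22),
    "3.11": date(2023, 6, 2),
    "3.10": date(2022, 9, 28),
}

def community_supported(series: str, on: date) -> bool:
    """True if the series' community-support window still covers the date."""
    return on <= END_OF_COMMUNITY_SUPPORT[series]

today = date(2026, 1, 13)
assert community_supported("4.2", today) is True
assert community_supported("4.0", today) is False
```

Commercial support windows could be checked the same way, but per the table's footnote those dates are indicative, so the Broadcom support portal remains the authoritative source.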
2026-01-13T09:30:39
https://logtide.dev/docs/architecture#ai:lucide:zap
Architecture | LogTide Docs

Understanding LogTide's system architecture and design decisions.

System Overview

LogTide follows a modern microservices architecture with clear separation of concerns.

Data Hierarchy

User → Organizations (1:N) → Projects (1:N) → API Keys → Logs

- Organizations: Top-level isolation for companies/teams. Each user can belong to multiple organizations.
- Projects: Logical grouping within organizations (e.g., "production", "staging"). Complete data isolation.
- API Keys: Project-scoped keys for secure log ingestion and query. Prefixed with lp_.
- Logs: Time-series data stored in TimescaleDB with automatic compression and retention policies.
Technology Stack

Backend:
- Runtime: Node.js 20+
- Framework: Fastify
- Language: TypeScript 5
- ORM: Kysely (type-safe SQL)
- Queue: BullMQ + Redis
- Validation: Zod schemas

Frontend:
- Framework: SvelteKit 5 (Runes)
- Language: TypeScript 5
- Styling: TailwindCSS
- Components: shadcn-svelte
- Charts: ECharts
- State: Svelte stores

Database:
- RDBMS: PostgreSQL 16
- Extension: TimescaleDB
- Time-series: Hypertables
- Compression: Automatic
- Retention: Configurable policies

Infrastructure:
- Cache: Redis 7
- Proxy: Nginx
- Container: Docker
- Orchestration: Docker Compose
- Monorepo: pnpm workspaces

Core Components

Backend Server (Fastify): A high-performance API server handling log ingestion, query, and management endpoints. Modular architecture with feature-based modules:
- auth/ - Authentication and user management
- ingestion/ - Log ingestion with batch support
- query/ - Log search and filtering
- alerts/ - Alert rule management
- dashboard/ - Statistics and aggregations

Worker Process (BullMQ): A background job processor for alert evaluation, notifications, and data retention. Runs independently from the main API server.

Frontend Dashboard (SvelteKit): A modern, reactive UI with real-time log streaming, search, alerts management, and organization administration. Server-side rendering for optimal performance.

TimescaleDB: A PostgreSQL extension optimized for time-series data. Automatic partitioning, compression, and retention policies enable efficient long-term log storage.
Data Flow

Log Ingestion Flow

1. Client sends logs via POST /api/v1/ingest with an API key
2. Backend validates the API key and extracts the project ID
3. Logs are validated against a Zod schema
4. Batch insert into a TimescaleDB hypertable
5. An alert-evaluator job is triggered (BullMQ)
6. Logs are broadcast to active SSE streams

Alert Processing Flow

1. The worker evaluates all enabled alert rules (every minute)
2. For each rule, it queries logs matching the rule's conditions
3. If the threshold is exceeded, it creates an alert instance
4. Notifications (email/webhook) are sent via the configured channels
5. The alert status and last-triggered timestamp are updated

Log Retention

LogTide supports customizable log retention policies per organization, allowing administrators to control how long logs are stored before automatic deletion.

Retention Configuration

- Range: 1 to 365 days
- Default: 90 days
- Scope: Organization-level (applies to all projects within the organization)
- Cleanup: Daily at 2:00 AM (server time)

Admin Configuration

Only system administrators can modify retention settings. This is done through the Admin Panel under Organization Details:

1. Navigate to Admin Panel → Organizations
2. Click on the organization you want to configure
3. Find the "Log Retention Policy" card
4. Enter the desired retention period (1-365 days)
5. Click "Save" to apply the changes

User Visibility

Regular users can view their organization's retention policy in read-only mode:

1. Navigate to Organization Settings
2. View the "Log Retention Policy" card showing the current retention period
3. Contact your administrator if you need to change the retention policy

Cleanup Process

The retention cleanup runs as a background worker job:

- Schedule: Daily at 2:00 AM server time
- Startup: Also runs 2 minutes after the worker starts
- Process: Deletes logs older than the retention period for each organization
- Logging: All cleanup operations are logged internally for audit purposes

Important Notes

- Log deletion is permanent and cannot be undone
- Only the logs table is affected by retention policies
- Other data (spans, alert history, etc.) follows separate retention rules
- TimescaleDB's global 90-day policy may still apply as a safety net

Key Design Decisions

Why TimescaleDB? Native time-series optimizations, automatic compression, built-in retention policies, and PostgreSQL compatibility make it ideal for log storage with high ingestion rates.

Why Fastify? Excellent performance, native TypeScript support, schema validation, a rich plugin ecosystem, and lower overhead than Express make it well suited to high-throughput log ingestion.

Why SvelteKit 5? Modern reactivity with Runes, excellent performance, built-in SSR, file-based routing, and a minimal bundle size provide the best developer and user experience.

Privacy-first log management. Open source, GDPR compliant, built in Europe. © 2026 LogTide.
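The alert-processing flow described above (evaluate each enabled rule every minute, count the logs matching its conditions inside a time window, and fire when the threshold is exceeded) can be sketched in miniature. This is a hedged illustration with field names of my own, not LogTide's actual implementation:

```python
# Toy threshold evaluator mirroring the described flow: for each enabled rule,
# count logs matching its condition inside a time window and fire if the
# count exceeds the rule's threshold.
from datetime import datetime, timedelta, timezone

def evaluate_rule(rule: dict, logs: list[dict], now: datetime) -> bool:
    """Return True when the rule should create an alert instance."""
    if not rule.get("enabled", True):
        return False
    window_start = now - timedelta(minutes=rule["window_minutes"])
    matching = [
        log for log in logs
        if log["timestamp"] >= window_start and log["level"] == rule["level"]
    ]
    return len(matching) > rule["threshold"]

now = datetime(2026, 1, 13, 9, 30, tzinfo=timezone.utc)
rule = {"enabled": True, "level": "error", "window_minutes": 5, "threshold": 2}
logs = [
    {"timestamp": now - timedelta(minutes=m), "level": lvl}
    for m, lvl in [(1, "error"), (2, "error"), (3, "error"), (10, "error"), (1, "info")]
]
assert evaluate_rule(rule, logs, now) is True    # 3 errors inside the 5-minute window
assert evaluate_rule({**rule, "threshold": 5}, logs, now) is False
```

In the real system this evaluation runs in the BullMQ worker and the "query logs matching conditions" step is a TimescaleDB query rather than an in-memory filter.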
2026-01-13T09:30:39
https://aws.amazon.com/th/sqs/#aws-page-content-main
Fully managed message queuing – Amazon Simple Queue Service – Amazon Web Services

Products › Application Integration › Amazon Simple Queue Service

Get 1 million requests free with the AWS Free Tier.

Amazon Simple Queue Service: Fully managed message queuing for microservices, distributed systems, and serverless applications. Get started for free.

Why Amazon SQS?

Learn how First-In-First-Out (FIFO) ordering can ensure that the messages you send into the system are published in the correct order. Introducing Amazon SQS FIFO Queues (2:04).

Benefits of Amazon SQS

- Easy cost management: Eliminate overhead, with no upfront costs and no software to manage or infrastructure to maintain.
- Reliability at any scale: Deliver large volumes of data reliably at any throughput, without losing messages or requiring other services to be available.
- Security: Exchange sensitive data between applications securely, and manage keys centrally using AWS Key Management Service.
- Cost-effective scalability: Scale elastically and cost-effectively with usage, so there is no need to worry about capacity planning and pre-provisioning.

How it works

Amazon Simple Queue Service (Amazon SQS) lets you send, store, and receive messages between software components at any volume, without losing messages or requiring other services to be available.

Use cases

- Increase application reliability and scale: Amazon SQS provides a simple and reliable way for customers to decouple and connect components (microservices) using queues.
- Decouple microservices and process event-driven applications: Separate frontend systems from backend systems. In a banking application, for example, the customer gets an immediate acknowledgement while the bill payment is processed in the background.
- Ensure work is completed cost-effectively and on time: Place jobs in a single queue while workers in an auto-scaling group add and remove resources according to workload and latency requirements.
- Maintain message ordering with deduplication: Process messages at high volume while keeping strict message ordering, so you can eliminate duplicate messages.

Get started with Amazon SQS

1. Sign in to the Amazon SQS console
2. Create an Amazon SQS queue
3. Explore Amazon SQS features

© 2026, Amazon Web Services, Inc. or its affiliates. All rights reserved.
2026-01-13T09:30:39
https://www.timeforkids.com/k1/topics/environment/
TIME for Kids | Environment | Topic | K-1

Environment

Protect the Planet (Community, October 31, 2025): Start a compost bin. Pick up litter. These are ways to help the environment. Everyone has a part to play. Every action matters. How will you keep the Earth clean and beautiful? Read a few ideas below. What others can…

Wet Versus Dry (Science, March 7, 2025): Rainforests and deserts are two kinds of habitats. They are very different. The biggest difference? Water! Read about how the two compare. The Rainforest: Rainforests (above) have tall trees. Most rainforest animals live in them. Vines and moss grow…

Where in the World? (Science, March 7, 2025): The Amazon is the world's biggest rainforest. The Sahara is the biggest desert. Read about each of these places. Amazon Rainforest: This tropical rainforest (above) is in South America. It is covered with a variety of trees. The climate is…

A Rocky Habitat (Animals, February 28, 2025): Habitats are the natural homes of plants and animals. A habitat has the resources that living things need to survive. One type is a mountain habitat. Read about what makes up a habitat. Climate: Climate is the weather pattern…

Explore Habitats (Animals, February 28, 2025): Habitats can be hot or cold. They can be dry or wet. Each has its own living things. Read about four different habitats below. Arctic: This is the coldest habitat (above). Much of the water is ice. Animals have…

Plant a Tree (Science, March 1, 2024): Trees are good for the planet. They make clean air for us to breathe. They give us the shade we need on hot days. Planting trees makes the world a healthier place. They are not just good for people. Birds…

Tree Study (Science, February 2, 2024): Trees come back to life in the spring. They need food and water to grow. Learn how the parts of a tree keep a tree healthy. 1. Leaves: Leaves soak up sunlight. This helps the tree grow. 2. Branches…

From Scraps to Soil (Science, February 24, 2023): Compost is a natural fertilizer. It is made from food scraps. They decompose, or break down. Then they can be added to soil. What can you compost? You can compost plant material. This includes vegetable peels and yard trimmings. You…

In the Soil (Science, February 24, 2023): Domingo Morales is a composter. He started a group. It is called Compost Power. He runs composting sites. He has six in New York City. Collecting Scraps: There are many steps to composting. The first is gathering food waste (above).…

Seeds on the Move (Science, April 21, 2022): Most plants start as a seed. Plants spread their seeds. This helps them find new places to grow. Here are ways seeds travel. Blowing in the Wind: Some seeds are very light. Some are shaped like wings. Seeds like this…

© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://www.ovhcloud.com/es-es/professional-services/
Professional Services | OVHcloud España

Bare Metal and VPS

Dedicated servers:
- Rise servers: OVHcloud's most affordable bare-metal servers
- Advance servers: Versatile servers for small and medium-sized businesses (SMBs)
- Game servers: For video games and streaming platforms
- Storage servers: Servers for archiving, backup, or distributed storage
- Scale servers: Purpose-built for complex, highly resilient infrastructures
- High Grade servers: Our most powerful servers, optimized for critical workloads
- Bare Metal Wholesale: Advance, Scale, or High Grade servers in a full rack

Use cases: resilience and availability zones, grid computing, SAP HANA, virtualization and containerization, websites, business applications, hyperconverged infrastructure, software-defined storage, big data and analytics, archiving and backup, AI/machine learning/deep learning, confidential computing, databases, gaming, and high-performance computing.

Eco dedicated servers:
- Kimsufi servers: Affordable servers to launch your project
- So you Start servers: A dedicated-server range well suited to startups and SMBs
- Rise servers: Proven Intel and AMD platforms delivering high performance at competitive prices

VPS (virtual private server): Our new high-performance VPS at competitive prices, with instant scalability, hardened security, and availability in all our regions. Use cases include automated workflows with n8n, WordPress multisite platforms, game servers, test servers, and hosting Forex trading applications.

Managed Bare Metal: Managed Bare Metal Essentials powered by VMware®, your virtual infrastructure managed by OVHcloud.

Storage and backup:
- Enterprise File Storage: Fully managed file storage based on NetApp ONTAP Select
- NAS-HA: Centralized storage or backup space for your data
- Cloud Disk Array: A scalable block-storage solution based on Ceph
- Veeam Enterprise Plus: Protect your data in the way that best fits your needs
- HYCU for OVHcloud: Simplify the backup and migration of your workloads

Network:
- Additional IP: Assign and migrate dynamic IP addresses from one service to another
- OVHcloud Load Balancer: Spread application load across multiple backend servers
- Private network (vRack): Connect all your OVHcloud services in an isolated private network
- OVHcloud Link Aggregation: Redundant private networking with high bandwidth capacity
- OVHcloud Connect: Connect your data center to OVHcloud
- Public bandwidth: Increase the guaranteed default bandwidth
- CDN Infrastructure: A dedicated CDN to complement your OVHcloud solutions
- Bring Your Own IP (BYOIP): Import your IP addresses and ease your migration to OVHcloud

Network security:
- Anti-DDoS infrastructure: Protect your infrastructure against DDoS attacks
- Game DDoS Protection: Protect gaming and e-sports services with a dedicated anti-DDoS solution
- DNSSEC: Protect your data against cache poisoning
- SSL Gateway: The simplest way to secure your website, effortlessly

Identity, security, and operations:
- Identity and Access Management (IAM): Secure access management and improve productivity
- Logs Data Platform: A complete platform to collect, store, and visualize your logs
- Key Management Service (KMS): Protect your data across all your OVHcloud services from one place
- Secret Manager: Professional management of all your secrets in a single service
- Service Logs: Monitor the performance and security of your cloud environment

Bare Metal Pod: SecNumCloud-certified Bare Metal Pod, bare-metal performance in a sovereign environment suitable for SecNumCloud certification.

Domains, hosting, and email:
- Web domains: Search, transfer, and renew domains; look up Whois information; DNS servers
- Web hosting: All web packs, additional databases, and SSL and CDN options; host a WordPress site, build a site in one click, or create an online store
- Email and collaboration: All email products, Exchange products, and Office 365 licenses
- SMS: Send your messages with SMS Profesional

Public Cloud:
- Compute: General-purpose Virtual Machine Instances, high-performance Cloud GPU instances, and Metal Instances combining bare-metal power with cloud automation
- Savings Plans: Reduced prices with 1- to 36-month commitments on Public Cloud resources
- Local Zones: Deploy cloud services closer to your users
Descubrir todas nuestras soluciones Storage Block Storage Cree volúmenes de almacenamiento y utilícelos como discos adicionales Object Storage Disfrute del almacenamiento ilimitado bajo demanda, compatible con S3 Cold Archive Archivado muy económico para datos de acceso muy poco frecuentes. Local Zone Novedad Despliega servicios cloud más cerca de tus usuarios Documentación Consulte la documentación de la gama Storage Network Volver al menú Network Network Descubrir todas nuestras soluciones Network Private Network Despliegue redes privadas en el vRack de OVHcloud Load Balancer Gestione el tráfico variable distribuyéndolo entre varios recursos Floating IP Asigne y transfiera su IP pública de un servicio a otro Gateway Gestione un punto de conexión único entre su red privada e internet Documentación Consulte la documentación de la gama Network Containers &amp; Orchestration Volver al menú Containers &amp; Orchestration Containers &amp; Orchestration Descubrir todas nuestras soluciones Containers &amp; Orchestration Managed Kubernetes Service Orqueste sus aplicaciones de contenedores con un cluster Kubernetes certificado por la CNCF Load Balancer for Managed Kubernetes Service Gestione las fluctuaciones de su actividad equilibrando el tráfico en los diferentes recursos Managed Rancher Service Novedad Una gestión centralizada y simplificada de sus clústeres Kubernetes Managed Private Registry Gestione sus imágenes de contenedores y charts Helm en un registro privado seguro Documentación Consulte la documentación de la gama Containers &amp; Orchestration Hacia el PaaS Concéntrese en sus aplicaciones y mejore su competitividad Databases Volver al menú Databases Databases Descubrir todas nuestras soluciones Databases MongoDB Motor NoSQL orientado a documentos. 
Probar la versión gratuita MySQL La popular base de datos relacional adaptada a sus necesidades PostgreSQL El motor de bases de datos relacionales open source de referencia Valkey El almacenamiento en memoria inteligente Nuestra documentación Consulte la documentación de la gama Databases Hacia el PaaS Concéntrese en sus aplicaciones y mejore su competitividad Analytics Volver al menú Analytics Analytics Descubrir todas nuestras soluciones Analytics Kafka Solución de «queuing» para implementar arquitecturas «event-driven» Kafka Connect Extensión para simplificar la ingestión de sus fuentes hacia Apache Kafka Kafka MirrorMaker Replicación para garantizar la alta disponibilidad de sus clústeres Kafka Logs Data Platform Plataforma completa para recopilar, almacenar y visualizar sus logs OpenSearch Motor dedicado para indexación, búsqueda y análisis de datos ClickHouse Novedad El análisis ultra-rápido de tus datos al alcance de la mano Managed Dashboards Plataforma Grafana para crear dashboards Documentación Consulte la documentación de la gama Analytics Hacia el PaaS Concéntrese en sus aplicaciones y mejore su competitividad Data Platform Novedad Volver al menú Data Platform Data Platform Descubrir todas nuestras soluciones Data Platform Descubrir Data Platform de OVHcloud Novedad Ponga en marcha sus proyectos Data &amp; Analytics fácilmente y en tiempo récord Data Catalog Novedad Más de 50 conectores para todas sus fuentes de datos Lakehouse Manager Novedad Almacenamiento «data warehouse» y «data lake» unificado basado en Apache Iceberg Data Processing Engine Novedad Automatice la ejecución y la orquestación de sus workloads ETL/ELT Analytics Manager Novedad Cree sus dashboards y realice sus consultas con el motor Trino Application Services Novedad SDK y servicios «serverless» para desplegar sus API y aplicaciones de datos Control Center Novedad Monitorice las métricas, gestione los logs y las alertas de sus entornos AI &amp; Machine Learning Volver al menú AI &amp; 
Machine Learning AI &amp; Machine Learning Descubrir todas nuestras soluciones AI &amp; Machine Learning AI &amp; Quantum Notebooks Inicie sus notebooks Jupyter o VS Code en el cloud y elija entre nuestros frameworks AI o cuánticos nativos AI Training Entrene sus modelos de inteligencia artificial AI Deploy Despliegue modelos de machine learning y obtenga predicciones AI Endpoints Novedad Potencia tus aplicaciones con modelos de IA generativa gracias a las API estándar y seguras. Documentación Consulte la documentación de la gama AI &amp; Machine Learning Hacia el PaaS Concéntrese en sus aplicaciones y mejore su competitividad Quantum Computing Volver al menú Quantum Computing Quantum Computing Descubrir todos nuestros productos Quantum Computing Quantum Emulators Novedad Simula tus algoritmos cuánticos en notebooks listos para usar Quantum Processing Units (QPU) Novedad Accede a ordenadores cuánticos a través de nuestra plataforma Quantum ¿Qué es la computación cuántica? Descubra la nueva revolución en aceleración de la computación y cómo empezar a desarrollar hoy en los ordenadores cuánticos del mañana Identity, Security &amp; Operations Volver al menú Identity, Security &amp; Operations Identity, Security &amp; Operations Descubrir todas nuestras soluciones Identity, Security &amp; Operations Identity and Access Management (IAM) Proteja la gestión de accesos y mejore su productividad Logs Data Platform Plataforma completa para recopilar, almacenar y visualizar sus logs Key Management Service (KMS) Proteja los datos en todos sus servicios de OVHcloud desde un único lugar Secret Manager Gestión profesional de todos tus secretos en una mismo servicio Services Logs Monitorice el rendimiento y la seguridad en su entorno cloud Hosted Private Cloud Volver al menú Hosted Private Cloud VMware Volver al menú VMware VMware on OVHcloud Descubrir VMware on OVHcloud Public VCF as a Service Novedad Solución VMware compartida y administrada, optimizada con tecnología de VMware 
Cloud Foundation Managed VMware vSphere Solución VMware administrada para todas las empresas Managed VMware vSphere con certificación SecNumCloud Solución VMware en una zona de confianza (Trusted Zone) acreditada por la ANSSI Soluciones Comparar las soluciones VMware SAP on OVHcloud Extensión y migración de datacenter Soluciones de cloud híbrido y multicloud Soluciones de recuperación ante desastres Soluciones Zonas de Confianza europeas Ver todas las soluciones Nutanix Volver al menú Nutanix Hosted Private Cloud NC2 on OVHcloud Novedad Nutanix Cloud Clusters (NC2) on OVHcloud Nutanix on OVHcloud Nuestra plataforma hiperconvergente (HCI) Nutanix escalable y lista para usar Bare Metal Pod con certificación SecNumCloud Novedad Servidores certificados por Nutanix disponibles en Bare Metal Pod, con certificación SecNumCloud HYCU for OVHcloud Simplifique el backup y la migración de sus cargas de trabajo Nutanix Veeam Enterprise para todos sus backups Solución dedicada Veeam Backup &amp; Replication para todos sus backups Casos de uso Migración y gestión de sus datos Plan de recuperación ante desastres (DRP) Hiperconvergencia, ahorro y huella ecológica Disaster Recovery (DRaaS) SAP HANA Volver al menú SAP HANA SAP HANA SAP HANA on Private Cloud La solución que facilita sus despliegues SAP en un cloud soberano Soluciones SAP on OVHcloud Almacenamiento y backup Volver al menú Almacenamiento y backup Almacenamiento y backup Descubra todas las soluciones de almacenamiento Nuestro servicio Veeam para backup de VMware Solución Veeam Backup Managed para el backup de sus máquinas virtuales Opción Zerto para planes de recuperación ante desastres de VMware Solución de plan de recuperación ante desastres (DRP) multisitio para sus clústeres VMware Solución Veeam para Public VCF as a Service Solución dedicada Veeam Backup &amp; Replication para todos sus backups Veeam Enterprise - Licencias Solución dedicada Veeam Backup &amp; Replication para todos sus backups HYCU for OVHcloud 
Simplifique el backup y la migración de sus cargas de trabajo Nutanix Object Storage Disfrute del almacenamiento ilimitado bajo demanda, compatible con S3 Cold Archive Archive sus datos a largo plazo al mejor precio NetApp - Enterprise File Storage Almacenamiento de archivos totalmente administrado con tecnología NetApp ONTAP Select Casos de uso Backup y recuperación ante desastres Continuidad del negocio Recuperación ante desastres para Managed VMware vSphere Recuperación ante desastres para Nutanix on OVHcloud Red Volver al menú Red Red Additional IP Asigne y migre direcciones IP dinámicas de un servicio a otro Load Balancer de OVHcloud Reparta la carga de sus aplicaciones en múltiples servidores backend Red privada (vRack) Conecta todos tus servicios de OVHcloud en una red privada aislada OVHcloud Connect Conecte su datacenter con OVHcloud CDN Infrastructure Una CDN dedicada como complemento de sus soluciones de OVHcloud Bring Your Own IP (BYOIP) Importe sus direcciones IP y facilite su migración a OVHcloud Seguridad de red Volver al menú Seguridad de red Seguridad de red Infraestructura anti-DDoS Proteja su infraestructura frente a ataques DDoS DNSSEC Proteja sus datos frente al «cache poisoning» SSL Gateway La forma más sencilla de garantizar la seguridad en su sitio web, ¡sin esfuerzo! 
Identidad, seguridad y operaciones Volver al menú Identidad, seguridad y operaciones Identidad, seguridad y operaciones Identity and Access Management (IAM) Proteja la gestión de accesos y mejore su productividad Logs Data Platform Plataforma completa para recopilar, almacenar y visualizar sus logs Key Management Service (KMS) Proteja sus datos en todos sus servicios de OVHcloud desde un mismo lugar Secret Manager Gestión profesional de todos tus secretos en una mismo servicio Service Logs Monitorice el rendimiento y la seguridad de su entorno cloud Conformidad y certificaciones Volver al menú Conformidad y certificaciones Conformidad y certificaciones Lista completa de normas y reglamentaciones RGPD Conformidad con el Reglamento (UE) 2016/679 sobre protección de datos SecNumCloud Calificación del visado de seguridad de la ANSSI HDS y alojamiento de datos de salud Alojamiento de datos de salud en Europa HIPAA e HITECH Alojamiento de datos de salud en Estados Unidos PCI DSS Alojamiento de datos bancarios ISO/IEC 27001, 27017 y 27018 Gestión de la seguridad de la información ISO/IEC 27701 Gestión de la seguridad del tratamiento de datos personales ISO 50001 Controlar el desempeño energético SOC 1, 2 y 3 Certificación e informes AICPA SSAE 16/ISAE 3402 de tipo II EBA y ACPR Conformidad para los operadores de servicios financieros en Europa G-Cloud Prestación de servicios cloud para el sector público en Reino Unido Soluciones Volver al menú Soluciones Casos de uso Volver al menú Casos de uso Casos de uso Migración al cloud Cloud híbrido y multicloud Modernización de aplicaciones Aplicaciones nativas de cloud Inteligencia artificial Analítica de big data Gestión de datos Cargas de trabajo de alto rendimiento Almacenamiento de grandes conjuntos de datos Grid Computing Migración a PaaS Backup y recuperación ante desastres Continuidad del negocio Trusted Zone Entorno SecNumCloud Protección de red Seguridad cloud Extensión y migración de datacenter Transformación de 
datacenter Consolide su reputación de marca Garantice su estabilidad financiera Proteja su negocio frente a ciberamenazas Industria Volver al menú Industria Industria Sector público Solución de confianza para las instituciones y las administraciones públicas Salud Solución de confianza para el sector de la salud Servicios financieros Soluciones para los operadores de servicios financieros Sector industrial Solución cloud de confianza para la industria europea E-commerce Alojamiento web para el comercio electrónico Software y tecnologías de la información Soluciones SaaS y PaaS de proveedores de software partners de OVHcloud Gaming Soluciones cloud para empresas y actores del sector del videojuego Blockchain Soluciones de OVHcloud para impulsar sus proyectos de blockchain Tipo de negocio Volver al menú Tipo de negocio Tipo de negocio Empresas Soluciones para impulsar la transformación digital de las empresas Editores de software (SaaS/PaaS) Soluciones SaaS y PaaS de proveedores de software partners de OVHcloud Integradores de sistemas Soluciones para integradores, administradores de TI y empresas de consultoría Instituciones y administraciones públicas Soluciones de confianza para las instituciones y las administraciones públicas Startups Soluciones de apoyo para startups Scaleups Soluciones de apoyo para scaleups Tecnología Volver al menú Tecnología Tecnología Veeam Proteja sus datos con las soluciones Veeam de OVHcloud VMware by Broadcom Soluciones VMware by Broadcom y OVHcloud para todos sus proyectos Nutanix Acelere y simplifique su viaje hacia un multicloud híbrido con la solución Nutanix on OVHcloud HYCU La solución de backup elegida por los usuarios Nutanix SAP Soluciones SAP on OVHcloud para el alojamiento de entornos SAP en un cloud soberano NetApp Soluciones de almacenamiento NetApp con control de costes y alto rendimiento Nvidia Soluciones GPU de Nvidia para acelerar sus proyectos de innovación e IA MongoDB Soluciones MongoDB para simplificar la gestión 
de datos OpenStack Soluciones OpenStack integradas en OVHcloud para sus infraestructuras cloud Intel Soluciones de expertos para acelerar con Intel® Xeon® en la nube AMD Soluciones cloud de alta gama con procesadores AMD Hadoop Cloudera Solución Cloudera 100 % administrada con Claranet Ecosistema Volver al menú Ecosistema Ecosistema Descubra el ecosistema de partners de OVHcloud Partner Program Una iniciativa especialmente creada para nuestros partners revendedores, integradores, proveedores de servicios administrados y asesoramiento Open Trusted Cloud Un ecosistema de soluciones SaaS y PaaS acreditadas y alojadas en nuestro cloud abierto, reversible y fiable Startup Program Un programa de apoyo a startups y scaleups para acelerar su crecimiento OVHcloud Labs El espacio de innovación en el que podrá probar nuestra tecnología de vanguardia antes de su lanzamiento oficial al mercado. Eventos del ecosistema Conozca todos los eventos de nuestro ecosistema de partners: webinars, conferencias, etc. OVHcloud Ecosystem Awards Descubra cómo nuestros OVHcloud Ecosystem Awards premian cada año a los líderes de nuestro ecosistema por categoría Formación y certificación Desarrolle sus conocimientos con las diferentes formaciones y certificaciones disponibles para miembros del OVHcloud Partner Program. Acceso rápido Encontrar un partner Participar en el OVHcloud Partner Program Participar en el OVHcloud Startup Program Comparador de precios Portal de partners FAQ Partner Program ¿Quiénes somos? Volver al menú ¿Quiénes somos? ¿Quiénes somos? Quiénes somos Actualidad Infraestructura mundial Nuestros datacenters Nuestras Local Zones Backbone: Únete a la aventura Patent Pledge Legal Protección de datos - RGPD Soberanía de los datos Nuestros compromisos Innovación Cloud sostenible Cloud de confianza Impact Tracker Medioambiental Summit Open search bar Close search bar No hay resultados. 
Professional Services

OVHcloud Professional Services offer you technical advice and best practices for all your cloud transformation projects. Contact us.

Overview | Use cases | Technologies | Testimonials | Partners

All of OVHcloud's expertise at the service of your transformation. Our Professional Services are structured around three main areas of value-added services:

Technical support: OVHcloud Professional Services offer you technical advice and best practices for all your cloud transformation projects.

Services: OVHcloud Professional Services simplify your cloud modernization and migration projects, bringing significant added value to your business. We can also recommend trusted partners to achieve optimal results in cloud and on-premises environments.

Training: OVHcloud Professional Services offer personalized training sessions, as well as a range of courses available in our online catalog. Access the training catalog.

OVHcloud Professional Services: simplify your migration and grow your business!
Cloud offerings

Cloud migration: Get personalized advice on planning and implementing a migration, taking into account all your needs in terms of security, resilience, and disaster recovery.

Hybrid and multicloud: Design and build your hybrid and multicloud solutions with the help of our cloud solution architects, through a consulting POC.

Modern cloud infrastructures: Discover best practices for managing, optimizing, and protecting your cloud infrastructure.

Application modernization and development: Optimize the lifecycle of your developments with DevOps best practices, enabling faster application modernization, continuous integration, and efficient service in the cloud.

Data and AI: Leverage data-driven insights and AI technologies to accelerate your company's growth, improve decision-making, and drive innovation.

Key technology expertise with Professional Services

Wiremind recommends OVHcloud Professional Services: "Professional Services helped Wiremind acquire the knowledge needed to get the best storage performance from our dedicated servers." Cédric De St Martin, VP of Operations/SRE at Wiremind

Get in touch with us for professional advice: request a personalized analysis of your project from our experts. Contact OVHcloud.

Succeed with OVHcloud's expert partners

Specialized experts to cover all your needs: OVHcloud is a cloud resource provider with a strong network of partners to support you in all your company's projects.

An optimized experience: Our partners will offer you an optimal experience so you can get the most out of all your OVHcloud solutions.
Complementary skills: At OVHcloud, we bring all our knowledge and experience of technologies and business processes to complement our partners' service catalogs. Access the OVHcloud partner directory.

FAQ

What are Professional Services? Professional Services are teams of OVHcloud experts and trainers at the service of our customers and partners. It is a complete competence center that offers advice on cloud environments and draws on a wide variety of solutions, technologies, and services. Professional Services provide companies with tailor-made services for all their transformation projects and implement strategies that serve the growth and competitiveness of our customers and partners.

Are Professional Services available for all OVHcloud solutions? Yes, Professional Services work with all solutions available at OVHcloud, in both private and public cloud. Our experts also master the various technologies available on the IT and cloud market, so they can advise you on legacy or cloud-native environments using a modern methodology adapted to your particular case.

Do these services operate directly in your environments? Professional Services teams act as technical experts. They will guide you through every stage and provide tailored advice to ensure the success of your project. Depending on your requirements, they can also recommend partner companies that can provide advanced support or managed services for your infrastructure.

In which languages is this service available? Our Professional Services experts can provide advice and training in both English and French.
However, if you need assistance in other languages, you can turn to our partner network.
2026-01-13T09:30:39
https://www.timeforkids.com/g2/?age=child
TIME for Kids | Articles | G2

Articles K-1 2 3-4 5-6

United States No More Pennies December 22, 2025 Have you ever picked up a penny for good luck? The coins will soon be harder to find. That’s because the United States is no longer making them. The last batch of pennies was made on November 12. Pennies… Audio Spanish

Science Dino Skin December 22, 2025 Paleontologists made an exciting discovery. They found two duck-billed dinosaur “mummies” in Wyoming. The fossils are about 66 million years old. They’re called mummies because they’re well-preserved. You can clearly see their skin and spikes. The scientists are from the… Audio

Arts Super Surprise December 22, 2025 A Superman comic book recently sold for more than $9 million. It’s the most expensive comic ever sold. That’s according to Heritage Auctions, an auction house in Texas. Three brothers found the comic book. They were cleaning out their mother’s… Audio

Entertainment Chaotic Comeback December 22, 2025 Winter is here, and so is Greg Heffley—on a screen near you! The popular book character is back in a new film. It’s on Disney+. Greg is bringing the trouble in Diary of a Wimpy Kid: The Last Straw. … Audio

Entertainment Scaredy Sponge December 22, 2025 The SpongeBob Movie: Search for SquarePants is the newest film in the Bikini Bottom universe. It hit theaters last month. SpongeBob is growing up. He’s trying to face his fears. He wants to be seen as a brave “big… Audio

World Winter Athletes December 22, 2025 The 2026 Winter Olympics are right around the corner. The Games begin on February 6. The Paralympics start a month later. Paralympics events are for athletes with disabilities. The Olympics and Paralympics will take place across Northern Italy. Here are… Audio Spanish

World Gearing Up December 22, 2025 The Winter Olympic and Paralympic Games take place every four years. In 2026, they will be held in Italy. Here are some things to know. What are the Winter Olympics and Paralympics? The Winter Olympics is a competition… Audio

Science Winter Chills December 19, 2025 “Oh, the weather outside is frightful” Is that line familiar? It comes from a popular winter song. The song refers to snow. But it could be about another weather event: windchill. Windchill is a weather term you might hear this… Audio Spanish

Science Winter Weather Terms December 19, 2025 How do you learn about the weather? You can check the forecast online or hear it on TV or on the radio. Following the weather can help you stay prepared. But sometimes, it can be hard to understand. Here are… Audio

Technology Best Inventions of 2025 December 12, 2025 Every year, TIME magazine lists the year’s best inventions. Some of these are tech tools. Some use artificial intelligence (AI). Others are cool toys and games. TIME for Kids has chosen eight here. Which one tops your list? High-Tech Helper… Audio Spanish

Animals Bakso’s Birthday November 5, 2025 You’ve been to a birthday party for a person. How about for a tiger? Bakso is a Sumatran tiger. He lives at Disney’s Animal Kingdom. That’s in Florida. Bakso turned 1 on September 26. The park staff threw him a… Audio Spanish

Entertainment Love and Loss November 5, 2025 Charlotte’s Web is a classic novel by E.B. White. Now it’s also an animated miniseries. True to the book, there’s some sadness in the show. But there’s a hopeful ending. The three-part series is available on HBO Max. Luke… Audio

Entertainment Brighter Together November 5, 2025 In All the Stars in the Sky, Clay can’t wait to be the star of the week. He thinks that will make him the most important person in the school. Then his elisi, or grandma, teaches him that he’ll never… Audio

Arts Mail Art October 31, 2025 Getting a letter in the mail feels special. And every piece of mail needs a postage stamp. The United States Postal Service (USPS) makes stamps. Derry Noyes is an art director there. She designs stamps. Read about her job in… Audio Spanish
2026-01-13T09:30:39
https://www.facebook.com/login/?next=https%3A%2F%2Fl.facebook.com%2Fl.php%3Fu%3Dhttps%253A%252F%252Fwww.instagram.com%252F%26amp%253Bh%3DAT3SxLgZ0RtQGzHCtGi_noORrbdBDH9PbT6hTC7oHmU9wDU4gG8cA0AxLxgXvMdRoCWxk7HuG4PbvJ5OChYIx1pDQzOghxRSSPME0SMPvWyqhy1Me69LyPvY38BHQXkFUd4lpNvXjqBznR67
Facebook

Log in: Email address or phone number; Password; Forgotten account? Create new account.

Temporarily Blocked: It looks like you were using this feature too quickly. You have been temporarily blocked from using it.
2026-01-13T09:30:39
https://shanghai.dacheng.com/News_2/704.html
Beijing Dacheng (Shanghai) Law Offices | News

Dacheng Lawyers Assist Pudong Venture Capital in Completing an A+ Round Investment in Jiaoying Medical

Published: 2025-11-26

Recently, Jiaoying Medical Devices (Shanghai) Co., Ltd. ("Jiaoying Medical") announced the completion of its A+ round of financing, invested by Shanghai Pudong Venture Capital Co., Ltd. ("Pudong Venture Capital"). The team of lawyer Yang Chunbao represented Pudong Venture Capital throughout the transaction, providing full legal services including legal due diligence and the drafting and negotiation of the complete set of transaction documents, safeguarding the successful completion of the investment.

Founded in 1997, Pudong Venture Capital is a professional venture capital institution focused on technological innovation. It has long been committed to discovering and nurturing growth-stage companies with core technologies and innovation capabilities, promoting the commercialization of scientific achievements and industrial upgrading through market-oriented, professional investment operations. Its investment focus covers strategic emerging industries such as biomedicine, high-end manufacturing, and next-generation information technology, and it has invested in and supported a number of innovative companies that lead their respective fields. Its biomedical investments span innovative medical devices, biotechnology, digital health, and other segments, forming a relatively complete industrial investment ecosystem.

Founded in 2018, Jiaoying Medical is a medical device company focused on the research, development, and industrialization of biologically fixed (cementless) total-knee and unicompartmental knee products. Jiaoying Medical is committed to improving the long-term stability and patient fit of domestically produced joint prostheses and has independent R&D capabilities in biological fixation technology. This round of financing will strongly advance the clinical translation and market expansion of Jiaoying Medical's core technologies, providing solid support for the company's next stage of product registration and capacity building. Against the backdrop of accelerating domestic substitution, the financing also demonstrates the capital market's continued confidence in innovative high-end medical device companies.
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/people/
TIME for Kids | People | Topic | G5-6

World: TIME Person of the Year (January 2, 2026). TIME’s 2025 Person of the Year isn’t a person. It’s a group: the architects of artificial intelligence. This group includes AI innovators such as OpenAI CEO Sam Altman, Nvidia’s Jensen Huang, and Lisa Su, the CEO of Advanced Micro Devices.…

Arts: Misty’s Moment (November 6, 2025). On October 22, Misty Copeland took her final bow at the David H. Koch Theater, in New York City’s Lincoln Center. In the cheering audience that evening was the ballerina’s 3-year-old son, Jackson. “He got to see me dance for…

Environment: Jane Goodall: A Champion of Conservation (October 10, 2025). Jane Goodall has died at the age of 91. She was the world’s most famous primatologist (a person who studies apes). In a social-media post announcing her death on October 1, the Jane Goodall Institute called her “a tireless advocate…

World: 102 and Climbing (September 11, 2025). Climber Kokichi Akuzawa has become the oldest person to reach the summit of Japan’s Mount Fuji. The 102-year-old man set this Guinness World Record on August 5. Akuzawa made the trek with his daughter, his granddaughter and her husband, and…

United States: Super Speller (June 3, 2025). On May 29, a 13-year-old from Allen, Texas, became the 2025 Scripps National Spelling Bee champ. Faizan’s winning word was éclaircissement, a French word that means “clarification.” When Faizan spelled it correctly, he pumped his fists and fell dramatically to…

World: Passing of a Pope (May 1, 2025). Pope Francis passed away at age 88 on April 21. The Vatican said Francis died after a stroke. It was one day after his Easter Sunday appearance at Vatican City’s St. Peter’s Square, in Rome, Italy. He’d bestowed well-wishes on…

World: Celebrating a Long Life (August 29, 2024). Maria Branyas was the world’s oldest person when she died on August 19 at 117 years of age. Her family shared the news on her social-media account. “She has gone the way she wanted,” they wrote. “In her sleep, at…

Science: Dreaming of a Cure (August 15, 2024). When Heman Bekele was 6, he got a chemistry set for Christmas. He used it to mix up “potions.” Back then, only his parents paid attention. Now, at 15, Heman is used to a lot more people watching his work.…

Community: Showing Our Appreciation (May 6, 2024). Everyone knows teachers who go above and beyond to help their students learn and grow. This year, Teacher Appreciation Week starts May 6. To celebrate, we asked TFK Kid Reporters to make a sign for a teacher who inspires them.…

World: TIME Person of the Year (December 21, 2023). Taylor Swift has been named TIME magazine’s 2023 Person of the Year. You can read more about her in The Taylor Effect and By the Numbers. TIME also acknowledged influential people in other categories. Soccer star Lionel Messi is TIME’s…

© 2026 TIME USA, LLC. All Rights Reserved.
2026-01-13T09:30:39
https://github.com/signup?return_to=https%3A%2F%2Fgithub.com%2Fposativ%2Facrylamid&amp;source=login
Sign up for GitHub · GitHub
2026-01-13T09:30:39
https://www.charterworks.com/
Charter - Future of Work, AI, Management, Hybrid

Charter is your guide to the future of work. Join 100k Charter newsletter subscribers who get our reporting and analysis of workplace trends, management tips, and more.

Build a better future of work with Charter Pro: make the right decisions about flexibility, inclusion, AI, and other critical workplace questions with unlimited access to Charter’s research, reporting, and events.

Featured reporting and research:
- Three things leaders must do in 2026 to help manage uncertainty, by Brian Elliott
- The higher-order skills AI can’t touch—yet, by Jacob Clemente
- How to preserve trust amid layoffs, by Jena McGregor
- AI Download: Leading Learning in the AI Era, by Jacob Clemente, Kevin Delaney, and Michelle Peng
- Sponsored: How AI-generated “workslop” quietly drains productivity—and how smarter AI use stops it
- How AI will reshape workers’ identity and professional pride, by Kevin J. Delaney
- The buzz about Claude Code, and what it means, by Jacob Clemente
- ‘Do the hardest work first:’ Harvey’s Katie Burke on moving from CHRO to COO, by Jena McGregor
- Six expert AI and work predictions for 2026, by Jacob Clemente
- What we know about AI now we didn’t a year ago, by Jacob Clemente
- How to give feedback about racial bias, by Michelle Peng
- Steal this idea: Lenovo’s ‘inclusion champions’, by Michelle Peng
- How to frame job rejections to better attract female candidates, by Michelle Peng
- Book Briefing: ‘The Meritocracy Paradox’ by Emilio J. Castilla, by Michelle Peng
- Book Briefing: ‘The Equity Edge’ by Jenn Tardy, by Michelle Peng
- The hidden talent tax of policy changes, by Brian Elliott
- Steal this idea: How Liberty Mutual gives working parents the gift of time, by Michelle Peng
- Why we can’t stop talking about remote work nearly six years later, by Brian Elliott
- How mastering hybrid is an AI advantage, by Brian Elliott
- How to approach the “agree-to-disagree equilibrium” of flexible work, by Kevin J. Delaney
- Five ways the chief people officer role evolved in 2025, by Jena McGregor
- Book briefing: ‘90 Days to Level Up Your Teamwork’ by Amy Edmondson, by Michelle Peng
- Our top takeaways from 2025, by Kevin J. Delaney
- The critical role of authenticity in preserving diversity, equity, and inclusion, by Michelle Peng
- Steal this idea: Move the conversation to in-person, by Kevin J. Delaney
- Charter Workplace Summit 2024: Unlocking constructive communications in an anxious, polarized work climate, by Cari Romm Nazeer
- Charter Workplace Summit 2024: How to navigate politics and societal issues in the workplace in an election year, by Cari Romm Nazeer
- Charter Cortado: The 2024 election and the workplace, by Michelle Peng
- What people need for productive political conversations at work, by Michelle Peng

Latest:
- How AI will reshape workers’ identity and professional pride, by Kevin J. Delaney (co-founder and editor-in-chief): The three layers of work that will matter in an age of AI. New ways employers are preparing for aging workers. One company’s “friendcare” program is paying workers to spend time with friends.
- Book Briefing: ‘The Healing Power of Resilience’ by Tara Narula, by Michelle Peng (senior reporter, work & leadership): Four ways to build resilience as a new year begins.
- The buzz about Claude Code, and what it means, by Jacob Clemente (senior reporter, AI & work): What we found in our early testing of Claude Code.
- The hidden talent tax of policy changes, by Brian Elliott (executive-in-residence): Recent research suggests that state-level “trigger laws” could impact the size of your talent pool.
- ‘Do the hardest work first:’ Harvey’s Katie Burke on moving from CHRO to COO, by Jena McGregor (managing editor): Katie Burke on her expanded role at the legal AI company, navigating change management in AI adoption, and advice for people leaders looking to broaden their portfolios.
- Advisory Note: What to do when you can’t give bonuses, by Massella Dukuly (head of workplace strategy): How to motivate your team when budgets are tight and bonuses aren’t on the table.
- Can women keep pace in AI without embracing hustle culture?, by Erin Grau (co-founder and managing director): Inside the conversation reshaping women’s economic power in AI.

Upcoming events: Leading with AI (NYC, Feb. 10, 2026; SF, Feb. 24, 2026). For many organizations, the messy middle looks like greater adoption, experimentation, worker training, and more use cases, with little to show for it at the organization level. This year’s Leading with AI Summit focuses on how to break out of the messy middle and pull ahead.

Charter Works Inc. © 2025
2026-01-13T09:30:39
https://www.timeforkids.com/g56/topics/the-view/
TIME for Kids | The View | Topic | G5-6

Community: From Our Readers... (September 23, 2021). I read “Game Champs” (September 10). I feel bad for the international teams that didn’t get to compete [in the 2021 Little League World Series]. But I’m glad that more U.S. teams got to play. —Emily Fay, 8, Stamford, Connecticut…

Community: From Our Readers... (April 9, 2021). You can write to us to share your thoughts and opinions on TIME for Kids articles. We recently received this letter from a reader in Connecticut. With regard to your article “Exploring Mars” (March 26), it was interesting and…

Community: From Our Readers... (March 4, 2021). We love hearing from our readers! Here's what a few of you have to say this week. I liked reading about Amanda Gorman (February 19). I think she and I could make the world a better place. Even though the…

Community: From Our Readers... (February 11, 2021). Mrs. Hall’s students at Durham Academy, in North Carolina, shared their feedback and ideas with us. Read what a few of them had to say. I loved the TFK article about rock, paper, scissors (January 15). I use rock, paper,…

Community: From Our Readers... (January 8, 2021). Write to us at tfkeditors@time.com to share your thoughts, ideas, and opinions on any topic. You might just see your words in a future issue. Throughout January, we’ll be sharing notes from kids about their heroes. Read some below. From…

Community: From Our Readers... (December 28, 2020). This month, we’ll be sharing notes from kids about their heroes. But we’re happy to hear from you on any topic! Write to us at tfkeditors@time.com to share your thoughts and ideas. You might see your name in a future…

Arts: From Me to You (July 2, 2020). When I was 9 years old, I received my first handmade card. It was from a classmate. He had just moved to New Jersey from South Korea. He had a hard time adjusting to his new school, and I helped…

Community: News from Our Readers (June 18, 2020). What are your thoughts, feelings, and opinions about the global coronavirus emergency? We asked our readers to let us know at tfkeditors@time.com, with the permission of a parent, teacher, or guardian. Here, we'll share some responses. Updated June 18 From…

Community: News from Our Readers (May 27, 2020). Updated May 27 From…

Community: News from Our Readers: April 2020 (April 21, 2020). Updated April 21 From…
2026-01-13T09:30:39
https://pages.awscloud.com/training/?nc2=h_mo
AWS Support and Customer Service Contact Info | Amazon Web Services

Contact AWS: general support for sales, compliance, and subscribers.
- Sales: want to speak with an AWS sales specialist? Chat online or talk by phone, Monday through Friday, or submit a sales support request form.
- Compliance support: request support related to AWS compliance.
- Technical support: support for service-related technical issues (unavailable under the Basic Support Plan); sign in and submit a request.
- Account or billing support: assistance with account- and billing-related inquiries; sign in to request.
- Wrongful charges support: received a bill for AWS but don't have an AWS account? Learn more.
- Support plans: learn about AWS support plan options and Premium Support.
- AWS sign-in resources: help signing in to the AWS Management Console, troubleshooting credentials for the root user account, and resolving lost or unusable multi-factor authentication (MFA) devices; if you are still unable to log in, fill out the sign-in help form.
- Additional resources: AWS re:Post for self-service knowledge and community help, service limit increase requests, reporting abusive activity from Amazon Web Services, and Kindle or Amazon.com support.
2026-01-13T09:30:39
https://aws.amazon.com/blogs/networking-and-content-delivery/implementing-default-directory-indexes-in-amazon-s3-backed-amazon-cloudfront-origins-using-lambdaedge/
Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using Lambda@Edge | Networking & Content Delivery

by Ronnie Eichler on 18 OCT 2017 in Amazon CloudFront, Lambda@Edge, Networking & Content Delivery

Update: On May 3, 2021, we launched CloudFront Functions. With this launch, CloudFront Functions is now our recommended method for implementing default directory indexes for Amazon S3-backed Amazon CloudFront origins. Please see the blog post Implementing Default Directory Indexes in Amazon S3-backed Amazon CloudFront Origins Using CloudFront Functions for our current recommended best practice.

Introduction

With the recent launch of Lambda@Edge, it's now possible for you to provide even more robust functionality to your static websites. Amazon CloudFront is a content distribution network service. In this post, I show how you can use Lambda@Edge along with the CloudFront origin access identity (OAI) for Amazon S3 and still provide simple URLs (such as www.example.com/about/ instead of www.example.com/about/index.html).

Background

Amazon S3 is a great platform for hosting a static website. You don't need to worry about managing servers or underlying infrastructure; you just publish your static content to an S3 bucket. S3 provides a DNS name such as <bucket-name>.s3-website-<AWS-region>.amazonaws.com.
Use this name for your website by creating a CNAME record in your domain's DNS environment (or Amazon Route 53) as follows:

www.example.com -> <bucket-name>.s3-website-<AWS-region>.amazonaws.com

You can also put CloudFront in front of S3 to further scale the performance of your site and cache the content closer to your users. CloudFront can enable HTTPS-hosted sites, by either using a custom Secure Sockets Layer (SSL) certificate or a managed certificate from AWS Certificate Manager. In addition, CloudFront also offers integration with AWS WAF, a web application firewall. As you can see, it's possible to achieve some robust functionality by using S3, CloudFront, and other managed services and not have to worry about maintaining underlying infrastructure.

One of the key concerns that you might have when implementing any type of WAF or CDN is that you want to force your users to go through the CDN. If you implement CloudFront in front of S3, you can achieve this by using an OAI. However, in order to do this, you cannot use the HTTP endpoint that is exposed by S3's static website hosting feature. Instead, CloudFront must use the S3 REST endpoint to fetch content from your origin so that the request can be authenticated using the OAI. This presents some challenges in that the REST endpoint does not support redirection to a default index page. CloudFront does allow you to specify a default root object (index.html), but it only works on the root of the website (such as http://www.example.com > http://www.example.com/index.html). It does not work on any subdirectory (such as http://www.example.com/about/). If you were to attempt to request this URL through CloudFront, CloudFront would do an S3 GetObject API call against a key that does not exist. Of course, it is a bad user experience to expect users to always type index.html at the end of every URL (or even know that it should be there).
Until now, there has not been an easy way to provide these simpler URLs (equivalent to the DirectoryIndex directive in an Apache Web Server configuration) to users through CloudFront, at least not if you still want to be able to restrict access to the S3 origin using an OAI. However, with the release of Lambda@Edge, you can use a JavaScript function running on the CloudFront edge nodes to look for these patterns and request the appropriate object key from the S3 origin.

Solution

In this example, you use the compute power at the CloudFront edge to inspect the request as it's coming in from the client, then rewrite the request so that CloudFront requests a default index object (index.html in this case) for any request URI that ends in '/'. When a request is made against a web server, the client specifies the object to obtain in the request. You can use this URI and apply a regular expression to it so that these URIs get resolved to a default index object before CloudFront requests the object from the origin. Use the following code:

'use strict';
exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');

    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);

    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;

    // Return to CloudFront
    callback(null, request);
};

To get started, create an S3 bucket to be the origin for CloudFront. On the other screens, you can just accept the defaults for the purposes of this walkthrough.
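Before deploying anything, the rewrite logic in the function above can be exercised locally. The sketch below is a hypothetical plain-Node.js harness (not part of the post's walkthrough, and no AWS infrastructure involved): it re-creates the rewrite and invokes it with a mock event containing only the subset of the CloudFront event shape the function actually reads (Records[0].cf.request.uri).

```javascript
'use strict';

// Same rewrite as the Lambda@Edge function: a trailing '/' becomes '/index.html'
const rewriteHandler = (event, context, callback) => {
    const request = event.Records[0].cf.request;
    request.uri = request.uri.replace(/\/$/, '/index.html');
    callback(null, request);
};

// Minimal mock of the CloudFront origin-request event structure
const mockEvent = (uri) => ({ Records: [{ cf: { request: { uri: uri } } }] });

// A URI ending in '/' gets the default index appended
rewriteHandler(mockEvent('/subdirectory/'), null, (err, req) => {
    console.log(req.uri); // '/subdirectory/index.html'
});

// A URI that already names an object passes through unchanged
rewriteHandler(mockEvent('/subdirectory/index.html'), null, (err, req) => {
    console.log(req.uri); // '/subdirectory/index.html'
});
```

Because the callback fires synchronously here, this is enough to sanity-check the regular expression against the URL patterns tested with curl later in the post.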
If this were a production implementation, I would recommend enabling bucket logging and specifying an existing S3 bucket as the destination for access logs. These logs can be useful if you need to troubleshoot issues with your S3 access. Now, put some content into your S3 bucket. For this walkthrough, create two simple webpages to demonstrate the functionality: a page that resides at the website root, and another that is in a subdirectory.

<s3bucketname>/index.html

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Root home page</title>
</head>
<body>
<p>Hello, this page resides in the root directory.</p>
</body>
</html>

<s3bucketname>/subdirectory/index.html

<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Subdirectory home page</title>
</head>
<body>
<p>Hello, this page resides in the /subdirectory/ directory.</p>
</body>
</html>

When uploading the files into S3, you can accept the defaults. You add a bucket policy as part of the CloudFront distribution creation that allows CloudFront to access the S3 origin. You should now have an S3 bucket that looks like the following:

Root of bucket
Subdirectory in bucket

Next, create a CloudFront distribution that your users will use to access the content. Open the CloudFront console, and choose Create Distribution. For Select a delivery method for your content, under Web, choose Get Started. On the next screen, you set up the distribution. Below are the options to configure:

Origin Domain Name: Select the S3 bucket that you created earlier.
Restrict Bucket Access: Choose Yes.
Origin Access Identity: Create a new identity.
Grant Read Permissions on Bucket: Choose Yes, Update Bucket Policy.
- Object Caching: Choose Customize. (I am changing this behavior so that CloudFront does not cache objects, because caching could affect your ability to troubleshoot while implementing the Lambda code.)
- Minimum TTL: 0
- Maximum TTL: 0
- Default TTL: 0

You can accept all of the other defaults. Again, this is a proof-of-concept exercise. After you are comfortable that the CloudFront distribution is working properly with the origin and Lambda code, you can revisit the preceding values and make changes before implementing it in production.

CloudFront distributions can take several minutes to deploy, because the changes have to propagate out to all of the edge locations. After that's done, test the functionality of the S3-backed static website. Looking at the distribution, you can see that CloudFront assigns a domain name.

Try to access the website using a combination of various URLs:

http://<domainname>/: Works

```
› curl -v http://d3gt20ea1hllb.cloudfront.net/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET / HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "cb7e2634fe66c1fd395cf868087dd3b9"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: -D2FSRwzfcwyKZKFZr6DqYFkIf4t7HdGw2MkUF5sE6YFDxRJgi0R1g==
< Content-Length: 209
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:16 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Root home page</title>
</head>
<body>
<p>Hello, this page resides in the root directory.</p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact
```

This works because CloudFront is configured to request a default root object (index.html) from the origin.

http://<domainname>/subdirectory/: Doesn't work

```
› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.214...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.214) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< ETag: "d41d8cd98f00b204e9800998ecf8427e"
< x-amz-server-side-encryption: AES256
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: Iqf0Gy8hJLiW-9tOAdSFPkL7vCWBrgm3-1ly5tBeY_izU82ftipodA==
< Content-Length: 0
< Content-Type: application/x-directory
< Last-Modified: Wed, 19 Jul 2017 19:21:24 GMT
< Via: 1.1 6419ba8f3bd94b651d416054d9416f1e.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact
```

If you use a tool such as cURL to test this, you notice that CloudFront and S3 return a blank response. The reason is that the subdirectory does exist, but it does not resolve to an S3 object. Keep in mind that S3 is an object store, so there are no real directories. User interfaces such as the S3 console present a hierarchical view of a bucket, with folders based on the presence of forward slashes, but behind the scenes the bucket is just a collection of keys that represent stored objects.

http://<domainname>/subdirectory/index.html: Works

```
› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/index.html
*   Trying 54.192.192.130...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.130) port 80 (#0)
> GET /subdirectory/index.html HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 20:35:15 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: RefreshHit from cloudfront
< X-Amz-Cf-Id: bkh6opXdpw8pUomqG3Qr3UcjnZL8axxOH82Lh0OOcx48uJKc_Dc3Cg==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3f2788d309d30f41de96da6f931d4ede.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Subdirectory home page</title>
</head>
<body>
<p>Hello, this page resides in the /subdirectory/ directory.</p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact
```

This request works as expected because you are referencing the object directly.

Now, implement the Lambda@Edge function to return the default index.html page for any subdirectory. Looking at the example JavaScript code, here's where the magic happens:

```javascript
var newuri = olduri.replace(/\/$/, '\/index.html');
```

This JavaScript regular expression matches any '/' that occurs at the end of the URI and replaces it with '/index.html'. This is the equivalent of what S3 does on its own with static website hosting. However, as I mentioned earlier, you can't rely on that behavior if you want to use a bucket policy to restrict the bucket so that users must access it through CloudFront. That way, all requests to the S3 bucket must be authenticated using the S3 REST API.
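To sanity-check the pattern in isolation, here is a quick standalone run (the sample URIs are illustrative):

```javascript
// Standalone check of the rewrite used in the Lambda@Edge function: only
// URIs ending in '/' gain the default index; everything else passes through.
const rewrite = (uri) => uri.replace(/\/$/, '/index.html');

console.log(rewrite('/'));                         // '/index.html'
console.log(rewrite('/subdirectory/'));            // '/subdirectory/index.html'
console.log(rewrite('/subdirectory/index.html'));  // unchanged
console.log(rewrite('/styles/site.css'));          // unchanged
```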
Because of this, you implement a Lambda@Edge function that takes any client request ending in '/' and appends a default 'index.html' to the request before requesting the object from the origin.

In the Lambda console, choose Create function. On the next screen, skip the blueprint selection and choose Author from scratch, because you'll use the sample code provided.

Next, configure the trigger. Choosing the empty box shows a list of available triggers. Choose CloudFront and select your CloudFront distribution ID (created earlier). For this example, leave Cache Behavior as * and CloudFront Event as Origin Request. Select the Enable trigger and replicate box and choose Next.

Next, give the function a name and a description. Then, copy and paste the following code:

```javascript
'use strict';
exports.handler = (event, context, callback) => {
    // Extract the request from the CloudFront event that is sent to Lambda@Edge
    var request = event.Records[0].cf.request;

    // Extract the URI from the request
    var olduri = request.uri;

    // Match any '/' that occurs at the end of a URI. Replace it with a default index
    var newuri = olduri.replace(/\/$/, '\/index.html');

    // Log the URI as received by CloudFront and the new URI to be used to fetch from origin
    console.log("Old URI: " + olduri);
    console.log("New URI: " + newuri);

    // Replace the received URI with the URI that includes the index page
    request.uri = newuri;

    // Return to CloudFront
    callback(null, request);
};
```

Next, define a role that grants permissions to the Lambda function. For this example, choose Create new role from template, Basic Edge Lambda permissions.
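Before deploying, you can exercise the handler locally with a mock event. The mock below is hypothetical and trimmed to the fields the function actually reads; because the handler invokes its callback synchronously, the result is available immediately:

```javascript
'use strict';

// The same rewrite handler used in the walkthrough, defined locally for testing.
const handler = (event, context, callback) => {
    var request = event.Records[0].cf.request;
    request.uri = request.uri.replace(/\/$/, '/index.html');
    callback(null, request);
};

// Hypothetical mock of a CloudFront origin-request event, trimmed to the
// fields the handler actually reads.
const mockEvent = {
    Records: [{ cf: { request: { uri: '/subdirectory/' } } }]
};

// The callback fires synchronously here, so we can capture the result directly.
let rewrittenUri;
handler(mockEvent, null, (err, result) => { rewrittenUri = result.uri; });
console.log(rewrittenUri); // '/subdirectory/index.html'
```

This kind of local check catches mistakes in the event navigation (`event.Records[0].cf.request`) before you pay for a CloudFront deployment cycle.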
This creates a new IAM role for the Lambda function and grants the following permissions:

```json
{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "logs:CreateLogGroup",
                "logs:CreateLogStream",
                "logs:PutLogEvents"
            ],
            "Resource": [
                "arn:aws:logs:*:*:*"
            ]
        }
    ]
}
```

In a nutshell, these are the permissions the function needs to create the necessary CloudWatch log group and log stream, and to put the log events, so that the function can write logs when it executes.

After the function has been created, you can go back to the browser (or cURL) and re-run the test for the subdirectory request that failed previously:

```
› curl -v http://d3gt20ea1hllb.cloudfront.net/subdirectory/
*   Trying 54.192.192.202...
* TCP_NODELAY set
* Connected to d3gt20ea1hllb.cloudfront.net (54.192.192.202) port 80 (#0)
> GET /subdirectory/ HTTP/1.1
> Host: d3gt20ea1hllb.cloudfront.net
> User-Agent: curl/7.51.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Date: Thu, 20 Jul 2017 21:18:44 GMT
< ETag: "ddf87c487acf7cef9d50418f0f8f8dae"
< Accept-Ranges: bytes
< Server: AmazonS3
< X-Cache: Miss from cloudfront
< X-Amz-Cf-Id: rwFN7yHE70bT9xckBpceTsAPcmaadqWB9omPBv2P6WkIfQqdjTk_4w==
< Content-Length: 227
< Content-Type: text/html
< Last-Modified: Wed, 19 Jul 2017 19:21:45 GMT
< Via: 1.1 3572de112011f1b625bb77410b0c5cca.cloudfront.net (CloudFront), 1.1 iad6-proxy-3.amazon.com:80 (Cisco-WSA/9.1.2-010)
< Connection: keep-alive
<
<!doctype html>
<html>
<head>
<meta charset="utf-8">
<title>Subdirectory home page</title>
</head>
<body>
<p>Hello, this page resides in the /subdirectory/ directory.</p>
</body>
</html>
* Curl_http_done: called premature == 0
* Connection #0 to host d3gt20ea1hllb.cloudfront.net left intact
```

You have now configured a way for CloudFront to return a default index page for subdirectories in S3!
Summary

In this post, you used Lambda@Edge to serve a default root object on subdirectory URLs while still using CloudFront with an S3 origin access identity. To find out more about this use case, see Lambda@Edge integration with CloudFront in our documentation. If you have questions or suggestions, feel free to comment below. For troubleshooting or implementation help, check out the Lambda forum.

TAGS: Amazon CloudFront, CDN, Content Delivery Network, Lambda@Edge, Networking & Content Delivery
480px){[data-eb-6a8f3296] .rgft_98b54368.rgft_007aef8b{font-size:calc(.875rem * var(--font-size-multiplier, 1.6));line-height:1.429;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_98b54368.rgft_007aef8b{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_98b54368.rgft_007aef8b{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_98b54368.rgft_007aef8b{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_98b54368.rgft_007aef8b{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_98b54368.rgft_007aef8b{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_98b54368.rgft_ff19c5f9{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400;font-family:Amazon Ember Display,Amazon Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_98b54368.rgft_ff19c5f9{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_98b54368.rgft_ff19c5f9{font-size:calc(.75rem * var(--font-size-multiplier, 1.6));line-height:1.333;font-weight:400}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_98b54368.rgft_ff19c5f9{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_98b54368.rgft_ff19c5f9{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansKR,Malgun 
Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_98b54368.rgft_ff19c5f9{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_98b54368 ul{list-style-type:disc;margin-top:2rem}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee{display:inline;position:relative;cursor:pointer;text-decoration:none!important;color:var(--rg-color-link-default, #006CE0);background:linear-gradient(to right,currentcolor,currentcolor);background-size:100% .1em;background-position:0 100%;background-repeat:no-repeat}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:focus-visible{color:var(--rg-color-link-focus, #006CE0)}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:hover{color:var(--rg-color-link-hover, #003B8F);animation:rgft_9beb7cc5 .3s cubic-bezier(0,0,.2,1)}[data-eb-6a8f3296] .rgft_98b54368.rgft_2a7f98ee:visited{color:var(--rg-color-link-visited, #6842FF)}@keyframes rgft_9beb7cc5{0%{background-size:0 .1em}to{background-size:100% .1em}}[data-eb-6a8f3296] .rgft_d835af5c{color:var(--rg-color-text-title, #161D26)}[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(4.5rem * var(--font-size-multiplier, 1.6));line-height:1.111;font-weight:500;font-family:Amazon Ember Display,Amazon Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(3.75rem * var(--font-size-multiplier, 1.6));line-height:1.133;font-weight:500}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_3e9243e1{font-size:calc(3rem * var(--font-size-multiplier, 1.6));line-height:1.167;font-weight:500}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d835af5c.rgft_3e9243e1{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] 
.rgft_d835af5c.rgft_3e9243e1{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d835af5c.rgft_3e9243e1{font-family:NotoSansTC,Helvetica,Arial,Microsoft Yahei,\5fae\8f6f\96c5\9ed1,STXihei,\534e\6587\7ec6\9ed1,sans-serif}[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(3.75rem * var(--font-size-multiplier, 1.6));line-height:1.133;font-weight:500;font-family:Amazon Ember Display,Amazon Ember,Helvetica Neue,Helvetica,Arial,sans-serif}@media (min-width: 481px) and (max-width: 768px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(3rem * var(--font-size-multiplier, 1.6));line-height:1.167;font-weight:500}}@media (max-width: 480px){[data-eb-6a8f3296] .rgft_d835af5c.rgft_54816d41{font-size:calc(2.5rem * var(--font-size-multiplier, 1.6));line-height:1.2;font-weight:500}}[data-eb-6a8f3296] [data-rg-lang=ar] .rgft_d835af5c.rgft_54816d41{font-family:AmazonEmberArabic,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ja] .rgft_d835af5c.rgft_54816d41{font-family:ShinGo,\30d2\30e9\30ae\30ce\89d2\30b4 Pro W3,Hiragino Kaku Gothic Pro,Osaka,\30e1\30a4\30ea\30aa,Meiryo,\ff2d\ff33 \ff30\30b4\30b7\30c3\30af,MS PGothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=ko] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansKR,Malgun Gothic,sans-serif}[data-eb-6a8f3296] [data-rg-lang=th] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansThai,Helvetica,Arial,sans-serif}[data-eb-6a8f3296] [data-rg-lang=zh] .rgft_d835af5c.rgft_54816d41{font-family:NotoSansTC,Helvetica,Arial,Microsof
2026-01-13T09:30:39
https://www.charterworks.com/privacy-policy/#/portal/signin
Privacy Policy

Last Updated: December 14, 2021

1. WHAT CHARTER COLLECTS
2. HOW CHARTER USES YOUR INFORMATION
3. HOW CHARTER SHARES INFORMATION
4. THIRD PARTY COOKIES AND TRACKING TECHNOLOGIES
5. YOUR CHOICES
6. FOR READERS OUTSIDE THE US
7. OTHER INFORMATION

Charter Works, Inc. and its affiliates (“Charter Works,” “Charter,” “we,” “us,” or “our”) operate CharterWorks.com, deliver newsletters, host live events, and provide other products and services (“Services”). This Privacy Policy describes the kinds of information Charter may gather when you use the Services, how Charter uses that information, when Charter might disclose that information, and how you can manage it. By using the Services, you accept the practices described in our Privacy Policy, including our use of cookies and similar online tools. If you do not agree to the terms of this Privacy Policy, please do not use the Services. We reserve the right to modify or amend the terms of our privacy policy from time to time without notice. Your continued use of our Services following the posting of changes to these terms will mean you accept those changes. Please note: our Services are under constant development. This Privacy Policy may therefore be modified and updated on an ongoing basis. Please check back to this page regularly.
Our Privacy Policy does not govern or apply to information collected or used by Charter Works through other means or to websites maintained by other companies or organizations to which we may link or who may link to us. Please send any questions about privacy issues to privacy@charterworks.com. WHAT CHARTER COLLECTS The information we collect and the purposes for which we use it will depend on how you interact with Charter Works and the Services. Information You Provide to Us When you use the Services, you may provide us the following: Registration, Subscription or Contact Information such as e-mail address, name, phone number, shipping address, and billing information Demographic and interest information such as your age, date of birth, gender, interests, lifestyle information, and hobbies Financial and transactional information such as credit or debit card number, verification number, and expiration date, to process payments and information about your transactions and purchases with us. Please note: payment information goes to our payment processors and is not collected, processed, or stored by Charter Works. 
Customer service information such as questions and other messages you address to us directly through online forms, by email, over the phone, or by post, and summaries or voice recordings of your interactions with customer care Employment or Education Information such as education history, employment experience, business contact information User-generated content such as comments on articles, photos, videos, audio, any information you submit in public forums or message boards, reviews and feedback or testimonials you provide about our Services Marketing information such as information related to your preferences for receiving communications, subscribing to our publications, newsletters, and other content Survey, market research or sweepstakes information such as information gathered when you complete a survey, participate in market research, or enter a contest, sweepstakes, or game relating to the Services Social media information: if you link your account or access the Services through a third-party connection or log-in, we may have access to any information you provide to that social network, depending on your privacy settings Other information: any other information you choose to directly provide to us in connection with your use of the Services Information We Automatically Collect We may collect information about your use of the Services, including: Device information and identifiers: such as computer or mobile device model, IP address, other unique device identifiers, operating system version, browser type, language, and settings Usage information: such as information about the Services you use, the time, date, and duration of your use of the Services, newsletter open-rate, referral information, your interaction with content offered through the Services, search terms used, referring website, and software crash reports. We also collect information stored using cookies, mobile ad identifiers, and similar technologies set on your device.
Our servers may automatically keep an activity log of your use of the Services. We may collect such usage information at the individual or aggregate level. Please see Section Four of this Privacy Policy (THIRD PARTY COOKIES AND TRACKING TECHNOLOGIES) for more information about how we collect and use this information. Location information: such as city, state, and ZIP code associated with your IP address, and precise geolocation information from your devices, with your permission in accordance with your mobile device settings. Information We Receive from Third Parties We may receive information about you from third parties and combine it with information we receive from or about you, including: Information from social media networks When you interact with Charter Works on a social media service or log in using social media credentials, depending on your social media settings, we may have access to your information from that social network, such as your social media account ID and/or user name associated with that social media service, your profile picture, email address, friends list or information about the people and groups you are connected to and how you interact with them, and any information you have made public in connection with that social media service. Information from third party email and subscription providers and/or processors When you purchase one of our subscription products through a third-party service or store (including the Apple Store), we receive personal information from the third parties that help us process emails and subscriptions. Please note: Charter Works does not receive (or collect, process, or store) any payment card industry (“PCI”) data.
Information from publicly or commercially available sources We may collect information from third parties such as consumer data resellers that make available information, collected both online and offline, such as demographic information, additional contact information, group affiliations, occupational information, and educational background, which we may combine with other information we receive from or about you. Other Information We Collect Charter also may collect other information about you, your device, or your use of the Services in ways that we describe to you at the point of collection, or otherwise, with your consent. You may choose not to provide us with certain types of information, but doing so may affect your experience in using the Services. HOW CHARTER USES YOUR INFORMATION Charter uses your information to personalize and improve your experience using the Services in the ways described below, or in other ways at your direction or with your consent. To Provide the Services For example, to: Process and fulfill your transactions, including subscriptions or memberships, and enable you to log in to the Services, Contact you and send you communications about the Services (including communications you request like newsletters) and share invitations to events or offers about Charter products or our third-party partners’ products Respond to you and your comments, inquiries, or requests; and transmit legal notices, policy updates, and other important information about the Services, Provide features of the Services (such as social sharing and comments) and to post content you submit, Provide customer support, administer loyalty programs, contests, promotions, or surveys, or Identify and repair errors that impair the function of the Services and to detect security incidents Protect the rights of Charter and others, detect, investigate, and prevent activities that may violate our policies or may be fraudulent or illegal; to protect, enforce, or defend the legal
rights, privacy, safety, or property of Charter Works, its employees, agents, or users; or as required by law. Please note: all editorial and commercial email messages include instructions for unsubscribing from such future communications. To Deliver Personalized Content and Recommendations For example, to: Customize features of the Services, Deliver relevant content and to provide you with an enhanced experience based on your activities and interests Send you personalized newsletters , surveys, and information about products, services and promotions offered by us, our partners, and other organizations with which we work Facilitate the delivery of targeted advertising (including interest-based advertising), promotions, and offers, on behalf of ourselves and our third-party advertisers, both on our websites and elsewhere Customize content that our third-party partners deliver on the Services (e.g., personalized third-party advertising) based on your activity on the Services Create and update inferences and profiles about you that can be used for advertising and marketing on the Services, third party services and platforms, and mobile apps, or for analytics, Measure and report on the delivery of advertisements Please note: all editorial and commercial email messages include instructions for unsubscribing from such future communications. To enable us to provide these Services, we may use the information we collect to identify you across sessions, browsers, and/or devices. Please see Section Four of this Privacy Policy (THIRD PARTY COOKIES AND TRACKING TECHNOLOGIES ) for further information about our and third parties’ use of cookies and other tracking technologies and your choices related to targeted advertising. To Learn About Our Users and Improve Services We conduct analysis and research on our users’ demographics, interests, and behavior and perform statistical analysis of our users, their use of the Services, and their purchasing patterns. 
We do this to optimize and improve the Services, our products, and our operations. To Combine Information for All the Purposes Described Above We may use the information gathered from one aspect of the Services to enhance other aspects of Services and we may combine information gathered from multiple aspects of the Services into a single user record. We also may use or combine information that we collect offline or that we collect or receive from third-party sources for many reasons, including to enhance, expand, and check the accuracy of our records. Data collected from a particular computer, browser or device may be used with another computer, browser or device that is linked to the computer, browser, or device on which such data was collected. HOW CHARTER SHARES INFORMATION Charter’s information-sharing practices vary based on the type of information and the type of recipient. Aggregate Or De-Identified Information We may use and share deidentified information with third parties in any manner for any purposes. Subscription Providers If your subscription is provided in whole or in part by your employer or other third party, we may share with them information about your access and use of your subscription. If you have a subscription associated with a professor or school, we may notify your professor or school to confirm your subscription, access, or use. When providing information to a subscription provider, we may reveal limited amounts of your personal information such as your name or email address. Service Providers and Professional Advisors We share information with third party agents and vendors who perform functions on our behalf, including, but not limited to, web hosting, content syndication, content management, social media integration, marketing, analytics, product development, email or text message transmission, billing or payment processing, order fulfillment, auditing, and customer service. 
We also may disclose your personal information to professional advisors, such as lawyers, bankers, auditors, and insurers, where necessary in the course of the professional services that they render to us. Service providers and professional advisors with whom we share information will be obligated to maintain the confidentiality (as appropriate for the services) and security of personal information Charter transmits to them. Third Party Content and/or Advertising Partners Third parties that provide content, advertising, or functionality to the Services may collect or receive information about you and/or your use of the Services, using cookies, beacons, and similar technologies. Third party content and/or advertising partners may use such information to provide you with advertising that is based on your interests and to measure and analyze ad performance on our Services or other websites or platforms, and combine it with information collected across different websites, online services, and other devices. Please note: third parties’ use of your information will be based on their own privacy policies. Social Media Platforms and Services If you log in with or connect a social media service account to a Charter Works Service, certain information may be available to the social media platform, in which case the social media platform’s use of the shared information will be governed by the social media platform’s privacy policy and your privacy settings for that platform. If you do not want your personal information shared as described, please do not connect your social media platform account with your Charter Works account, and do not participate in social sharing on the Services. Providers of Co-Branded Services Charter may offer co-branded services or features, such as conferences, events, contests, sweepstakes, or other promotions together with a third party (“Co-Branded Services”). 
Co-Branded Services may be hosted by Charter Works or through the third party’s services. Charter may share the information you submit in connection with the Co-Branded Service with the applicable third party, or the third party may receive certain information from you at the same time Charter does. Please note: a third party’s use of your information will be governed by the third party’s privacy policy. Other Users of The Services Any information (including your name, location, email address, profile information, and comments) you choose to submit through the use of certain features of the Services that provide an opportunity to interact with Charter and other Charter users (e.g., community forums, Slack groups) may be publicly available. Charter is not responsible for any information you choose to submit and make public through these channels of communication. Business Transferees In the event of a corporate change in control (for instance, a sale or merger), or during due diligence in contemplation thereof, Charter Works may transfer your personal information to the new party in control or the party acquiring assets. With Your Consent or at Your Direction We may also share your information with your consent. The Services may link to third-party websites and services that are outside our control. We are not responsible for the security or privacy of any information collected by these third parties, which operate pursuant to their respective privacy policies. THIRD PARTY COOKIES AND TRACKING TECHNOLOGIES Cookies are small text files that are stored in your device’s browser when you visit a website. They enable the business that places the cookie to recognize a user across one or more browsing sessions, and across one or more websites.
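The cookie mechanism described above can be sketched with Python's standard library: a server builds a Set-Cookie header, the browser stores the small text file, and it returns the value on later requests so the site can recognize the user across sessions. The cookie name `visitor_id`, the domain, and the lifetime below are hypothetical illustrations, not values Charter actually uses.

```python
from http.cookies import SimpleCookie

# A server-side sketch of how a site might issue a recognition cookie.
# All names and values here are hypothetical.
cookie = SimpleCookie()
cookie["visitor_id"] = "abc123"
cookie["visitor_id"]["domain"] = ".example.com"       # readable across subdomains
cookie["visitor_id"]["path"] = "/"
cookie["visitor_id"]["max-age"] = 60 * 60 * 24 * 365  # persists roughly one year

# This is the header the server would send; the browser stores the
# name/value pair and echoes it back in a Cookie header on later requests,
# which is what lets the cookie's owner recognize the user across sessions.
header = cookie.output(header="Set-Cookie:")
print(header)
```

Deleting or blocking the cookie in the browser (as described under YOUR CHOICES below) breaks this recognition, which is why opt-out tools are typically per-browser and per-device.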
When you use the Services, we and our third-party partners use cookies, pixel tags, device IDs and other similar technologies (collectively, “Cookies”) to collect information from your browser or device for the purposes of information storage and access; personalization of the Services; measurement of and analytics regarding the use of the Services; content selection, delivery, and reporting; and advertising selection, targeting, delivery, and reporting. By using the Services, you consent to our use of cookies and similar technologies. The following types of Cookies are used in the Services: Essential Cookies Essential Cookies enable you to browse our Services and use certain features. Disabling Essential Cookies may prevent you from using certain parts of the Services. These cookies also help keep our Services safe and secure. Preference Cookies Preference Cookies store information such as your login data, if applicable, and website preferences. Disabling Preference Cookies may hinder our ability to remember certain choices you’ve previously made or personalize your browsing experience by providing you with relevant information. Preference cookies can also be used to recognize your device so that you do not have to provide the same information more than once. Performance Cookies Performance Cookies collect information about how you use the Services such as which pages you visit regularly. Performance cookies are used to provide you with a high-quality experience by doing things such as tracking page load, site response times, and error messages. Content and Advertising Cookies Content and Advertising Cookies gather information about your use of our services so that we can provide you with more relevant content and advertising on the Services and elsewhere online and across your devices. Content and Advertising Cookies are also used to gather feedback on customer satisfaction through surveys.
They remember that you’ve visited the Services and help Charter understand usage of the Services. Some Content and Advertising cookies are from third parties that collect information about your use of our Services to provide advertising (on our Services and elsewhere, across your different devices) based on your online activities (so-called “interest-based advertising”). Charter may not have access to these cookies, although we may use statistical information arising from the cookies provided by these third parties to customize content and for the other purposes described above. Please note: Charter does not control the privacy practices of these third parties, and their practices are not covered by this Privacy Policy. YOUR CHOICES There are several ways to minimize tracking of your online activity by third parties, some of which we have summarized below. We hope you find this information to be a helpful reference. Please note: using these tools to opt out of tracking and targeting does not mean that you will not receive advertising while using our Services or on other websites. Controls for Cookies and Online Tracking Choices Since many of these opt-out tools are specific to a device or browser, you will need to opt out on every browser and device that you use. Blocking Cookies in Your Browser. Most browsers let you remove or reject cookies, including cookies used for interest-based advertising. To do this, follow the instructions in your browser settings. Many browsers accept cookies by default until you change your settings. If you wish to opt out of Google Analytics’ tracking, use this browser add-on provided by Google. Blocking advertising ID use in your mobile settings. Your mobile device settings may provide functionality to limit use of the advertising ID associated with your mobile device for interest-based advertising purposes.
For more information about how to change these settings for Apple, Android or Windows devices, see: Apple: http://support.apple.com/kb/HT4228 Android: http://www.google.com/policies/technologies/ads/ Windows: http://choice.microsoft.com/en-US/opt-out Using privacy plug-ins or browsers. You also may use a browser with privacy features, like Brave, or install browser plugins like Privacy Badger, Ghostery or uBlock Origin. These may offer tools to block or limit third-party cookies/trackers. Platform opt-outs. The following advertising platforms offer opt-out features that let you opt out of certain uses of your information for interest-based advertising: Google and Facebook Advertising industry opt-out tools. You can use these opt-out options to limit participating companies’ use of your information for interest-based advertising: Digital Advertising Alliance and Network Advertising Initiative. Please note: opting out of advertising networks’ tracking and targeting does not mean that you will not receive advertising while using our Services or on other websites, nor will it prevent the receipt of interest-based advertising from third parties that do not participate in these programs. It will exclude you, however, from interest-based advertising conducted through participating networks, as provided by their policies and choice mechanisms. Accessing Your Information You can request to access, review, correct, update, delete or modify your registration or subscription profile information (if Charter maintains such information) and modify your marketing preferences (where applicable) by contacting privacy@charterworks.com. Please note: if you have subscribed or registered for more than one of our Services or subscriptions, you may need to update your information for each account separately.
Emails, Newsletters, and Text Messages You may always opt-out of receiving future e-mail marketing messages and newsletters from Charter Works by following the instructions contained within the emails and newsletters, or by e-mailing us at privacy@charterworks.com. You may opt out of receiving promotions or advertising via Text Message at any time, by replying “STOP” to one of our Text Messages. Responding To Requests For your protection, we may only implement requests with respect to the personal information associated with the email address that you use to send us your request and/or on the basis of other information we use to verify you before implementing your request. Please note: we may need to retain certain information for record-keeping purposes and/or to complete any transactions you began prior to requesting such change or deletion (e.g., when you make a purchase or enter a promotion, you may not be able to change or delete the personal information provided until after the completion or cancelation of such purchase or promotion). FOR READERS OUTSIDE THE US Charter Works is a US-based news organization, so we apply US law to our privacy practices. This means that wherever you are in the world, this Privacy Policy will apply to the information you provide to Charter or we collect when you use the Services. OTHER INFORMATION Security We take reasonable security measures to protect your information, including the use of physical, technical, and administrative controls. Please understand, however, that while we try our best to safeguard your personal information once we receive it, no transmission of data over the Internet or any other public network can be guaranteed to be 100% secure. You need to help protect the privacy of your own information. 
You must take precautions to protect the security of any personal information that you transmit over home networks, wireless routers, wireless (WiFi) networks, or similar devices, by using encryption and other techniques to prevent unauthorized persons from intercepting or receiving any of your personal information. You are responsible for the security of your information when using unencrypted, open-access, or otherwise unsecured networks.

Storage
The period for which we keep information varies according to the purpose for which it is used. In some cases, there are legal requirements to keep data for a minimum period. We will retain your Personal Data for the period necessary to fulfill the purposes outlined in this Privacy Policy unless a longer retention period is required or allowed by law.

Children's Information
The Services are not intended for children under 13 years of age. Charter Works does not knowingly collect personal information from children under 13 years of age. If Charter Works discovers that a child under the age of 13 has provided Charter Works with personal information and we do not have parental consent, Charter Works will delete that child's information. If you believe that the company has been provided with the personal information of a child under the age of 13 without parental consent, please notify us immediately at privacy@charterworks.com.

Questions
If you have questions about our Privacy Policy, please contact us at privacy@charterworks.com.

Charter Works Inc. © 2025
https://github.com/python/cpython/commits?author=tirkarthi
Commits · python/cpython · GitHub
Commit History (python/cpython, branch: main, author: tirkarthi)

Commits on Sep 7, 2023
- bpo-38157: Add example about per file output for mock_open. (#16090) [4 people authored; e183a71]

Commits on Aug 24, 2021
- bpo-43826: Fix resource warning due to unclosed objects. (GH-25381) [tirkarthi authored; 7179930]

Commits on Apr 14, 2021
- bpo-43825: Fix deprecation warnings in test_cmd_line and test_collections (GH-25380) [tirkarthi authored; b8509ff]

Commits on Apr 13, 2021
- bpo41515: Fix assert in test which throws SyntaxWarning. (#25379) [tirkarthi authored; eb77133]

Commits on Apr 12, 2021
- bpo-41515: Fix KeyError raised in get_type_hints (GH-25352) [3 people authored; a9cf69d]

Commits on Jul 31, 2020
- bpo-40360: Handle PendingDeprecationWarning in test_lib2to3. (GH-21694) [tirkarthi authored; cadda52]

Commits on Apr 28, 2020
- bpo-39966: Revert "bpo-25597: Ensure wraps' return value is used for magic methods in MagicMock" (GH-19734) [tirkarthi authored; 521c8d6]

Commits on Apr 18, 2020
- bpo-35113: Fix inspect.getsource to return correct source for inner classes (#10307) [tirkarthi authored; 696136b]

Commits on Mar 11, 2020
- bpo-39915: Ensure await_args_list is updated according to the order in which coroutines were awaited (GH-18924) [tirkarthi authored; e553f20]

Commits on Jan 27, 2020
- bpo-25597: Ensure wraps' return value is used for magic methods in MagicMock (#16029) [tirkarthi authored, cjw296 committed; 72b1004]

Commits on Jan 24, 2020
- bpo-38473: Handle autospecced functions and methods used with attach_mock (GH-16784) [tirkarthi authored, cjw296 committed; 66b00a9]

Commits on Jan 15, 2020
- Improve test coverage for AsyncMock. (GH-17906) [tirkarthi authored, cjw296 committed; 54f743e]

Commits on Jan 13, 2020
- bpo-39299: Add more tests for mimetypes and its cli. (GH-17949) [tirkarthi authored; d8efc14]

Commits on Jan 11, 2020
- Fix host in address of socket.create_server example. (GH-17706) [tirkarthi authored; 43682f1]

Commits on Jan 9, 2020
- Add test cases for dataclasses. (#17909) [tirkarthi authored, ericvsmith committed; eef1b02]

Commits on Dec 15, 2019
- bpo-39033: Fix NameError in zipimport during hash validation (GH-17588) [tirkarthi authored, ncoghlan committed; 79f02fe]

Commits on Dec 13, 2019
- bpo-36406: Handle namespace packages in doctest (GH-12520) [tirkarthi authored, brettcannon committed; 8289e27]

Commits on Sep 14, 2019
- bpo-33095: Add reference to isolated mode in -m and script option (GH-7764) [2 people authored, ncoghlan committed; bdd6945]

Commits on Sep 13, 2019
- bpo-12144: Handle cookies with expires attribute in CookieJar.make_cookies (GH-13921) [tirkarthi authored, miss-islington committed; bb41147]
- bpo-36889: Document Stream class and add docstrings (GH-14488) [tirkarthi authored, miss-islington committed; d31b315]

Commits on Sep 12, 2019
- Fix the ImportWarning regarding __spec__ and __package__ being None (GH-16003) [tirkarthi authored, brettcannon committed; 6e1a30b]
- bpo-38120: Fix DeprecationWarning in test_random for invalid type of arguments to random.seed. (GH-15987) [tirkarthi authored, serhiy-storchaka committed; a06d683]

Commits on Sep 11, 2019
- bpo-36528: Remove duplicate re tests. (GH-2689) [2 people authored, benjaminp committed; e6557d3]
- bpo-37651: Document CancelledError is now a subclass of BaseException (GH-15950) [tirkarthi authored, miss-islington committed; 7b69069]
- bpo-35603: Add a note on difflib table header interpreted as HTML (GH-11439) [tirkarthi authored, JulienPalard committed; c78dae8]
- bpo-32972: Document IsolatedAsyncioTestCase of unittest module (GH-15878) [tirkarthi authored, miss-islington committed; 6a9fd66]

Commits on Sep 10, 2019
- bpo-37052: Add examples for mocking async iterators and context managers (GH-14660) [tirkarthi authored, miss-islington committed; c8dfa73]

Commits on Sep 9, 2019
- bpo-37212: Preserve keyword argument order in unittest.mock.call and error messages (GH-14310) [tirkarthi authored, zware committed; 9d60706]
- Fix assertions regarding magic methods function body that was not executed (GH-14154) [tirkarthi authored, lisroach committed; aa51508]

Commits on Aug 29, 2019
- bpo-36871: Ensure method signature is used when asserting mock calls to a method (GH13261) [tirkarthi authored, cjw296 committed; c961278]

Commits on Jul 22, 2019
- bpo-21478: Record calls to parent when autospecced objects are used as child with attach_mock (GH 14688) [tirkarthi authored, cjw296 committed; 7397cda]

Commits on Jul 13, 2019
- bpo-37579: Improve equality behavior for pure Python datetime and time (GH-14726) [tirkarthi authored, pganssle committed; e6b46aa]

Commits on Jun 25, 2019
- bpo-37392: Update the dir(sys) in module tutorial (GH-14365) [tirkarthi authored, vstinner committed; 080b6b4]

Commits on Jun 24, 2019
- bpo-36889: Document asyncio Stream and StreamServer (GH-14203) [tirkarthi authored, asvetlov committed; 6793cce]

Commits on Jun 22, 2019
- bpo-37323: Suppress DeprecationWarning raised by @asyncio.coroutine (GH-14293) [tirkarthi authored, miss-islington committed; 186f709]

© 2026 GitHub, Inc.
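Many of the commits above touch unittest.mock. The first one (bpo-38157) added a documentation example for giving mock_open different output per file. A minimal sketch of that pattern, with made-up filenames and contents for illustration:

```python
from unittest import mock

# Hypothetical per-file contents for the illustration.
FILES = {"a.txt": "alpha", "b.txt": "beta"}

def open_side_effect(name, *args, **kwargs):
    # Hand back a fresh mock_open file handle holding that file's data.
    return mock.mock_open(read_data=FILES[name]).return_value

results = {}
with mock.patch("builtins.open", side_effect=open_side_effect):
    for filename in FILES:
        with open(filename) as f:
            results[filename] = f.read()

print(results)  # prints {'a.txt': 'alpha', 'b.txt': 'beta'}
```

Because patch forwards extra keyword arguments to the MagicMock it creates, the side_effect can route each open() call to a handle carrying that file's data.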
https://www.timeforkids.com/g56/topics/music-and-theater/
TIME for Kids | Music and Theater | Topic | G5-6

End of the Eras (United States, December 19, 2024)
Taylor Swift just wrapped up the highest-grossing concert tour of all time. In December 2023, the pop star's Eras Tour became the first concert tour to make more than a billion dollars. But Swift still had months to go. The…

The Eras Tour: By the Numbers (Time Off, December 21, 2023)
Swift's Eras Tour is making history. It's set to land on five continents. Each performance is a journey through the star's 17-year career, with a 40-plus-song set list that pulls from her studio albums. There are also some surprise songs every night.…

Musical Nature (Arts, January 5, 2023)
Do you play an instrument? Have you ever made one? Terje Isungset can answer yes to both questions. He's a drummer, and he also makes instruments out of something pretty cool: ice! He doesn't use just any frozen stuff. "You…

Music for the People (Arts, December 15, 2022)
Gustavo Dudamel returns to the stage for a second encore. It's opening night for the Los Angeles Philharmonic, in California. As the crowd spots the conductor, the volume inside the Walt Disney Concert Hall surges. Dudamel steps to the podium,…

Broadway is Back (Arts, September 1, 2021)
Don Darryl Rivera recently visited his dressing room at the New Amsterdam Theatre, in New York City. It was the first time he'd been there in almost a year and a half. Nothing had changed. The makeup towel and brushes…

The Kid Report: Broadway is Back (Arts, September 1, 2021)
The story "Broadway is Back," about the reopening of live theater in New York City, appears in this week's issue of TIME for Kids. Below, TFK Kid Reporter David Murtagh shares his perspective. David writes about his conversation with Broadway historian Jennifer…

Musical Revolution (Time Off, July 2, 2020)
A new live musical is coming to Disney+. It's Hamilton, the Broadway show that premiered in 2015. The show is about Alexander Hamilton, the first treasury secretary of the United States. It begins with Hamilton's arrival in New York from…

Making Music (Time Off, June 26, 2020)
Daniel Tashian is a songwriter. He's written songs for famous musicians such as Kacey Musgraves and Brett Eldredge. This month, he released Mr. Moonlight. It's a new album he wrote with his three daughters: Tigerlily, Tinkerbell, and Matilda. The album…

From Our House to Yours (Time Off, April 22, 2020)
Lincoln Center for the Performing Arts, in New York City, is a world-renowned cultural institution. It's a showcase for the best in theater, concerts, and dance. Lincoln Center is temporarily closed because of the coronavirus pandemic. But now, kids can…

Coding Heroes (Time Off, February 19, 2020)
Kids' creations come to life in a new computer game called SuperMe. The game was designed by students in Chicago Public Schools. To make it, they had to learn how to code. They also drew their own superheroes. Then they…

© 2026 TIME USA, LLC. All Rights Reserved.
https://www.timeforkids.com/k1/topics/government/
TIME for Kids | Government | Topic | K-1

A President's Day (United States, December 20, 2023)
Welcome to the White House! The president of the United States lives and works here. The White House is in Washington, D.C. Take a look around. See the work that presidents do. And see how they spend their time off.…

Who Is in Charge? (United States, March 10, 2022)
Countries, states, and cities all have leaders. Read on to learn about different leaders in the United States. The president leads the country. The president is the leader of the United States government. Joe Biden is the current president.…

A Mayor's Job (Community, March 10, 2022)
A mayor is the leader of a city or town. A mayor is elected by a town's voters. Keep reading to learn more about a mayor's job. Mayors manage workers. Mayors help run city departments. These include the police and…

Protecting Our Country (United States, October 14, 2021)
The United States military has different branches, or groups. Each group helps protect the country. Read on to learn more about five of them. Ready to Respond: The Marines are an emergency force. They respond to problems quickly. Marines watch…

This is Congress (United States, March 11, 2021)
The United States government has three branches. Congress is the legislative branch. It makes laws. Congress is made up of the Senate and the House of Representatives. Take a look! This is the U.S. Capitol building. It is in Washington,…

Take a Tour (United States, March 11, 2021)
There are many historic buildings and monuments in Washington, D.C. Learn about some of them on this map. 1. This is the White House. The president and his or her family live here. It is where the president works. 2.…

Biden Wins (United States, November 9, 2020)
Joe Biden has won the 2020 presidential election. He is a former vice president of the United States. He will become the 46th U.S. president. Election Day was November 3. Four days later, the Associated Press made an announcement. Joe…

Who Will Be President? (United States, September 10, 2020)
On November 3, Americans will vote for president of the United States. They might vote for Donald Trump. He is president now. They might vote for Joe Biden. He hopes to become president. Donald Trump: President Donald Trump was born…

Money Matters (United States, January 30, 2020)
People use money to pay for things. There are many different ways to do this. Long ago, people used animals and objects to pay. Today, people use smartphones and other methods. Which of these ways to pay have you seen?…

What's in a Dollar? (United States, January 30, 2020)
An American dollar is a paper bill. How much is a dollar worth? See how these coins add up to make a dollar. These are quarters. There are four quarters in one dollar. These are dimes. There are…
https://www.timeforkids.com/k1/topics/engineering/
TIME for Kids | Engineering | Topic | K-1

My Cool Job: Roller Coaster Engineer (Business, May 21, 2021)
Larry Chickola is the vice president and chief corporate engineer of Six Flags theme parks. He spoke to TFK's Rebecca Mordechai about his job. I've always loved theme parks. Growing up, I would visit Cedar Point amusement park. That's in…

Let's Go Solar! (Technology, October 15, 2020)
Have you noticed solar panels on top of a building? They use the sun's energy. They turn it into electricity that people can use. Take a look! 1. Sunlight hits the solar panels. 2. The panels have tiny…

Get a Lift! (Technology, April 17, 2020)
A pulley uses a wheel and a rope, cord, or chain. Pulleys help us lift things that are heavy. Here are some examples of pulleys at work. Setting Sail: Sailboats use pulleys. Pulleys help raise the sails, turn the boat,…

New Heights (Technology, April 17, 2020)
The world is full of incredible elevators. Many of them use pulleys. Here are a few of TFK's favorite elevators. The world's fastest elevator is in Shanghai Tower, in China. It travels almost as fast as a cheetah can.…

Meet an Engineer (Technology, April 26, 2019)
Roads don't just appear by magic. Neither do sidewalks or traffic lights. These things are carefully planned by engineers. An engineer is a person who designs and builds things. Yung Koprowski (pictured) is an engineer. She works in transportation. She…

Mighty Drones (Technology, April 15, 2019)
A drone is a type of aircraft. Some drones are tiny. Others are the size of an airplane. But drones are very different from airplanes. For one thing, they don't carry people. Here's the scoop on drones. Ready for Takeoff…

A Lifesaving Drone (Technology, April 15, 2019)
Drones can save lives. The drone in this picture is a Zipline drone. It is being launched. A Zipline drone carries medical supplies. It takes them to people who are sick and far from help. A young girl in Africa…

Ready, Set, Code! (Technology, November 29, 2018)
Computers need instructions. That's where coders come in. Coders write programs. Programs tell a computer what to do. Here's the scoop on coding. Problem Solvers: Coders solve problems. They work together. They are creative. Coders are sometimes called programmers. A…

Cities of the Future (World, November 23, 2018)
People are building a new kind of city. This type of city will not be on land. It will float on the ocean. The first floating city will be in French Polynesia. That is in the South Pacific Ocean. Construction…

Where in the World? (World, April 16, 2018)
The world is full of amazing landmarks. Some are tall. Some are long. Some are thousands of years old. Learn about famous structures in five countries. Which structure is your favorite? The United States: The Gateway Arch is in St.…
https://www.php.net/manual/ru/ref.tidy.php
PHP: Tidy - Manual
Tidy

Table of Contents
- ob_tidyhandler — ob_start callback function to repair the buffer
- tidy_access_count — Returns the number of Tidy accessibility warnings encountered for the specified document
- tidy_config_count — Returns the number of Tidy configuration errors encountered for the specified document
- tidy_error_count — Returns the number of Tidy errors encountered for the specified document
- tidy_get_output — Return a string representing the parsed tidy markup
- tidy_warning_count — Returns the number of Tidy warnings encountered for the specified document

User Contributed Notes (2 notes)

patatraboum at nospam dot fr, 19 years ago:
The tidy tree of your favorite! For PHP 5 (CGI). Thanks to john@php.net.

<?php
// Print the Tidy parse tree of a document as an ASCII tree.
$file = "http://www.php.net";

// Map the tidy TAG_* and TYPE_* constants to readable labels.
$cns = get_defined_constants(true);
$tidyCns = array("tags" => array(), "types" => array());
foreach ($cns["tidy"] as $cKey => $cVal) {
    if ($cPos = strpos($cKey, $cStr = "TAG")) {
        $tidyCns["tags"][$cVal] = "$cStr: " . substr($cKey, $cPos + strlen($cStr) + 1);
    } elseif ($cPos = strpos($cKey, $cStr = "TYPE")) {
        $tidyCns["types"][$cVal] = "$cStr: " . substr($cKey, $cPos + strlen($cStr) + 1);
    }
}
$tidyNext = array();

echo "<html><head><meta http-equiv='Content-Type' content='text/html; charset=windows-1252'><title>Tidy Tree :: $file</title></head>";
echo "<body><pre>";

tidyTree(tidy_get_root(tidy_parse_file($file)), 0);

function tidyTree($tidy, $level) {
    global $tidyCns, $tidyNext;
    $tidyTab = array();
    $tidyKeys = array("type", "value", "id", "attribute");
    foreach ($tidy as $pKey => $pVal) {
        if (in_array($pKey, $tidyKeys)) {
            $tidyTab[array_search($pKey, $tidyKeys)] = $pVal;
        }
    }
    ksort($tidyTab);
    foreach ($tidyTab as $pKey => $pVal) {
        switch ($pKey) {
            case 0: // node type
                $value = ($pVal == 4); // only text nodes carry a value
                echo indent(true, $level) . $tidyCns["types"][$pVal] . "\n";
                break;
            case 1: // node value
                if ($value) {
                    echo indent(false, $level) . "VALUE: "
                        . str_replace("\n", "\n" . indent(false, $level), $pVal) . "\n";
                }
                break;
            case 2: // tag id
                echo indent(false, $level) . $tidyCns["tags"][$pVal] . "\n";
                break;
            case 3: // attributes
                if ($pVal != NULL) {
                    echo indent(false, $level) . "ATTRIBUTES: ";
                    foreach ($pVal as $aKey => $aVal) {
                        echo "$aKey=$aVal ";
                    }
                    echo "\n";
                }
        }
    }
    if ($tidy->hasChildren()) {
        $level++;
        $i = 0;
        $tidyNext[$level] = true;
        echo indent(false, $level) . "\n";
        foreach ($tidy->child as $child) {
            $i++;
            if ($i == count($tidy->child)) {
                $tidyNext[$level] = false;
            }
            tidyTree($child, $level);
        }
    } else {
        echo indent(false, $level) . "\n";
    }
}

function indent($tidyType, $level) {
    global $tidyNext;
    $indent = "";
    for ($i = 1; $i <= $level; $i++) {
        if ($i < $level || !$tidyType) {
            $str = $tidyNext[$i] ? "|  " : "   ";
        } else {
            $str = "+--";
        }
        $indent .= $str;
    }
    return $indent;
}

echo "</pre></body></html>";
?>

bill dot mccuistion at qbopen dot com, 21 years ago:
Installing tidy on Fedora Core 2 required three libraries: tidy, tidy-devel, and libtidy, all of which I found at http://rpm.pbone.net. Then, finally, I could "./configure --with-tidy". Hope this helps someone out. This was really hard (for me) to figure out, as it was not clearly documented anywhere else.

Copyright © 2001-2026 The PHP Documentation Group