
Pig-Raising Diary 2022.1.13

Thursday, clear. Talked with 🐖 on the phone today — happy. Five algorithm problems and one project-course lecture; the project is almost wrapped up. I could have finished it today, but I watched a movie instead: The Matrix Resurrections. January is almost half over and time is tight, so I'm a little anxious. Just a little. Good night, sweet 🐖. Good night, baby 🐖. 1:50, Anda

Pig-Raising Diary 2022.1.12

Wednesday, clear. Missed the phone call with 🐖 again today — poor dear. Four problems and one project-course lecture. The project course should wrap up tomorrow, and then the long-awaited MySQL course begins. Missing 🐖 and baby 🐖. 1:13, Anda

Auto-closing the MessageBox dialog in Element UI

🪶🪶🪶 I recently ran into a problem: a MessageBox in Element UI needed to close automatically after 3 seconds, but the official docs don't provide an auto-close method. After a lot of searching, I found that this works: this.$msgbox.close();

Pig-Raising Diary 2022.2.5

Saturday, clear. Five algorithm problems, four MySQL lectures, four STL lectures. Interview questions: STL template library, 21–38. 🐖 went to sleep again without saying good night, and probably didn't take her medicine either. Contest tomorrow; the goal is three problems. Good night, mama 🐖. Good night, baby 🐖. 2:08, Anda

Pig-Raising Diary 2022.1.31

Monday. Catching up on yesterday's diary — I was so sleepy I went straight to bed. Two algorithm problems, five MySQL lectures, one STL lecture. Interview questions: C++ basics, 66–80. Last night 🐖 and I rang in the New Year together on the phone, and 🐖 wrote me a little essay, hehe. Kisses to my lovely 🐖.

Small icons for Markdown

Icons for lazy people: 🌈❤️✨⭐❗❓❕❔✊✌️✋✋☝️☀️☔☁️❄️⛄⚡⛅☎️☎️⌛⏳⏰⌚➿✉️✉️✂️✒️✏️⚽⚾️⛳♠️♥️♣️♦️☕⛪⛺⛲⛵⛵⚓✈️⛽⚠️♨️1️⃣2️⃣3️⃣4️⃣5️⃣6️⃣7️⃣8️⃣9️⃣0️⃣#️⃣◀️⬇️▶️⬅️↙️↘️➡️⬆️↖️↗️⏬⏫⤵️⤴️↩️↪️↔️↕️⏪⏩ℹ️♿㊙️㊗️Ⓜ️⛔✳️❇️✴️♈♉♊♋♌♍♎♏♐♑♒♓⛎❎♻️©️®️™️❌❗‼️⁉️⭕✖️➕➖➗✔️☑️➰〰️〽️▪️▫️◾◽◼️◻️⬛⬜✅⚫⚪
Named icons — rainbow: 🌈, strawberry: 🍓, telephone: 📞, maple leaf: 🍁, footprints: 👣, megaphone: 📣, thumbs-up: 👍, OK: 👌, red apple: 🍎, green apple: 🍏, calendar: 📆, monthly calendar: 📅, time: 🕔, pointing finger: 👉, pine tree: 🌲, chart: 📊, question mark: ❓, folder: 📂, watermelon: 🍉, mailbox: 📧, live broadcast: 🎦.
The codes are as follows. Syntax: &#xCODE; — just copy the code that starts with &. Example: 🌈.
bqcaihong,1=🌈 bqcaomei,1=🍓 bqdianhua,1=📞 bqfengye,1=🍁 bqjiaoyin,1=👣 bqlaba,1=📣 bqmuzhi,1=👍 bqok,1=👌 bqpg,1=🍎 bqpg,2=🍏 bqriqi,1=📆 bqriqi,2=📅 bqshijian,1=🕔 bqshouzhi,1=👉 bqsongshu,1=🌲 bqtongji,1=📊 bqwenhao,1=❓ bqwenjianjia,1=📂 bqxigua,1=🍉 bqyouxiang,1=📧 bqzhibo,1=🎦
More at the original post: https://zhuanlan.zhihu.com/p/147764147
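The &#xCODE; entity syntax above can be checked programmatically. In Python, the standard-library html module decodes such entities, and ord() gives you the code point to build one (the characters used here are just examples):

```python
import html

# "&#x1F308;" is the HTML entity for U+1F308 RAINBOW.
print(html.unescape("&#x1F308;"))  # 🌈

# Going the other way: build the entity from a character's code point.
def to_entity(ch: str) -> str:
    return "&#x%X;" % ord(ch)

print(to_entity("🌈"))  # &#x1F308;
```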

[Repost] 49 CSS tips you may not know

Original: https://juejin.cn/post/6844903902123393032
01. [Negative margins] 💘 What negative margins do. Left and right behave differently: a negative left margin moves the element left, while a negative right margin pulls the following content left. Top and bottom work analogously.
02. [shape-outside] ❤ Don't flatter yourself — you may think you're square, but to others you're round.
03. [BFC in practice] 💓 Using a block formatting context to prevent margin collapsing.
04. [BFC in practice] 💔 Using a block formatting context to contain floats.
05. [Little-known flex facts, part 1] 💕 The magical uses of margin: auto inside a flex layout.
06. [Little-known flex facts, part 2] 💖 When the flex-grow values sum to less than 1, only that fraction of the remaining space is distributed, not all of it.
07. [input width] 💗 display: block doesn't automatically fill the parent's width for every element — input is an exception; its default width depends on its size attribute.
08. [Positioning] 💙 With absolute or fixed positioning, setting both left and right implicitly sets the width.
09. [Stacking contexts] 💚 A junior is a junior — however capable, it can never outrank its parent's stacking context.
10. [Sticky positioning] 💛 position: sticky needs a final resting position to take effect. Chrome has a bug here; Firefox handles it perfectly.
11. [Adjacent sibling selector] 💜 Common use cases for the adjacent sibling selector.
12. [Modals] 🖤 rgba() is a simple way to give a modal a translucent backdrop.
13. [Triangles] 💝 How CSS draws triangles.
14. [Table layout] 💞 Equal-height columns with display: table.
15. [Color contrast] ❣ Red text on a blue background is hard to read because the contrast is low — not a good color pairing 😂
16. [Fixed aspect ratio] ♥ How CSS keeps a fixed aspect ratio: percentage padding is relative to the containing block's width, not its height.
17. [Animation direction] 🐹 animation-direction: alternate runs the animation back and forth.
18. [Linear gradients] 🐮 How to draw ribbons with CSS gradients.
19. [Hiding text] 🐯 Two ways to hide text content.
20. [Centering] 🐰 A simple way to center things.
21. [Conic gradients] 🐲 The new conic gradient — handy for pie charts.
22. [background-position percentages] 🐍 The right way to read them: the given percentage point of the image aligns with the same percentage point of the container.
23. [New background-repeat values] 🐴 round and space: the former rounds to a whole number of tiles, the latter leaves gaps between them.
24. [background-attachment] 🐐 Controls how the background attaches to its container; note the difference between local and fixed.
25. [Animation delay] 🐵 Adding delays can stagger animations out of step.
26. [outline] 🐔 outline draws a border without taking up space — it can even be drawn inside the element.
27. [Background positioning] 🐶 When a fixed background doesn't scroll with its element, it is positioned relative to the viewport.
28. [tab-size] 🐷 Browsers render a tab as 8 spaces by default; tab-size changes that width.
29. [Pausing animations] 🥝 CSS animations can actually be paused.
30. [object-fit] 🍓 For an image with fixed dimensions, object-fit: contain or cover preserves its aspect ratio.
31. [Cursor states] 🍒 When a button is disabled, don't forget to set the cursor state.
32. [Background blur] 🍑 Blurring a backdrop with CSS filters.
33. [fill-available] 🍏 width: fill-available makes an inline-block fill the available space like a block.
34. [fit-content] 🍎 width: fit-content makes a block shrink-wrap its content like an inline-block.
35. [Custom properties] 🍋 Basic use of CSS custom properties.
36. [min-content/max-content] 🍍 width can be min-content or max-content: the former shrinks the content as much as possible, the latter expands it as much as possible.
37. [Progress bars] 🍊 A progress bar from a single div, using a gradient.
38. [Printing] 🍉 Page-related properties apply when printing, e.g. page-break-before to force a new page.
39. [Frame-by-frame animation] 🍌 Frame animation with CSS sprites.
40. [resize] 🍐 Ordinary elements can be made resizable, just like a textarea.
41. [Breadcrumbs] 🍇 Breadcrumbs with the ::before pseudo-element.
42. [Sticky footer] 🍈 A sticky footer with grid layout.
43. [Animation fill state] 🍅 CSS can hold the state before an animation starts and after it ends.
44. [Negative animation delay] 🥑 A negative delay acts as if the animation had already been running for that long.
45. [Transitions] 🍆 The magic of love goes round and round.
46. [Animation demo] 🍬 How the water-ripple effect works.
47. [Animation demo] 🌸 How the CSS bouncing-ball animation works.
48. [outline] 🌻 Clever uses of the outline property.
49. [grid] 💕 Firefox's grid layout inspector.
Hope this helps. You're also welcome to read my mini-book on JS regular expressions. That's all.

A collection of quality websites, continually updated~

Categories:
🌏 Navigation: 悠悠国外网, 极客导航, 微页网站目录, 产品经理导航, 站长目录, 萌导航, 淘站目录, 设计导航, 有趣网址之家 — to be continued~
🚀 Tech: cplusplus, Git 教程, 鸟哥 Linux 教程, LoadRunner, LoadRunner (2), 领测国际, Linux 就该这么学, 云+社区, 阿里 Java 技术图谱, 数据库开发者社区, 阿里云开发者社区, 美团技术团队 — to be continued~
🔨 Tools: 开发者搜索 — to be continued~
💧 Resources: DOOOOR, 小众软件, MSDN 多多软件站 — to be continued~
📰 News: none yet — to be continued~

Pig-Raising Diary 2021.12.23

Thursday, clear. I thought today was Wednesday, but it's already Thursday — tomorrow is Qt day again. Did one LeetCode problem, watched four project-course lectures, and wrote some Qt in the evening. 🐖 bought a puppy plush for her friend; it's super cute, and since 🐖 saw I liked it too, she bought one for me as well~ Tomorrow night is Christmas Eve. Back then I gave 🐖 a "peace apple", which was also the first time 🐖 formally met me~ Time flies — it's been almost a year with my 🐖. So tomorrow is already Friday; huh, it feels like this week lost a day. 0:35, Zhengxin 415

Pig-Raising Diary 2021.12.19

Sunday, clear. Did two LeetCode problems and one Kick Start problem today, watched three project-course lectures, and finished chapter two. Reached question 13 of C++ basics in the A-Xiu notes. 🐖 only came out in the evening — 🐖 stayed in the dorm eating chicken hotpot; 🐖 doesn't miss me. A year ago today I added 🐖 on QQ. I miss 🐖. I'd like to try a LeetCode weekly contest for fun, but I'm afraid solving nothing at all would be too demoralizing. I'll do some past problems first to get a feel, then sign up for one. 0:22, Zhengxin 419

STM32 UART data reception via interrupts (DMA + IDLE interrupt)

// Transmit straight from the buffer; this only kicks off the DMA transfer.
// (Adapted from the USB CDC usage, changed to a write-into-buffer scheme.)
void Uart1_DMATranmist(u16 nSendCount)
{
    USART_DMACmd(USART1, USART_DMAReq_Tx, ENABLE);  // enable the USART1 TX DMA request
    DmaSendDataProc(DMA2_Strea

Django quiz system: keeping the exam page's state across refreshes

☀️ I've been building a quiz/exam system lately. With most of the back-end logic done, I happily ran a round of testing. Batch-importing data in the admin: 👍 check. Teacher publishes a paper and batch-imports the exam students: 👍 check. And so on... Nice, it all works — just polish left. Success! So let's walk through the whole flow. 🎓 Teacher logs in, creates a paper, imports student info... Log in as a student, enter the exam, pick some options — looks fine... Then my machine hiccuped and the browser refreshed. Huh... ❓❗❗❔❕❕ The timer restarted and the exam answers were gone. oh 💩... With only a little front-end knowledge, I had no idea where to start. Suddenly 🌟🌟🌟 it hit me: just use the browser's localStorage, hhhhhh 👏👏👏👏

Saving the answers is the easy part: store each answer as it is given, and on page refresh read the saved answers back from the cache. Here's the code:

obj = {};

// save: record the answer for question `event`
function check(event, answer) {
    let id = "#" + event;
    obj[event.toString()] = answer;
    localStorage.setItem('obj', JSON.stringify(obj));
    $(id).attr("class", 'answer_nocolor1');  // restyle the answer sheet to mark the question as answered
}

// read back: is there anything in localStorage?
function checkStorage() {
    return JSON.stringify(localStorage) == "{}" ? false : true;
}

// read the saved answers
function getStorage() {
    let answer_list = JSON.parse(localStorage.getItem('obj'));
    return answer_list;
}

// my approach: find each saved question's chosen option and simulate a click
function getStorage_answer() {
    let answer_list = getStorage();
    for (let k in answer_list) {
        let check = 'div#m' + k.toString() + '.panel-body>ul>li>label>input#' + answer_list[k];
        $(check).click();
    }
}

// called on page load
if (checkStorage()) {
    getStorage_answer();
}

OK, the answers are handled — what about the timer? Use the cache too? No no no ☝️☝️☝️, too much trouble. Luckily, the back end already sends the exam's start time and duration when the page loads, so the remaining countdown is just the difference between the exam's end time and the current time. Problem solved. Next up: more optimization... 🏃🏃🏃
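On the Django side, the countdown the page needs can be computed from the stored start time and duration. A minimal sketch in plain Python — the function name and arguments are my own, not the project's actual model fields:

```python
from datetime import datetime, timedelta

def remaining_seconds(exam_start: datetime, duration_minutes: int, now: datetime) -> int:
    """Seconds left until the exam ends; never negative."""
    end = exam_start + timedelta(minutes=duration_minutes)
    left = (end - now).total_seconds()
    return max(0, int(left))

# A 60-minute exam that started at 10:00, queried at 10:30: 1800 s remain.
print(remaining_seconds(datetime(2022, 1, 1, 10, 0), 60, datetime(2022, 1, 1, 10, 30)))
```

Because the remaining time is recomputed from the server-sent start time on every load, refreshing the page can't reset the clock.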

Codeforces Round #599 (Div. 1) C. Sum Balance (graph theory, DP)

C. Sum Balance

Ujan has a lot of numbers in his boxes. He likes order and balance, so he decided to reorder the numbers. There are k boxes numbered from 1 to k. The i-th box contains n_i integer numbers. The integers can be negative. All of the integers are distinct. Ujan is lazy, so he will do the following reordering of the numbers exactly once. He will pick a single integer from each of the boxes, k integers in total. Then he will insert the chosen numbers — one integer in each of the boxes, so that the number of integers in each box is the same as in the beginning. Note that he may also insert an integer he picked from a box back into the same box. Ujan will be happy if the sum of the integers in each box is the same. Can he achieve this and make the boxes perfectly balanced, like all things should be?

Input
The first line contains a single integer k (1 ≤ k ≤ 15), the number of boxes. The i-th of the next k lines first contains a single integer n_i (1 ≤ n_i ≤ 5000), the number of integers in box i, followed by n_i integers a_{i,1}, …, a_{i,n_i} (|a_{i,j}| ≤ 10^9), the integers in the i-th box. It is guaranteed that all a_{i,j} are distinct.

Output
If Ujan cannot achieve his goal, output "No" in a single line. Otherwise output "Yes" on the first line, followed by k lines; the i-th of these lines should contain two integers c_i and p_i, meaning that Ujan should pick the integer c_i from the i-th box and place it in the p_i-th box afterwards. If there are multiple solutions, output any of them. You can print each letter in any case (upper or lower).

Examples

input
4
3 1 7 4
2 3 2
2 8 5
1 10
output
Yes
7 2
2 3
5 1
10 4

input
2
2 3 -2
2 -1 5
output
No

input
2
2 -10 10
2 0 -20
output
Yes
-10 2
-20 1

Note
In the first sample, Ujan can put the number 7 in the 2nd box, the number 2 in the 3rd box, the number 5 in the 1st box and keep the number 10 in the same 4th box. Then the boxes will contain numbers {1, 5, 4}, {3, 7}, {8, 2} and {10}; the sum in each box is then equal to 10. In the second sample, it is not possible to pick and redistribute the numbers in the required way. In the third sample, one can swap the numbers −20 and −10, making the sum in each box equal to −10.

Problem summary: there are k boxes, each holding some numbers, all distinct across all boxes. Take one number out of every box, then put the k taken numbers back, one into each box, so that every box keeps its original size and all boxes end up with the same sum. Decide whether this is possible and output a valid assignment.

Solution: let sum be the total of all numbers; sum % k == 0 is clearly necessary, and the final per-box sum is FinalSum = sum / k. Suppose c[i] is the number taken out of box i and sum[i] is the sum of box i. After removing c[i], box i needs to receive the number FinalSum − sum[i] + c[i]. Build a graph: letting x = c[i] and y = FinalSum − sum[i] + c[i], add a directed edge x → y, meaning "if we take x out of this box, we must put y back". This graph decomposes into a collection of cycles with no shared edges. Once the cycles are extracted, the remaining task is to select a set of cycles whose boxes together are exactly the k boxes, each appearing once. That is a classic subset DP: enumerate subsets and combine.

Code

#include <bits/stdc++.h>
using namespace std;
const int maxn = 16;
vector<pair<pair<int, long long>, int>> ans[1 << maxn];
map<long long, int> color;
vector<int> a[maxn];
long long sum[maxn];
int k;
int dp[1 << maxn];

int main() {
    scanf("%d", &k);
    long long all_sum = 0;
    for (int i = 0; i < k; i++) {
        int n;
        scanf("%d", &n);
        for (int j = 0; j < n; j++) {
            long long x;
            scanf("%lld", &x);
            a[i].push_back(x);
            all_sum += x;
            sum[i] += x;
            color[a[i][j]] = i;  // which box each value belongs to
        }
    }
    if (all_sum % k != 0) {
        puts("No");
        return 0;
    }
    all_sum /= k;  // the final per-box sum
    // Walk the functional graph from every value; each successful walk is a
    // cycle visiting a set of distinct boxes, recorded as a bitmask.
    for (int i = 0; i < k; i++) {
        for (int j = 0; j < (int)a[i].size(); j++) {
            long long cur = a[i][j];
            int used = 0;
            bool isOk = true;
            vector<pair<pair<int, long long>, int>> an;
            do {
                int cl = 0;
                auto it = color.find(cur);
                if (it != color.end()) {
                    cl = it->second;
                } else {
                    isOk = false;
                    break;
                }
                if (used & (1 << cl)) {  // a box may appear only once per cycle
                    isOk = false;
                    break;
                }
                used |= (1 << cl);
                cur = cur + (all_sum - sum[cl]);  // the value box cl must receive
                auto cl2 = color.find(cur);
                if (cl2 != color.end()) {
                    an.push_back({{cl2->second, cur}, cl});
                }
            } while (cur != a[i][j]);
            if (isOk) {
                dp[used] = 1;
                ans[used] = std::move(an);
            }
        }
    }
    // Subset DP: a mask is feasible if it splits into two feasible submasks.
    for (int i = 0; i < (1 << k); i++) {
        if (dp[i]) continue;
        for (int j = i; j > 0; j = (j - 1) & i) {
            if (dp[j] && dp[i & (~j)]) {
                dp[i] = 1;
                ans[i] = ans[j];
                for (auto &x : ans[i & (~j)]) {
                    ans[i].push_back(x);
                }
                break;
            }
        }
    }
    int x = (1 << k) - 1;
    if (dp[x]) {
        cout << "Yes" << endl;
        sort(ans[x].begin(), ans[x].end());
        for (auto a : ans[x]) {
            cout << a.first.second << " " << a.second + 1 << endl;
        }
    } else {
        cout << "No" << endl;
    }
}
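The cycle-finding plus subset DP above can also be sketched in Python — a compact reimplementation of the editorial's idea (my own function, not the contest submission), returning one (picked value, 1-based target box) pair per box, or None:

```python
def sum_balance(boxes):
    """Per box i, return (value picked from box i, 1-based target box), or None."""
    k = len(boxes)
    total = sum(map(sum, boxes))
    if total % k:
        return None
    target = total // k
    box_sum = [sum(b) for b in boxes]
    owner = {v: i for i, b in enumerate(boxes) for v in b}  # value -> its box

    # Follow the functional graph x -> target - box_sum[owner(x)] + x and
    # record every simple cycle, keyed by the bitmask of boxes it visits.
    cycles = {}
    for b in boxes:
        for start in b:
            cur, mask, moves, ok = start, 0, [], True
            while True:
                cl = owner.get(cur)
                if cl is None or (mask >> cl) & 1:
                    ok = False
                    break
                mask |= 1 << cl
                cur += target - box_sum[cl]  # the value box cl must receive
                if cur in owner:
                    moves.append((cur, owner[cur], cl))  # move cur into box cl
                if cur == start:
                    break
            if ok:
                cycles[mask] = moves

    # Subset DP: cover the full set of boxes with disjoint cycles.
    best = [None] * (1 << k)
    best[0] = []
    for m in range(1, 1 << k):
        if m in cycles:
            best[m] = cycles[m]
            continue
        sub = (m - 1) & m
        while sub:
            if best[sub] is not None and best[m ^ sub] is not None:
                best[m] = best[sub] + best[m ^ sub]
                break
            sub = (sub - 1) & m
    full = (1 << k) - 1
    if best[full] is None:
        return None
    res = [None] * k
    for val, frm, to in best[full]:
        res[frm] = (val, to + 1)
    return res
```

On the first sample this reproduces the expected assignment, and on the second it correctly reports impossibility.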

Double 11: tech workers' favorite buys — did any of these cue you?

For Double 11 2020, let's each become a contributor to the hundred-billion-yuan mega-project! From needle and thread to renovating and buying a home, from flight passes to outfits — buy, buy, buy. Below, Baixiaosheng rounds up tech workers' favorite Double 11 buys. See if any of them cue you~

Operations girl 👇👇👇 Digs up the freshest trends, stays up the latest nights, writes the most moving copy. To stay young and pretty, eye cream is a must-buy — not a single wrinkle gets a chance to stay!

Front-end developer 👇👇👇 Writes the longest code, fixes the most annoying bugs, works the latest overtime. The life may not be colorful, but the plaid shirts can be — today the whole rainbow goes in the cart!

Receptionist 👇👇👇 365 days a year, pretty all year round. To be the best-looking face of the company, today's newest lipstick shades must all be snapped up!

Product manager 👇👇👇 Can't sleep at night, can't wake up in the morning, runs on coffee alone. Stocking a year's worth of coffee on Double 11 is the only way to feel safe~

Sales veteran 👇👇👇 At the last dinner, Director Wang said this baijiu wasn't bad. The customer's pleasure is my greatest pleasure — two cases, please!

Ops guy 👇👇👇 Last year's Alibaba Cloud server ran a whole year without a single problem. Simple to maintain, easy to operate — what a deal! For Double 11, Alibaba Cloud has deep discounts: 1-core 2 GB burstable t5 from ¥96.9/year; 2-core 4 GB shared s6 from ¥226.08/year; 2-core 4 GB compute c6a from ¥469.39/year. Buying it is winning — stock up! Alibaba Cloud server deals: https://www.aliyun.com/1111/ecs — grab the tail end of the Double 11 discounts~

Follow Baixiaosheng for chats about cloud computing.

Postman Chinese-language pack download

Postman (Chinese localization) download. Windows 10 is used as the example; for other systems, read the Postman-cn page: https://github.com/hlmd/Postman-cn
Chinese language pack: https://github.com/hlmd/Postman-cn/releases
Download (pick the Postman version matching the latest language-pack release).
Latest version: use the official download page — Win64, Win32, Mac, Linux.
Note 👇👇👇👇👇 see below 👇👇👇👇👇 Note
Historical versions: in the links below, replace "版本号" with the version number you want, e.g. 8.8.0:
Windows 64-bit: https://dl.pstmn.io/download/version/版本号/win64
Windows 32-bit: https://dl.pstmn.io/download/version/版本号/win32
Mac: https://dl.pstmn.io/download/version/版本号/osx
Linux: https://dl.pstmn.io/download/version/版本号/linux
Installing the language pack:
1. Download the app.zip matching your version.
2. Go to the <Postman install dir>/<version>/resources directory. The default install dir is C:/Users/<user>/AppData/Local/Postman, so for example C:/Users/<user>/AppData/Local/Postman/app-8.8.0/resources.
3. Copy app.zip into the resources directory and extract the app folder.
4. Restart Postman.
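For scripting historical-version downloads, the URL pattern above is easy to template. A small sketch — the helper name and platform keys are my own; only the URL pattern comes from the post:

```python
def postman_url(version: str, platform: str) -> str:
    """Build a versioned Postman download URL from the documented pattern."""
    # Postman's download service uses "osx" as the macOS path segment.
    suffix = {"win64": "win64", "win32": "win32", "mac": "osx", "linux": "linux"}[platform]
    return f"https://dl.pstmn.io/download/version/{version}/{suffix}"

print(postman_url("8.8.0", "win64"))
# https://dl.pstmn.io/download/version/8.8.0/win64
```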

oeasy teaches you Linux 010108 — which one exactly? which

Which one exactly? Recap 😌 — last time we covered whereis, which finds where a command lives. What if I want to find whereis itself? 🤔

whereis whereis

whereis reports three kinds of locations: the binary, the source, and the manual page. We found the location of the source for ls, but sometimes we face this problem: a command has several binaries associated with it. Which one do we actually run? 🤔 Which one exactly? which 🤔

Say we want to know where our java is: 🙄

whereis java

To list only the binaries:

whereis -b java

Still several of them — which one exactly? 🤔 Say we want to know where the java we actually use is:

which java

This gives us the first of the binaries — the location on disk of the one that runs when we type the command. Let's play 🤗 — feed all sorts of commands to which:

which pwd
which uname
which whatis
which whereis

Now we have three soul-searching questions ✊: whatis — who are you; whereis — where are you; which — where exactly. With these three commands we can learn any command's purpose and location — let's call them the three soul questions! 👊 Try them on the cat command:

whatis cat
whereis cat
which cat

With these three commands we can get the basic information about any command! Which command shall we interrogate next? 🤔 Until next time! 👋
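For comparison, Python's standard library answers the same "which one exactly?" question with shutil.which, which scans PATH just as the shell does (the command names here are only examples):

```python
import shutil

# shutil.which returns the first matching executable on PATH, or None.
print(shutil.which("ls"))                   # e.g. /usr/bin/ls
print(shutil.which("no-such-command-123"))  # None
```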


Manthan, Codefest 19 (open for everyone, rated, Div. 1 + Div. 2) G. Polygons (number theory)

G. Polygons

Description
You are given two integers n and k. You need to construct k regular polygons having the same circumcircle, with distinct numbers of sides l between 3 and n. (Illustration for the first example.) You can rotate them to minimize the total number of distinct points on the circle. Find the minimum number of such points.

Input
The only line of input contains two integers n and k (3 ≤ n ≤ 10^6, 1 ≤ k ≤ n − 2), the maximum number of sides of a polygon and the number of polygons to construct, respectively.

Output
Print a single integer — the minimum number of points required for k polygons.

Examples

input
6 2
output
6

input
200 50
output
708

Note
In the first example, we have n = 6 and k = 2, so there are 4 polygons with 3, 4, 5 and 6 sides to choose from. If we choose the triangle and the hexagon, we can arrange them as shown in the picture in the statement. Hence the minimum number of points required on the circle is 6, which is also the minimum over all possible sets.

Problem summary: given n and k, choose k regular polygons, each with between 3 and n vertices, sharing one circumcircle, so that the total number of distinct points on the circle is minimized.

Solution: a simple observation: if b divides a, then once the a-gon is chosen, adding the b-gon (sharing the same circumcircle) contributes no new points at all. Fix a common point P on the circle; the vertices of a regular m-gon then sit at fractions 1/m, 2/m, …, (m−1)/m of the circle away from P. The answer is therefore the number of distinct fractions produced by all the chosen polygons. If we make sure that before choosing any m, all of m's divisors have already been chosen, each m-gon contributes exactly φ(m) new points (the fractions with denominator m in lowest terms), so the answer is a sum of Euler's totient values: sort φ(1), …, φ(n) and sum the k + 2 smallest (the +2 accounts for the trivial "1-gon" and "2-gon", which cost φ(1) + φ(2) = 2 points and let every real polygon pay only its totient). The k = 1 case is just a triangle: 3.

Code

#include <bits/stdc++.h>
using namespace std;
int n, k;
const int maxn = 1e6 + 7;
int phi[maxn];

// Sieve Euler's totient for 1..n.
void get_phi(int n) {
    iota(phi, phi + n + 1, 0);
    for (int i = 2; i <= n; i++) {
        if (phi[i] == i) {  // i is prime
            phi[i] = i - 1;
            for (int j = 2 * i; j <= n; j += i) {
                phi[j] = (phi[j] / i) * (i - 1);
            }
        }
    }
}

int main() {
    cin >> n >> k;
    if (k == 1) {
        cout << "3" << endl;
        return 0;
    }
    k = k + 2;
    get_phi(n);
    sort(phi + 1, phi + 1 + n);
    cout << accumulate(phi + 1, phi + 1 + k, 0ll) << endl;
}
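The totient-sieve solution ports directly to Python; a sketch for checking the samples (my own port, same algorithm):

```python
def min_points(n: int, k: int) -> int:
    # Special case: one polygon — a triangle — needs 3 points.
    if k == 1:
        return 3
    # Sieve Euler's totient for 1..n.
    phi = list(range(n + 1))
    for i in range(2, n + 1):
        if phi[i] == i:  # i is prime
            for j in range(i, n + 1, i):
                phi[j] -= phi[j] // i
    # Sum the k+2 smallest totients; the +2 covers the trivial 1- and 2-"gons".
    return sum(sorted(phi[1:])[:k + 2])

print(min_points(6, 2))    # 6
print(min_points(200, 50)) # 708
```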

cs231n Assignment 1 code

1.KNNknn_nearest_neighbor.pyfrombuiltinsimportrangefrombuiltinsimportobjectimportnumpyasnpfrompast.builtinsimportxrangeclassKNearestNeighbor(object):"""akNNclassifierwithL2distance"""def__init__(self):passdeftrain(self,X,y):"""Traintheclassifier.Fork-nearestneighborsthisisjustmemorizingthetrainingdata.Inputs:-X:Anumpyarrayofshape(num_train,D)containingthetrainingdataconsistingofnum_trainsampleseachofdimensionD.-y:Anumpyarrayofshape(N,)containingthetraininglabels,wherey[i]isthelabelforX[i]."""self.X_train=Xself.y_train=ydefpredict(self,X,k=1,num_loops=0):"""Predictlabelsfortestdatausingthisclassifier.Inputs:-X:Anumpyarrayofshape(num_test,D)containingtestdataconsistingofnum_testsampleseachofdimensionD.-k:Thenumberofnearestneighborsthatvoteforthepredictedlabels.-num_loops:Determineswhichimplementationtousetocomputedistancesbetweentrainingpointsandtestingpoints.Returns:-y:Anumpyarrayofshape(num_test,)containingpredictedlabelsforthetestdata,wherey[i]isthepredictedlabelforthetestpointX[i]."""ifnum_loops==0:dists=self.compute_distances_no_loops(X)elifnum_loops==1:dists=self.compute_distances_one_loop(X)elifnum_loops==2:dists=self.compute_distances_two_loops(X)else:raiseValueError('Invalidvalue%dfornum_loops'%num_loops)returnself.predict_labels(dists,k=k)defcompute_distances_two_loops(self,X):"""ComputethedistancebetweeneachtestpointinXandeachtrainingpointinself.X_trainusinganestedloopoverboththetrainingdataandthetestdata.Inputs:-X:Anumpyarrayofshape(num_test,D)containingtestdata.Returns:-dists:Anumpyarrayofshape(num_test,num_train)wheredists[i,j]istheEuclideandistancebetweentheithtestpointandthejthtrainingpoint."""num_test=X.shape[0]num_train=self.X_train.shape[0]dists=np.zeros((num_test,num_train))foriinrange(num_test):forjinrange(num_train):######################################################################TODO:##Computethel2distancebetweentheithtestpointandthejth##trainingpoint,andstoretheresultindists[i,j].Youshould##notusealoopoverdimension,norusenp.linalg.norm().#
######################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****dists[i][j]=np.sqrt(np.sum(np.square(self.X_train[j,:]-X[i,:])))pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returndistsdefcompute_distances_one_loop(self,X):"""ComputethedistancebetweeneachtestpointinXandeachtrainingpointinself.X_trainusingasingleloopoverthetestdata.Input/Output:Sameascompute_distances_two_loops"""num_test=X.shape[0]num_train=self.X_train.shape[0]dists=np.zeros((num_test,num_train))foriinrange(num_test):########################################################################TODO:##Computethel2distancebetweentheithtestpointandalltraining##points,andstoretheresultindists[i,:].##Donotusenp.linalg.norm().#########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****dists[i,:]=np.sqrt(np.sum(np.square(X[i]-self.X_train),axis=1))pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returndistsdefcompute_distances_no_loops(self,X):"""ComputethedistancebetweeneachtestpointinXandeachtrainingpointinself.X_trainusingnoexplicitloops.Input/Output:Sameascompute_distances_two_loops"""num_test=X.shape[0]num_train=self.X_train.shape[0]dists=np.zeros((num_test,num_train))##########################################################################TODO:##Computethel2distancebetweenalltestpointsandalltraining##pointswithoutusinganyexplicitloops,andstoretheresultin##dists.####Youshouldimplementthisfunctionusingonlybasicarrayoperations;##inparticularyoushouldnotusefunctionsfromscipy,##norusenp.linalg.norm().####HINT:Trytoformulatethel2distanceusingmatrixmultiplication##andtwobroadcastsums.###########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****X_test_squ_array=np.sum(np.square(X),axis=1)X_test_squ=np.tile(X_test_squ_array.reshape(num_test,1),(1,num_train))#printX_test_squ.shapeX_train_squ_array=np.sum(np.s
quare(self.X_train),axis=1)X_train_squ=np.tile(X_train_squ_array,(num_test,1))#printX_train_squ.shapex_te_tr=np.dot(X,self.X_train.T)#printx_te_tr.shapedists=X_test_squ+X_train_squ_array-2*x_te_trpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returndistsdefpredict_labels(self,dists,k=1):"""Givenamatrixofdistancesbetweentestpointsandtrainingpoints,predictalabelforeachtestpoint.Inputs:-dists:Anumpyarrayofshape(num_test,num_train)wheredists[i,j]givesthedistancebetwentheithtestpointandthejthtrainingpoint.Returns:-y:Anumpyarrayofshape(num_test,)containingpredictedlabelsforthetestdata,wherey[i]isthepredictedlabelforthetestpointX[i]."""num_test=dists.shape[0]y_pred=np.zeros(num_test)foriinrange(num_test):#Alistoflengthkstoringthelabelsoftheknearestneighborsto#theithtestpoint.closest_y=[]##########################################################################TODO:##Usethedistancematrixtofindtheknearestneighborsoftheith##testingpoint,anduseself.y_traintofindthelabelsofthese##neighbors.Storetheselabelsinclosest_y.##Hint:Lookupthefunctionnumpy.argsort.###########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****kids=np.argsort(dists[i])closest_y=self.y_train[kids[:k]]pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****##########################################################################TODO:##Nowthatyouhavefoundthelabelsoftheknearestneighbors,you##needtofindthemostcommonlabelinthelistclosest_yoflabels.##Storethislabeliny_pred[i].Breaktiesbychoosingthesmaller##label.###########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****count=0label=0forjinclosest_y:tmp=0forkkinclosest_y:tmp+=(kk==j)iftmp>count:count=tmplabel=jy_pred[i]=labelpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returny_predQuestion1InlineQuestion1Noticethestructuredpatternsinthedistancematrix,wheresomerowsorcolumnsarevisiblebrighter.(Notethatwiththedef
aultcolorschemeblackindicateslowdistanceswhilewhiteindicateshighdistances.)Whatinthedataisthecausebehindthedistinctlybrightrows?Whatcausesthecolumns?Y𝑜𝑢𝑟𝐴𝑛𝑠𝑤𝑒𝑟:Thebrightrowsmeansthatthetestpictureisdifferentfromallthetrainimages.Andthedifferenceformtraintothetestscausethebrightcolumns.Question2Thegeneralstandarddeviation𝜎andpixel-wisestandarddeviation𝜎𝑖𝑗isdefinedsimilarly.WhichofthefollowingpreprocessingstepswillnotchangetheperformanceofaNearestNeighborclassifierthatusesL1distance?Selectallthatapply.Subtractingthemean𝜇(𝑝̃(𝑘)𝑖𝑗=𝑝(𝑘)𝑖𝑗−𝜇.)Subtractingtheperpixelmean𝜇𝑖𝑗(𝑝̃(𝑘)𝑖𝑗=𝑝(𝑘)𝑖𝑗−𝜇𝑖𝑗.)Subtractingthemean𝜇anddividingbythestandarddeviation𝜎.Subtractingthepixel-wisemean𝜇𝑖𝑗anddividingbythepixel-wisestandarddeviation𝜎𝑖𝑗.Rotatingthecoordinateaxesofthedata.Y𝑜𝑢𝑟𝐴𝑛𝑠𝑤𝑒𝑟:1,2,3Y𝑜𝑢𝑟𝐸𝑥𝑝𝑙𝑎𝑛𝑎𝑡𝑖𝑜𝑛:Thechoice1,2and3aretheNormalizedprcessmethods,sotheyareright.AndtheL1isboundtothesetofCoordinateSystemandthechoice5iswrong交叉验证num_folds=5k_choices=[1,3,5,8,10,12,15,20,50,100]X_train_folds=[]y_train_folds=[]#################################################################################TODO:##Splitupthetrainingdataintofolds.Aftersplitting,X_train_foldsand##y_train_foldsshouldeachbelistsoflengthnum_folds,where##y_train_folds[i]isthelabelvectorforthepointsinX_train_folds[i].##Hint:Lookupthenumpyarray_splitfunction.##################################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****X_train_folds=np.split(X_train,5,axis=0)y_train_folds=np.split(y_train,5,axis=0)pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****#Adictionaryholdingtheaccuraciesfordifferentvaluesofkthatwefind#whenrunningcross-validation.Afterrunningcross-validation,#k_to_accuracies[k]shouldbealistoflengthnum_foldsgivingthedifferent#accuracyvaluesthatwefoundwhenusingthatvalueofk.k_to_accuracies={}#################################################################################TODO:##Performk-foldcrossvalidationtofindthebestvalueofk.Foreach##possiblevalueofk,
runthek-nearest-neighboralgorithmnum_foldstimes,##whereineachcaseyouuseallbutoneofthefoldsastrainingdataandthe##lastfoldasavalidationset.Storetheaccuraciesforallfoldandall##valuesofkinthek_to_accuraciesdictionary.##################################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****forkink_choices:accuracies=[]foriinrange(num_folds):X_test_cv=X_train_folds[i]X_train_cv=np.vstack(X_train_folds[:i]+X_train_folds[i+1:])y_test_cv=y_train_folds[i]y_train_cv=np.hstack(y_train_folds[:i]+y_train_folds[i+1:])classifier.train(X_train_cv,y_train_cv)dists_cv=classifier.compute_distances_no_loops(X_test_cv)y_test_pred=classifier.predict_labels(dists_cv,k)num_correct=np.sum(y_test_pred==y_test_cv)accuracies.append(float(num_correct)*num_folds/num_training)k_to_accuracies[k]=accuraciespass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****#Printoutthecomputedaccuraciesforkinsorted(k_to_accuracies):foraccuracyink_to_accuracies[k]:print('k=%d,accuracy=%f'%(k,accuracy))Question3Whichofthefollowingstatementsabout𝑘-NearestNeighbor(𝑘-NN)aretrueinaclassificationsetting,andforall𝑘?Selectallthatapply.Thedecisionboundaryofthek-NNclassifierislinear.Thetrainingerrorofa1-NNwillalwaysbelowerthanthatof5-NN.Thetesterrorofa1-NNwillalwaysbelowerthanthatofa5-NN.Thetimeneededtoclassifyatestexamplewiththek-NNclassifiergrowswiththesizeofthetrainingset.Noneoftheabove.Y𝑜𝑢𝑟𝐴𝑛𝑠𝑤𝑒𝑟:4Y𝑜𝑢𝑟𝐸𝑥𝑝𝑙𝑎𝑛𝑎𝑡𝑖𝑜𝑛:TheKNNandboundaryarenon-linear,sothe1,2arewrong.Thetesterrorofa1-NNwillnotalwaysbelowerthanthatofa5-NN.SVMlinear_svm.pyfrombuiltinsimportrangeimportnumpyasnpfromrandomimportshufflefrompast.builtinsimportxrangedefsvm_loss_naive(W,X,y,reg):"""StructuredSVMlossfunction,naiveimplementation(withloops).InputshavedimensionD,thereareCclasses,andweoperateonminibatchesofNexamples.Inputs:-W:Anumpyarrayofshape(D,C)containingweights.-X:Anumpyarrayofshape(N,D)containingaminibatchofdata.-y:Anumpyarrayofshape(N,)containingtraininglabels;y[i]=cmeansthatX[i]haslabel
c,where0<=c<C.-reg:(float)regularizationstrengthReturnsatupleof:-lossassinglefloat-gradientwithrespecttoweightsW;anarrayofsameshapeasW"""dW=np.zeros(W.shape)#initializethegradientaszero#computethelossandthegradientnum_classes=W.shape[1]num_train=X.shape[0]loss=0.0foriinrange(num_train):scores=X[i].dot(W)correct_class_score=scores[y[i]]forjinrange(num_classes):ifj==y[i]:continuemargin=scores[j]-correct_class_score+1#notedelta=1ifmargin>0:loss+=margindW[:,y[i]]+=-X[i]#对应正确分类的梯度(D,)dW[:,j]+=X[i]#对应不正确分类的梯度#Rightnowthelossisasumoveralltrainingexamples,butwewantit#tobeanaverageinsteadsowedividebynum_train.loss/=num_traindW/=num_train#Addregularizationtotheloss.loss+=reg*np.sum(W*W)dW+=reg*W##############################################################################TODO:##ComputethegradientofthelossfunctionandstoreitdW.##Ratherthanfirstcomputingthelossandthencomputingthederivative,##itmaybesimplertocomputethederivativeatthesametimethatthe##lossisbeingcomputed.Asaresultyoumayneedtomodifysomeofthe##codeabovetocomputethegradient.###############################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returnloss,dWdefsvm_loss_vectorized(W,X,y,reg):"""StructuredSVMlossfunction,vectorizedimplementation.Inputsandoutputsarethesameassvm_loss_naive."""loss=0.0dW=np.zeros(W.shape)#initializethegradientaszero##############################################################################TODO:##ImplementavectorizedversionofthestructuredSVMloss,storingthe##resultinloss.###############################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****num_train=X.shape[0]scores=X.dot(W)margin=scores-scores[np.arange(num_train),y].reshape(num_train,1)+1margin[np.arange(num_train),y]=0.0#正确的这一列不该计算,归零margin=(margin>0)*marginloss+=margin.sum()/num_trainloss+=0.5*reg*np.sum(W*W)pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*
****##############################################################################TODO:##ImplementavectorizedversionofthegradientforthestructuredSVM##loss,storingtheresultindW.####Hint:Insteadofcomputingthegradientfromscratch,itmaybeeasier##toreusesomeoftheintermediatevaluesthatyouusedtocomputethe##loss.###############################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****margin=(margin>0)*1row_sum=np.sum(margin,axis=1)margin[np.arange(num_train),y]=-row_sumdW=X.T.dot(margin)/num_train+reg*Wpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returnloss,dWlinear_classifier.pyfrom__future__importprint_functionfrombuiltinsimportrangefrombuiltinsimportobjectimportnumpyasnpfromcs231n.classifiers.linear_svmimport*fromcs231n.classifiers.softmaximport*frompast.builtinsimportxrangeclassLinearClassifier(object):def__init__(self):self.W=Nonedeftrain(self,X,y,learning_rate=1e-3,reg=1e-5,num_iters=100,batch_size=200,verbose=False):"""Trainthislinearclassifierusingstochasticgradientdescent.Inputs:-X:Anumpyarrayofshape(N,D)containingtrainingdata;thereareNtrainingsampleseachofdimensionD.-y:Anumpyarrayofshape(N,)containingtraininglabels;y[i]=cmeansthatX[i]haslabel0<=c<CforCclasses.-learning_rate:(float)learningrateforoptimization.-reg:(float)regularizationstrength.-num_iters:(integer)numberofstepstotakewhenoptimizing-batch_size:(integer)numberoftrainingexamplestouseateachstep.-verbose:(boolean)Iftrue,printprogressduringoptimization.Outputs:Alistcontainingthevalueofthelossfunctionateachtrainingiteration."""num_train,dim=X.shapenum_classes=np.max(y)+1#assumeytakesvalues0...K-1whereKisnumberofclassesifself.WisNone:#lazilyinitializeWself.W=0.001*np.random.randn(dim,num_classes)#RunstochasticgradientdescenttooptimizeWloss_history=[]foritinrange(num_iters):X_batch=Noney_batch=None##########################################################################TODO:##Samplebatch_sizeelementsfromthetrainingdataandtheir##correspondin
glabelstouseinthisroundofgradientdescent.##StorethedatainX_batchandtheircorrespondinglabelsin##y_batch;aftersamplingX_batchshouldhaveshape(batch_size,dim)##andy_batchshouldhaveshape(batch_size,)####Hint:Usenp.random.choicetogenerateindices.Samplingwith##replacementisfasterthansamplingwithoutreplacement.###########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****mask=np.random.choice(num_train,batch_size,replace=False)#replace=False没有重复X_batch=X[mask]y_batch=y[mask]pass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****#evaluatelossandgradientloss,grad=self.loss(X_batch,y_batch,reg)loss_history.append(loss)#performparameterupdate##########################################################################TODO:##Updatetheweightsusingthegradientandthelearningrate.###########################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****self.W+=-learning_rate*gradpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****ifverboseandit%100==0:print('iteration%d/%d:loss%f'%(it,num_iters,loss))returnloss_historydefpredict(self,X):"""Usethetrainedweightsofthislinearclassifiertopredictlabelsfordatapoints.Inputs:-X:Anumpyarrayofshape(N,D)containingtrainingdata;thereareNtrainingsampleseachofdimensionD.Returns:-y_pred:PredictedlabelsforthedatainX.y_predisa1-dimensionalarrayoflengthN,andeachelementisanintegergivingthepredictedclass."""y_pred=np.zeros(X.shape[0])############################################################################TODO:##Implementthismethod.Storethepredictedlabelsiny_pred.#############################################################################*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****score=X.dot(self.W)index=np.zeros(X.shape[0])index=np.argmax(score,axis=1)y_pred=indexpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****returny_preddefloss(self,X_batch,y_batch,reg):"""Computethelossfunctionanditsderivative.Subclasseswillov
erridethis.Inputs:-X_batch:Anumpyarrayofshape(N,D)containingaminibatchofNdatapoints;eachpointhasdimensionD.-y_batch:Anumpyarrayofshape(N,)containinglabelsfortheminibatch.-reg:(float)regularizationstrength.Returns:Atuplecontaining:-lossasasinglefloat-gradientwithrespecttoself.W;anarrayofthesameshapeasW"""passclassLinearSVM(LinearClassifier):"""AsubclassthatusestheMulticlassSVMlossfunction"""defloss(self,X_batch,y_batch,reg):returnsvm_loss_vectorized(self.W,X_batch,y_batch,reg)classSoftmax(LinearClassifier):"""AsubclassthatusestheSoftmax+Cross-entropylossfunction"""defloss(self,X_batch,y_batch,reg):returnsoftmax_loss_vectorized(self.W,X_batch,y_batch,reg)补充代码#Usethevalidationsettotunehyperparameters(regularizationstrengthand#learningrate).Youshouldexperimentwithdifferentrangesforthelearning#ratesandregularizationstrengths;ifyouarecarefulyoushouldbeableto#getaclassificationaccuracyofabout0.39onthevalidationset.#Note:youmayseeruntime/overflowwarningsduringhyper-parametersearch.#Thismaybecausedbyextremevalues,andisnotabug.#resultsisdictionarymappingtuplesoftheform#(learning_rate,regularization_strength)totuplesoftheform#(training_accuracy,validation_accuracy).Theaccuracyissimplythefraction#ofdatapointsthatarecorrectlyclassified.results={}best_val=-1#Thehighestvalidationaccuracythatwehaveseensofar.best_svm=None#TheLinearSVMobjectthatachievedthehighestvalidationrate.#################################################################################TODO:##Writecodethatchoosesthebesthyperparametersbytuningonthevalidation##set.Foreachcombinationofhyperparameters,trainalinearSVMonthe##trainingset,computeitsaccuracyonthetrainingandvalidationsets,and##storethesenumbersintheresultsdictionary.Inaddition,storethebest##validationaccuracyinbest_valandtheLinearSVMobjectthatachievesthis##accuracyinbest_svm.####Hint:Youshoulduseasmallvaluefornum_itersasyoudevelopyour##validationcodesothattheSVMsdon'ttakemuchtimetotrain;onceyouare##confidentthatyourvalidationcodeworks,youshouldrerunthevali
dation##codewithalargervaluefornum_iters.##################################################################################Providedasareference.Youmayormaynotwanttochangethesehyperparameterslearning_rates=[1e-7,5e-5]regularization_strengths=[2.5e4,5e4]#*****STARTOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****iters=2000#100forlrinlearning_rates:forrsinregularization_strengths:svm=LinearSVM()svm.train(X_train,y_train,learning_rate=lr,reg=rs,num_iters=iters)y_train_pred=svm.predict(X_train)acc_train=np.mean(y_train==y_train_pred)y_val_pred=svm.predict(X_val)acc_val=np.mean(y_val==y_val_pred)results[(lr,rs)]=(acc_train,acc_val)ifbest_val<acc_val:best_val=acc_valbest_svm=svmpass#*****ENDOFYOURCODE(DONOTDELETE/MODIFYTHISLINE)*****#Printoutresults.forlr,reginsorted(results):train_accuracy,val_accuracy=results[(lr,reg)]print('lr%ereg%etrainaccuracy:%fvalaccuracy:%f'%(lr,reg,train_accuracy,val_accuracy))print('bestvalidationaccuracyachievedduringcross-validation:%f'%best_val)Question2Inlinequestion2DescribewhatyourvisualizedSVMweightslooklike,andofferabriefexplanationforwhytheylooktheywaythattheydo.Y𝑜𝑢𝑟𝐴𝑛𝑠𝑤𝑒𝑟:Theylooklikecorrespodingpicturebutinblurry.SotheweightsarethetemplateineachclassSoftMaxsoftmax.pyfrombuiltinsimportrangeimportnumpyasnpfromrandomimportshufflefrompast.builtinsimportxrangedefsoftmax_loss_naive(W,X,y,reg):"""Softmaxlossfunction,naiveimplementation(withloops)InputshavedimensionD,thereareCclasses,andweoperateonminibatchesofNexamples.Inputs:-W:Anumpyarrayofshape(D,C)containingweights.-X:Anumpyarrayofshape(N,D)containingaminibatchofdata.-y:Anumpyarrayofshape(N,)containingtraininglabels;y[i]=cmeansthatX[i]haslabelc,where0<=c<C.-reg:(float)regularizationstrengthReturnsatupleof:-lossassinglefloat-gradientwithrespecttoweightsW;anarrayofsameshapeasW"""#Initializethelossandgradienttozero.loss=0.0dW=np.zeros_like(W)##############################################################################TODO:Computethesoftmaxlossanditsgradientusingexplicitloops.##Storethelossinloss
    # and the gradient in dW. If you are not careful
    # here, it is easy to run into numeric instability. Don't forget the
    # regularization!
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_classes = W.shape[1]
    num_train = X.shape[0]
    for i in range(num_train):
        scores = X[i].dot(W)
        scores = scores - np.max(scores)
        scores_exp = np.exp(scores)  # exponentiate the shifted scores
        # derivative of the scores with respect to the weights
        ds_w = np.repeat(X[i], num_classes).reshape(-1, num_classes)
        scores_exp_sum = np.sum(scores_exp)
        pk = scores_exp[y[i]] / scores_exp_sum
        loss += -np.log(pk)
        dl_s = np.zeros(W.shape)  # derivative of the loss with respect to the scores
        for j in range(num_classes):
            if j == y[i]:
                # the correct-class column has a different derivative from the others
                dl_s[:, j] = pk - 1
            else:
                dl_s[:, j] = scores_exp[j] / scores_exp_sum
        dW_i = ds_w * dl_s
        dW += dW_i
    loss /= num_train
    dW /= num_train
    loss += reg * np.sum(W * W)
    dW += W * 2 * reg
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    return loss, dW


def softmax_loss_vectorized(W, X, y, reg):
    """
    Softmax loss function, vectorized version.

    Inputs and outputs are the same as softmax_loss_naive.
    """
    # Initialize the loss and gradient to zero.
    loss = 0.0
    dW = np.zeros_like(W)

    ############################################################################
    # TODO: Compute the softmax loss and its gradient using no explicit loops.
    # Store the loss in loss and the gradient in dW. If you are not careful
    # here, it is easy to run into numeric instability. Don't forget the
    # regularization!
    ############################################################################
    # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
    num_classes = W.shape[1]
    num_train = X.shape[0]
    scores = X.dot(W)
    scores = scores - np.max(scores, 1, keepdims=True)
    scores_exp = np.exp(scores)
    sum_s = np.sum(scores_exp, 1, keepdims=True)
    p = scores_exp / sum_s
    loss = np.sum(-np.log(p[np.arange(num_train), y]))
    ind = np.zeros_like(p)
    ind[np.arange(num_train), y] = 1
    dW = X.T.dot(p - ind)
    loss /= num_train
    dW /= num_train
    loss += reg * np.sum(W * W)
    dW += W * 2 * reg
    # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

    return loss, dW

Question 1

Why do we expect our loss to be close to -log(0.1)? Explain briefly.

Your Answer: With small random weights the predicted class distribution is roughly uniform, so the loss per example is about -log(1/C) = -log(0.1) for C = 10 classes.

Supplement

# Use the validation set to tune hyperparameters (regularization strength and
# learning rate). You should experiment with different ranges for the learning
# rates and regularization strengths; if you are careful you should be able to
# get a classification accuracy of over 0.35 on the validation set.

from cs231n.classifiers import Softmax

results = {}
best_val = -1
best_softmax = None

################################################################################
# TODO:
# Use the validation set to set the learning rate and regularization strength.
# This should be identical to the validation that you did for the SVM; save
# the best trained softmax classifier in best_softmax.
################################################################################

# Provided as a reference. You may or may not want to change these hyperparameters
learning_rates = [1e-7, 5e-7]
regularization_strengths = [2.5e4, 5e4]

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
    for reg in regularization_strengths:
        softmax = Softmax()
        loss_hist = softmax.train(X_train, y_train, lr, reg,
                                  num_iters=500, verbose=True)
        y_train_pred = softmax.predict(X_train)
        acc_tr = np.mean(y_train == y_train_pred)
        y_val_pred = softmax.predict(X_val)
        acc_val = np.mean(y_val == y_val_pred)
        results[(lr, reg)] = (acc_tr, acc_val)
        if best_val < acc_val:
            best_val = acc_val
            best_softmax = softmax
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

Question 2

Inline Question 2 - True or False

Suppose the overall training loss is defined as the sum of the per-datapoint loss over all training examples. It is possible to add a new datapoint to a training set that would leave the SVM loss unchanged, but this is not the case with the Softmax classifier loss.

Your Answer: True

Your Explanation: A new datapoint whose correct-class score beats every other score by the margin contributes exactly zero hinge loss, so the SVM total can stay unchanged. The softmax loss of any datapoint is always strictly positive, so adding a point always increases the total.

two_layer_net

neural_net.py

from __future__ import print_function
from builtins import range
from builtins import object
import numpy as np
import matplotlib.pyplot as plt
from past.builtins import xrange


class TwoLayerNet(object):
    """
    A two-layer fully-connected neural network. The net has an input dimension
    of N, a hidden layer dimension of H, and performs classification over C
    classes.

    We train the network with a softmax loss function and L2 regularization on
    the weight matrices. The network uses a ReLU nonlinearity after the first
    fully connected layer.

    In other words, the network has the following architecture:

    input - fully connected layer - ReLU - fully connected layer - softmax

    The outputs of the second fully-connected layer are the scores for each
    class.
    """

    def __init__(self, input_size, hidden_size, output_size, std=1e-4):
        """
        Initialize the model. Weights are initialized to small random values and
        biases are initialized to zero. Weights and biases are stored in the
        variable self.params, which is a dictionary with the following keys:

        W1: First layer weights; has shape (D, H)
        b1: First layer biases; has shape (H,)
        W2: Second layer weights; has shape (H, C)
        b2: Second layer biases; has shape (C,)

        Inputs:
        - input_size: The dimension D of the input data.
        - hidden_size: The number of neurons H in the hidden layer.
        - output_size: The number of classes C.
        """
        self.params = {}
        self.params['W1'] = std * np.random.randn(input_size, hidden_size)
        self.params['b1'] = np.zeros(hidden_size)
        self.params['W2'] = std * np.random.randn(hidden_size, output_size)
        self.params['b2'] = np.zeros(output_size)

    def loss(self, X, y=None, reg=0.0):
        """
        Compute the loss and gradients for a two layer fully connected neural
        network.

        Inputs:
        - X: Input data of shape (N, D). Each X[i] is a training sample.
        - y: Vector of training labels. y[i] is the label for X[i], and each
          y[i] is an integer in the range 0 <= y[i] < C. This parameter is
          optional; if it is not passed then we only return scores, and if it
          is passed then we instead return the loss and gradients.
        - reg: Regularization strength.

        Returns:
        If y is None, return a matrix scores of shape (N, C) where scores[i, c]
        is the score for class c on input X[i].

        If y is not None, instead return a tuple of:
        - loss: Loss (data loss and regularization loss) for this batch of
          training samples.
        - grads: Dictionary mapping parameter names to gradients of those
          parameters with respect to the loss function; has the same keys as
          self.params.
        """
        # Unpack variables from the params dictionary
        W1, b1 = self.params['W1'], self.params['b1']
        W2, b2 = self.params['W2'], self.params['b2']
        N, D = X.shape

        # Compute the forward pass
        scores = None
        ########################################################################
        # TODO: Perform the forward pass, computing the class scores for the
        # input. Store the result in the scores variable, which should be an
        # array of shape (N, C).
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        Z1 = X.dot(W1) + b1
        A1 = np.maximum(0, Z1)
        scores = A1.dot(W2) + b2
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # If the targets are not given then jump out, we're done
        if y is None:
            return scores

        # Compute the loss
        loss = None
        ########################################################################
        # TODO: Finish the forward pass, and compute the loss. This should
        # include both the data loss and L2 regularization for W1 and W2. Store
        # the result in the variable loss, which should be a scalar. Use the
        # Softmax classifier loss.
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        scores -= np.max(scores, axis=1, keepdims=True)
        exp_scores = np.exp(scores)
        probs = exp_scores / np.sum(exp_scores, axis=1, keepdims=True)
        y_label = np.zeros((N, probs.shape[1]))
        y_label[np.arange(N), y] = 1
        loss = (-1) * np.sum(np.multiply(np.log(probs), y_label)) / N
        loss += reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        # Backward pass: compute gradients
        grads = {}
        ########################################################################
        # TODO: Compute the backward pass, computing the derivatives of the
        # weights and biases. Store the results in the grads dictionary. For
        # example, grads['W1'] should store the gradient on W1, and be a matrix
        # of same size
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        dZ2 = probs - y_label
        dW2 = A1.T.dot(dZ2)
        dW2 /= N
        dW2 += 2 * reg * W2
        db2 = np.sum(dZ2, axis=0) / N
        dZ1 = (dZ2).dot(W2.T) * (A1 > 0)
        dW1 = X.T.dot(dZ1) / N + 2 * reg * W1
        db1 = np.sum(dZ1, axis=0) / N
        grads['W2'] = dW2
        grads['b2'] = db2
        grads['W1'] = dW1
        grads['b1'] = db1
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        return loss, grads

    def train(self, X, y, X_val, y_val,
              learning_rate=1e-3, learning_rate_decay=0.95,
              reg=5e-6, num_iters=100,
              batch_size=200, verbose=False):
        """
        Train this neural network using stochastic gradient descent.

        Inputs:
        - X: A numpy array of shape (N, D) giving training data.
        - y: A numpy array of shape (N,) giving training labels; y[i] = c means
          that X[i] has label c, where 0 <= c < C.
        - X_val: A numpy array of shape (N_val, D) giving validation data.
        - y_val: A numpy array of shape (N_val,) giving validation labels.
        - learning_rate: Scalar giving learning rate for optimization.
        - learning_rate_decay: Scalar giving factor used to decay the learning
          rate after each epoch.
        - reg: Scalar giving regularization strength.
        - num_iters: Number of steps to take when optimizing.
        - batch_size: Number of training examples to use per step.
        - verbose: boolean; if true print progress during optimization.
        """
        num_train = X.shape[0]
        iterations_per_epoch = max(num_train / batch_size, 1)

        # Use SGD to optimize the parameters in self.model
        loss_history = []
        train_acc_history = []
        val_acc_history = []

        for it in range(num_iters):
            X_batch = None
            y_batch = None
            ####################################################################
            # TODO: Create a random minibatch of training data and labels,
            # storing them in X_batch and y_batch respectively.
            ####################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            batch_inx = np.random.choice(num_train, batch_size)
            X_batch = X[batch_inx, :]
            y_batch = y[batch_inx]
            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            # Compute loss and gradients using the current minibatch
            loss, grads = self.loss(X_batch, y=y_batch, reg=reg)
            loss_history.append(loss)

            ####################################################################
            # TODO: Use the gradients in the grads dictionary to update the
            # parameters of the network (stored in the dictionary self.params)
            # using stochastic gradient descent. You'll need to use the
            # gradients stored in the grads dictionary defined above.
            ####################################################################
            # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
            self.params['W1'] -= learning_rate * grads['W1']
            self.params['b1'] -= learning_rate * grads['b1']
            self.params['W2'] -= learning_rate * grads['W2']
            self.params['b2'] -= learning_rate * grads['b2']
            # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

            if verbose and it % 100 == 0:
                print('iteration %d / %d: loss %f' % (it, num_iters, loss))

            # Every epoch, check train and val accuracy and decay learning rate.
            if it % iterations_per_epoch == 0:
                # Check accuracy
                train_acc = (self.predict(X_batch) == y_batch).mean()
                val_acc = (self.predict(X_val) == y_val).mean()
                train_acc_history.append(train_acc)
                val_acc_history.append(val_acc)

                # Decay learning rate
                learning_rate *= learning_rate_decay

        return {
            'loss_history': loss_history,
            'train_acc_history': train_acc_history,
            'val_acc_history': val_acc_history,
        }

    def predict(self, X):
        """
        Use the trained weights of this two-layer network to predict labels for
        data points. For each data point we predict scores for each of the C
        classes, and assign each data point to the class with the highest score.

        Inputs:
        - X: A numpy array of shape (N, D) giving N D-dimensional data points
          to classify.

        Returns:
        - y_pred: A numpy array of shape (N,) giving predicted labels for each
          of the elements of X. For all i, y_pred[i] = c means that X[i] is
          predicted to have class c, where 0 <= c < C.
        """
        y_pred = None
        ########################################################################
        # TODO: Implement this function; it should be VERY simple!
        ########################################################################
        # *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
        score = self.loss(X)
        y_pred = np.argmax(score, axis=1)
        # *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

        return y_pred

Supplement

best_net = None  # store the best model into this

results = {}
best_val = -1
learning_rates = [1.2e-3, 1.5e-3, 1.75e-3]
regularization_strengths = [1, 1.25, 1.5, 2]

################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained
# model in best_net.
#
# To help debug your network, it may help to use visualizations similar to the
# ones we used above; these visualizations will have significant qualitative
# differences from the ones we saw above for the poorly tuned network.
#
# Tweaking hyperparameters by hand can be fun, but you might find it useful to
# write code to sweep through possible combinations of hyperparameters
# automatically like we did on the previous exercises.
################################################################################

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
    for reg in regularization_strengths:
        net = TwoLayerNet(input_size, hidden_size, num_classes)
        loss_hist = net.train(X_train, y_train, X_val, y_val,
                              num_iters=1000, batch_size=200,
                              learning_rate=lr, learning_rate_decay=0.95,
                              reg=reg, verbose=False)
        y_train_pred = net.predict(X_train)
        y_val_pred = net.predict(X_val)
        y_train_acc = np.mean(y_train_pred == y_train)
        y_val_acc = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = [y_train_acc, y_val_acc]
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_net = net

for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

Question

Now that you have trained a Neural Network classifier, you may find that your testing accuracy is much lower than the training accuracy. In what ways can we decrease this gap? Select all that apply.

1. Train on a larger dataset.
2. Add more hidden units.
3. Increase the regularization strength.
4. None of the above.

Your Answer: 1, 3

Your Explanation: A larger dataset and a stronger regularization both reduce overfitting, which shrinks the gap between training and test accuracy.

5. feature

Supplement 1

# Use the validation set to tune the learning rate and regularization strength

from cs231n.classifiers.linear_classifier import LinearSVM

learning_rates = [1e-9, 1e-8, 1e-7]
regularization_strengths = [5e4, 5e5, 5e6]

results = {}
best_val = -1
best_svm = None

################################################################################
# TODO:
# Use the validation set to set the learning rate and regularization strength.
# This should be identical to the validation that you did for the SVM; save
# the best
# trained classifier in best_svm. You might also want to play
# with different numbers of bins in the color histogram. If you are careful
# you should be able to get accuracy of near 0.44 on the validation set.
################################################################################

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
for lr in learning_rates:
    for reg in regularization_strengths:
        svm = LinearSVM()
        loss_hist = svm.train(X_train_feats, y_train, learning_rate=lr,
                              reg=reg, num_iters=1500, verbose=True)
        y_train_pred = svm.predict(X_train_feats)
        y_val_pred = svm.predict(X_val_feats)
        y_train_acc = np.mean(y_train_pred == y_train)
        y_val_acc = np.mean(y_val_pred == y_val)
        results[(lr, reg)] = [y_train_acc, y_val_acc]
        if y_val_acc > best_val:
            best_val = y_val_acc
            best_svm = svm
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****

# Print out results.
for lr, reg in sorted(results):
    train_accuracy, val_accuracy = results[(lr, reg)]
    print('lr %e reg %e train accuracy: %f val accuracy: %f' % (
        lr, reg, train_accuracy, val_accuracy))
print('best validation accuracy achieved during cross-validation: %f' % best_val)

Supplement 2

from cs231n.classifiers.neural_net import TwoLayerNet

input_dim = X_train_feats.shape[1]
hidden_dim = 500
num_classes = 10

net = TwoLayerNet(input_dim, hidden_dim, num_classes)
best_net = None

################################################################################
# TODO: Train a two-layer neural network on image features. You may want to
# cross-validate various parameters as in previous sections. Store your best
# model in the best_net variable.
################################################################################

# *****START OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
best_acc = -1
learning_rate = [1e-2, 1e-1, 5e-1]
regulations = [5e-3, 1e-2, 1e-1, 0.5]
for lr in learning_rate:
    for reg in regulations:
        # Re-initialize the network for each hyperparameter combination so that
        # a run does not continue training the previous model.
        net = TwoLayerNet(input_dim, hidden_dim, num_classes)
        stats = net.train(X_train_feats, y_train, X_val_feats, y_val,
                          num_iters=1000, batch_size=200,
                          learning_rate=lr, learning_rate_decay=0.95,
                          reg=reg, verbose=True)
        val_acc = (net.predict(X_val_feats) == y_val).mean()
        if val_acc > best_acc:
            best_acc = val_acc
            best_net = net
        print('lr =', lr, 'reg =', reg, 'acc =', val_acc)
print('best_acc:', best_acc)
# *****END OF YOUR CODE (DO NOT DELETE/MODIFY THIS LINE)*****
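As a quick sanity check for Question 1 above (why the initial loss should be close to -log(0.1)): with near-zero weights the softmax output is roughly uniform over the C classes, so the mean loss is about -log(1/C). A minimal self-contained sketch, using made-up toy shapes rather than the CIFAR-10 data from the assignment:

```python
import numpy as np

np.random.seed(0)
N, D, C = 100, 50, 10                 # toy minibatch: 100 samples, 50 dims, 10 classes
W = 0.0001 * np.random.randn(D, C)    # small random weights, as in the notebook setup
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

# Same numerically stable scheme as softmax_loss_vectorized above:
# shift scores by the row max before exponentiating.
scores = X.dot(W)
scores -= scores.max(axis=1, keepdims=True)
probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
loss = -np.log(probs[np.arange(N), y]).mean()

print(loss)   # should be close to -log(1/10) ≈ 2.3026
```

Because the scores are on the order of 1e-4, every row of `probs` is essentially uniform at 0.1, and the loss lands within a fraction of a percent of log(10).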

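The backward pass in `TwoLayerNet.loss` can be verified with a centered-difference numerical gradient check, the same idea the assignment notebooks apply. Below is a hedged, self-contained re-implementation of the forward/backward computation on toy data; the `loss_and_grads` and `num_grad` helpers and the shapes are made up for this sketch and are not part of the cs231n API:

```python
import numpy as np

def loss_and_grads(params, X, y, reg=0.05):
    # Forward/backward for input - affine - ReLU - affine - softmax,
    # mirroring TwoLayerNet.loss above (minimal re-implementation).
    W1, b1, W2, b2 = params['W1'], params['b1'], params['W2'], params['b2']
    N = X.shape[0]
    Z1 = X.dot(W1) + b1
    A1 = np.maximum(0, Z1)
    scores = A1.dot(W2) + b2
    scores -= scores.max(axis=1, keepdims=True)
    probs = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
    loss = -np.log(probs[np.arange(N), y]).mean()
    loss += reg * (np.sum(W1 * W1) + np.sum(W2 * W2))
    dZ2 = probs.copy()
    dZ2[np.arange(N), y] -= 1     # (probs - one_hot)
    dZ2 /= N
    grads = {'W2': A1.T.dot(dZ2) + 2 * reg * W2,
             'b2': dZ2.sum(axis=0)}
    dZ1 = dZ2.dot(W2.T) * (Z1 > 0)   # ReLU gate
    grads['W1'] = X.T.dot(dZ1) + 2 * reg * W1
    grads['b1'] = dZ1.sum(axis=0)
    return loss, grads

def num_grad(f, x, h=1e-5):
    # Centered-difference numerical gradient of the scalar loss at array x;
    # perturbs one entry at a time and restores it afterwards.
    g = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'])
    while not it.finished:
        i = it.multi_index
        old = x[i]
        x[i] = old + h; fp, _ = f()
        x[i] = old - h; fm, _ = f()
        x[i] = old
        g[i] = (fp - fm) / (2 * h)
        it.iternext()
    return g

np.random.seed(1)
D, H, C, N = 4, 5, 3, 6   # toy sizes so the numerical check is cheap
params = {'W1': 1e-1 * np.random.randn(D, H), 'b1': np.zeros(H),
          'W2': 1e-1 * np.random.randn(H, C), 'b2': np.zeros(C)}
X = np.random.randn(N, D)
y = np.random.randint(C, size=N)

loss, grads = loss_and_grads(params, X, y)
for name in params:
    g_num = num_grad(lambda: loss_and_grads(params, X, y), params[name])
    print(name, np.max(np.abs(g_num - grads[name])))   # max abs difference per parameter
```

If the backward pass is correct, each printed difference is tiny; a large value for one parameter points directly at the corresponding gradient expression.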