Setting Up a Censorship-Circumvention Server [teddysun-SSR]
Censorship-Circumvention Client Tools
A Script for Installing Censorship-Circumvention Software on a Server
How to Bypass Internet Censorship Correctly
Calling a Third-Party API from a Spring Boot Maven Project
Using Spring's RestTemplate
Add the dependency:

```xml
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-web</artifactId>
</dependency>
```
HttpClient utility class (truncated in the original excerpt):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;
import java.util.Map;

@Component
public class HttpClient {

    @Autowired
    private ...
```
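Since the listing is cut off, here is a minimal sketch of what such a RestTemplate-backed utility typically looks like, together with the bean definition it needs (spring-boot-starter-web does not register a RestTemplate bean by itself). Everything beyond the imports and class skeleton visible above is an assumption, including the config class and the method names:

```java
import java.util.Map;

import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.boot.web.client.RestTemplateBuilder;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.stereotype.Component;
import org.springframework.web.client.RestTemplate;

// Registers the RestTemplate bean that HttpClient injects
// (class name is illustrative, not from the original post).
@Configuration
class RestTemplateConfig {
    @Bean
    public RestTemplate restTemplate(RestTemplateBuilder builder) {
        return builder.build();
    }
}

@Component
public class HttpClient {

    @Autowired
    private RestTemplate restTemplate;

    // GET the third-party URL and map the JSON response onto a Map
    public Map get(String url) {
        return restTemplate.getForObject(url, Map.class);
    }

    // POST a request body and map the JSON response onto a Map
    public Map post(String url, Object body) {
        return restTemplate.postForObject(url, body, Map.class);
    }
}
```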
Scraping Ganji.com Data in Python with Requests
Preface: The previous two request articles scraped fixed, unique tag ids from article pages. Here I scrape a bit of data from the rental listings on Ganji.com; the overall approach is much the same as scraping a novel site.
Code (truncated in the original excerpt):

```python
# coding:utf-8
import requests
from lxml import etree
import pymysql

# Fetch the page source
url = 'http://sh.ganji.com/zufang/'
req = requests.get(url)
selector = etree.HTML(req.content)
# Listing links
link = selector.xpath('//*[@class="f-list-item ershoufang-list"]/dl/dd/a/@href')
# Titles
title = selector.xpath('// ...
```
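Because the listing stops mid-expression, a minimal runnable sketch of the same flow follows. The title XPath and the printing at the end are assumptions; the full original also appears to have written rows into MySQL via pymysql:

```python
# coding:utf-8
import requests
from lxml import etree

url = 'http://sh.ganji.com/zufang/'
req = requests.get(url)
selector = etree.HTML(req.content)

# Listing links (XPath taken from the excerpt above)
links = selector.xpath('//*[@class="f-list-item ershoufang-list"]/dl/dd/a/@href')
# Listing titles (this XPath is an assumption; adjust to the live page)
titles = selector.xpath('//*[@class="f-list-item ershoufang-list"]/dl/dd/a/text()')

# Print each title with its link
for href, text in zip(links, titles):
    print(text.strip(), href)
```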
Extracting Page Source in Python with a Requests Crawler
Getting the page source you see in developer mode (F12). There are two ways to import request; either way, the full page source is then fetched through request.
Method 1: import request with from urllib import request
```python
from urllib import request

# Target URL
url = "http://sh.ganji.com/zufang/"
# Open the URL
req = request.urlopen(url)
# Read the response
html = req.read()

# Decode as UTF-8
html = html.decode("utf-8")
# Print the source
print(html)
```
Method 2: import request with import urllib.request
```python
import urllib.request

# Target URL
url = "http://sh.ganji.com/zufang/"
# Open the URL
req = urllib.request.urlopen(url)
# Read the response and decode as UTF-8
html = req.read()
html = html.decode("utf-8")
# Print the source
print(html)
```
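Both snippets use the standard-library urllib rather than the third-party Requests package the title names. For comparison, the same fetch with requests would look roughly like this (a sketch, not part of the original post):

```python
import requests

# Target URL
url = "http://sh.ganji.com/zufang/"
resp = requests.get(url)
# Force UTF-8 decoding before reading the text
resp.encoding = "utf-8"
# Print the source
print(resp.text)
```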
How to Extract Tag Content from a Local HTML Page in Java with Jsoup
Add the Maven dependency:
```xml
<dependency>
    <groupId>org.jsoup</groupId>
    <artifactId>jsoup</artifactId>
    <version>1.10.2</version>
</dependency>
```
Code (truncated in the original excerpt):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class JsoupTest {

    /**
     * Read the HTML file
     * ...
```
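A minimal sketch of the read-then-parse flow the truncated class appears to implement; the file path, the helper method name, and the tag being selected are assumptions:

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;

import org.jsoup.Jsoup;
import org.jsoup.nodes.Document;
import org.jsoup.select.Elements;

public class JsoupTest {

    /**
     * Read the HTML file into a single string.
     */
    public static String readHtml(String path) throws IOException {
        StringBuilder sb = new StringBuilder();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                sb.append(line).append('\n');
            }
        }
        return sb.toString();
    }

    public static void main(String[] args) throws IOException {
        // Path to the local HTML page (illustrative)
        Document doc = Jsoup.parse(readHtml("test.html"));
        // Select elements by tag name and print their text
        Elements titles = doc.getElementsByTag("title");
        System.out.println(titles.text());
    }
}
```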
Installing Python 3.8 on Linux and Configuring pip and yum
Preface: My Alibaba Cloud server runs CentOS 7, which ships with python-2.7.5; I need to install python-3.8.1 alongside it.

So the question is how to install python 3.8.1 without removing python 2.7.5. First, check the current version:
```bash
python -V
```
Installing Python 3.8.1: go to the official Python website and download a suitable release (Python official download page).
```bash
# Unpack
tar -zxf Python-3.8.1.tgz
# Install build dependencies
yum install zlib-devel bzip2-devel openssl-devel ncurses-devel sqlite-devel readline-devel tk-devel gcc libffi-devel
# Enter the source directory
cd Python-3.8.1
# Configure the build
./configure --prefix=/usr/local/python3
# Compile and install
make && make install
```
Back up the system's default python

The version that shipped with my system is python 2.7.5; to avoid a filename clash, I rename the original binary as a backup before pointing python at the new install, roughly as sketched below.
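A common way to do this on CentOS 7 (a sketch of the usual steps, not the post's exact commands; paths assume the --prefix used above):

```bash
# Back up the stock python 2.7.5 binary
mv /usr/bin/python /usr/bin/python.bak
# Point python and pip at the new 3.8.1 install
ln -s /usr/local/python3/bin/python3 /usr/bin/python
ln -s /usr/local/python3/bin/pip3 /usr/bin/pip
# yum depends on python 2, so keep it on the backed-up interpreter:
# edit /usr/bin/yum and /usr/libexec/urlgrabber-ext-down, changing the
# shebang from #!/usr/bin/python to #!/usr/bin/python.bak
```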
Extracting Article Summaries with Natural Language Processing (NLP)
Preface: For convenience, I have simply collected several summary-extraction methods found online, without analyzing the code. I tested all of them successfully, but their outputs differ, so compare the methods side by side before choosing one.
Java, using Classifier4J: supports English extraction only, not Chinese. This method requires classifier4J.jar:
Classifier4J-0.6.zip
```java
import net.sf.classifier4J.summariser.ISummariser;
import net.sf.classifier4J.summariser.SimpleSummariser;

public class Classifier4J {
    public static void main1(String[] args) {
        String str = "Here is the content of the article";
        //SimpleSummariser s = new ...
```
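The excerpt stops mid-line; a minimal sketch of how SimpleSummariser is typically used follows. The sample text and the choice of 2 summary sentences are illustrative assumptions:

```java
import net.sf.classifier4J.summariser.ISummariser;
import net.sf.classifier4J.summariser.SimpleSummariser;

public class Classifier4JDemo {
    public static void main(String[] args) {
        String str = "Here is the content of the article. It has several sentences. "
                + "The summariser picks out the most significant ones.";
        // SimpleSummariser implements the ISummariser interface
        ISummariser summariser = new SimpleSummariser();
        // Keep the 2 most significant sentences as the summary
        String summary = summariser.summarise(str, 2);
        System.out.println(summary);
    }
}
```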