Preface
This is a course project; the requirements are as follows.
Workflow
The basic idea: insert the data into MySQL, then query MySQL.
The frontend uses a few buttons to switch between functions and display the results, and is deployed directly to Cloudflare Pages, which also lets us use the Cloudflare D1 database directly.
Dataset source
Analyzing the dataset format and creating the table
The data format is as follows:

| Title | Author | Publisher | Keywords | Abstract | Chinese Library Classification number | Publication date |
| --- | --- | --- | --- | --- | --- | --- |
Written as MySQL statements, creating the table looks like this:
CREATE DATABASE booksmanager;
use booksmanager;
CREATE TABLE books (
name VARCHAR(200) NOT NULL,
author VARCHAR(8000) NOT NULL,
publisher VARCHAR(200),
keyword VARCHAR(200),
excerpt TEXT,
category_num VARCHAR(100),
publish_date VARCHAR(100)
);
But don't connect to MySQL and create the table right away. First preprocess the data with Python, measure the column lengths, then create the table and insert the rows. A Python example follows:
# @Time : 2024/12/7 15:06
# @Author : TwoOnefour
# @blog : https://www.pursuecode.cn
# @Email : twoonefour@pursuecode.cn
# @File : importdata.py
from mysqlhelper import SQLHelper
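# SQLHelper is a small custom database helper (not shown here); it is assumed to wrap a
# MySQL driver such as pymysql and expose create() for DDL and bulk_modify() for batched
# parameterized inserts.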
import csv
db = SQLHelper()
data = []
len_data = [0] * 7
with open('中文图书数据集.csv', newline='', encoding="utf-8") as csvfile:
reader = csv.reader(csvfile)
# Iterate through the rows
for row in reader:
now = []
for index, col in enumerate(row):
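# cells containing "null" are treated as missing and stored as the placeholder "-1"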
if "null" in col:
now.append("-1")
else:
now.append(col)
len_data[index] = max(len_data[index], len(col))
# record the longest value seen in each column (used below to size the VARCHAR columns)
data.append(now)
db.create(f"CREATE TABLE books (name VARCHAR({len_data[0] * 2}), author VARCHAR({len_data[1] * 2}), publisher VARCHAR({len_data[2] * 2}), keyword VARCHAR({len_data[3] * 2}), excerpt VARCHAR({len_data[4] * 2}), category_num VARCHAR({len_data[5] * 2}), publish_date VARCHAR({len_data[6] * 2}))")
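# data[0] is the CSV header row, so only data[1:] is inserted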
db.bulk_modify("insert into books (name, author, publisher, keyword, excerpt, category_num, publish_date) values (%s, %s, %s, %s, %s, %s, %s)", data[1:])
That completes the table setup.
Writing the queries
Just write MySQL statements that implement the logic required by the assignment.
Get the i-th book
select * from books limit i, 1
Note that the index starts at 0: for example, limit 2, 1 returns the third book.
Find a book
A simple implementation is enough; refining this much further would actually be fairly hard.
Input:
- Title (required)
- Author (optional)
- Publisher (optional)
Output:
To keep the SQL lean, there is no need to return every column, only the title, author, publisher, and publication date.
Fuzzy match:
select name,author,publisher,publish_date from books where name like "%道家文化与中医学%";
Exact match:
select name,author,publisher,publish_date from books where name = "道家文化与中医学";
Insert a book
insert into books (name, author, publisher, keyword, excerpt, category_num, publish_date) values (%s, %s, %s, %s, %s, %s, %s)
with your row data bound to the %s placeholders.
Delete a book
Fuzzy match:
select name,author,publisher,publish_date from books where name like "%道家文化与中医学%";
Exact match:
select name,author,publisher,publish_date from books where name = "道家文化与中医学";
Look the book up first, then delete; the deletion itself is always by exact title:
delete from books where name = "道家文化与中医学";
List all books
Nothing special here: a plain select * from books, which the Worker later pages through with LIMIT.
Web page implementation
It looks roughly like this:
<!DOCTYPE html>
<html>
<head>
<title>图书管理系统</title>
<style>
body {
font-family: sans-serif;
}
.container {
width: 300px;
margin: 0 auto;
text-align: center;
padding-top: 50px;
}
button {
margin-top: 20px;
padding: 10px 20px;
font-size: 16px;
background-color: #4CAF50;
color: white;
border: none;
cursor: pointer;
}
</style>
</head>
<body>
<div class="container">
<h1>图书管理系统</h1>
<button onclick="window.location.href='index_book.html'">按书索引</button>
<button onclick="window.location.href='find_book.html'">查</button>
<button onclick="window.location.href='delete_book.html'">删</button>
<button onclick="window.location.href='insert_book.html'">插入</button>
<button onclick="window.location.href='all_books.html'">所有图书</button>
</div>
</body>
</html>
Importing the database into Cloudflare D1
First, dump the SQL file:
>mysqldump -u twoonefour -p booksmanager > booksmanager.sql
Note that this MySQL dump is not compatible with D1, so don't use this SQL file for the import.
Instead, manually add the following to the Python import script from earlier, so that it also writes out D1-compatible SQL:
with open("data.sql", "w", encoding="utf-8") as f:
sql1 = f"CREATE TABLE books (name VARCHAR({len_data[0] * 2}), author VARCHAR({len_data[1] * 2}), publisher VARCHAR({len_data[2] * 2}), keyword VARCHAR({len_data[3] * 2}), excerpt VARCHAR({len_data[4] * 2}), category_num VARCHAR({len_data[5] * 2}), publish_date VARCHAR({len_data[6] * 2}))"
sqls = [sql1]
f.write(sql1 + "\n")
for i in data[1:]:
s = "insert into books (name, author, publisher, keyword, excerpt, category_num, publish_date) values ('%s', '%s', '%s', '%s', '%s', '%s', '%s')" % tuple(i)
sqls.append(s)
f.write(s + "\n")
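# Caveat: values are interpolated with %s and single quotes are not escaped, so a field
# containing ' would break the generated INSERT; the dataset is assumed not to contain such values.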
Create the database:
npx wrangler d1 create booksmanager --remote
Import the data (using the data.sql file generated above, not the mysqldump):
npx wrangler d1 execute booksmanager --remote --file=data.sql
Add the D1 binding to wrangler.toml (the binding name DB is what the Worker code reads as env.DB):
[[d1_databases]]
binding = "DB"
database_name = "booksmanager"
database_id = "xxx1-4139-411a-a4da-44650d1exxc"
Based on the SQL statements above, create an index.js for the Worker backend.
Backend Worker design
export default {
async fetch(request, env) {
const { pathname, searchParams } = new URL(request.url);
let headers = {
'Access-Control-Allow-Origin': '*', // Or your specific origin
'content-type': 'application/json;charset=UTF-8',
}
let dbresults = [];
if (pathname === "/api/select_all") {
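// paginated listing: "index" is the row offset and each page returns 10 rows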
const { results } = await env.DB.prepare(
"SELECT * FROM books LIMIT ?, 10",
)
.bind(searchParams.get("index"))
.all();
dbresults = results
}
if (pathname === "/api/select_book_by_index"
&& searchParams.has("index")
){
const { results } = await env.DB.prepare(
"SELECT * FROM books LIMIT ?, 1",
)
.bind(searchParams.get("index"))
.all();
dbresults = results;
}
if (pathname === "/api/select_book"
&& searchParams.has("book_name")
){
const { results } = await env.DB.prepare(
"select * from books where name like ?;",
)
.bind(`%${searchParams.get("book_name")}%`)
.all();
dbresults = results;
}
if (pathname === "/api/precise_select_book"
&& searchParams.has("book_name")
){
const { results } = await env.DB.prepare(
"select * from books where name = ?;",
)
.bind(`${searchParams.get("book_name")}`)
.all();
dbresults = results;
}
if (pathname === "/api/precise_delete_book"
&& searchParams.has("book_name")
){
// look the book up first; the rows come back under the "results" property of .all()
const { results } = await env.DB.prepare(
"select * from books where name = ?;",
)
.bind(`${searchParams.get("book_name")}`)
.all();
if (results.length !== 0) {
await env.DB.prepare(
"delete from books where name = ?;",
)
.bind(`${searchParams.get("book_name")}`)
.run();
return Response.json({code: 200, data: null, msg: "删除成功"}, {
headers: headers
});
}
else{
return Response.json({code: 500, data: null, msg: "未找到书籍"}, {
headers: headers
});
}
}
if (pathname === "/api/insert_book"
&& searchParams.has("name")
&& searchParams.has("author")
){
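// name and author are required; optional fields missing from the query string are bound as null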
try {
await env.DB.prepare(
"insert into books (name, author, publisher, keyword, excerpt, category_num, publish_date) values (?, ?, ?, ?, ?, ?, ?)",
)
.bind(
searchParams.get("name"),
searchParams.get("author"),
searchParams.get("publisher"),
searchParams.get("keyword"),
searchParams.get("excerpt"),
searchParams.get("category_num"),
searchParams.get("publish_date")
)
.all();
return Response.json({code: 200, data: null, msg: "插入成功"}, {
headers: headers
});
}
catch (error){
return Response.json({code: 500, data: null, msg: "插入失败,请检查参数"}, {
headers: headers
});
}
}
if (dbresults.length !== 0){
const responseObject = {
code: 200,
data: dbresults
};
let resp = Response.json(responseObject, {
headers: headers
});
return resp;
}
else
return Response.json({code: 404, data: null}, {
headers: headers
});
},
}
The API endpoints are as follows:

| endpoint | description | params | method |
| --- | --- | --- | --- |
| /api/select_all | list all books (paged) | index=123 | GET |
| /api/select_book_by_index | get a book by its index | index=xxx | GET |
| /api/select_book | fuzzy search by title | book_name=xxxx | GET |
| /api/precise_select_book | exact lookup by title | book_name=xxx | GET |
| /api/precise_delete_book | delete a book by exact title | book_name=xxx | GET |
| /api/insert_book | add a book | name, author, publisher, keyword, excerpt, category_num, publish_date | GET |
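As a quick sanity check, the write endpoints can be exercised from JavaScript with a couple of fetch calls. This is only a minimal sketch: the base URL, the test book values, and the expected responses in the comments are placeholders that assume the Worker above is deployed as-is.

// exercise the insert and delete endpoints; WORKER_BASE is a placeholder for the deployed Worker URL
const WORKER_BASE = "https://booksmanager.example.workers.dev";

async function insertThenDelete() {
  // all fields are passed as query-string parameters, matching the GET-based API above
  const params = new URLSearchParams({
    name: "测试图书",
    author: "测试作者",
    publisher: "测试出版社",
    keyword: "测试",
    excerpt: "仅用于接口测试",
    category_num: "-1",
    publish_date: "2024-12",
  });
  const insertResp = await fetch(`${WORKER_BASE}/api/insert_book?${params}`);
  console.log(await insertResp.json()); // expected: { code: 200, data: null, msg: "插入成功" }

  // remove the test book again by exact title
  const deleteResp = await fetch(
    `${WORKER_BASE}/api/precise_delete_book?book_name=${encodeURIComponent("测试图书")}`
  );
  console.log(await deleteResp.json()); // expected: { code: 200, data: null, msg: "删除成功" }
}

insertThenDelete();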
Frontend static page design
I'm really not familiar with frontend work, so this part mostly relied on GPT.
The main job was rendering the query results into a table (for the book search), roughly as sketched below.
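A minimal sketch of that rendering step, assuming a table element with id "result" on the page and the /api/select_book endpoint above; the element id and base URL are placeholders, not the names used in the actual pages:

// fetch books matching a title and render them into <table id="result">
const WORKER_BASE = "https://booksmanager.example.workers.dev"; // placeholder Worker URL

async function searchAndRender(bookName) {
  const resp = await fetch(
    `${WORKER_BASE}/api/select_book?book_name=${encodeURIComponent(bookName)}`
  );
  const { code, data } = await resp.json();
  const table = document.getElementById("result");
  table.innerHTML = ""; // clear any previous results

  if (code !== 200 || !data) {
    table.innerHTML = "<tr><td>未找到书籍</td></tr>";
    return;
  }

  // header row; only the four columns chosen earlier are shown
  const columns = ["name", "author", "publisher", "publish_date"];
  const header = document.createElement("tr");
  columns.forEach((col) => {
    const th = document.createElement("th");
    th.textContent = col;
    header.appendChild(th);
  });
  table.appendChild(header);

  // one row per book; textContent avoids injecting raw HTML from the data
  data.forEach((book) => {
    const tr = document.createElement("tr");
    columns.forEach((col) => {
      const td = document.createElement("td");
      td.textContent = book[col];
      tr.appendChild(td);
    });
    table.appendChild(tr);
  });
}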
Once it was written, I just uploaded it to Pages and assigned it a domain.
Demo
For the final preview, please head over to:
Video demo